feat(x/deployment): migrate store to collections #2042
Conversation
Walkthrough

Refactors x/deployment storage to collections-backed IndexedMaps with new composite keys and indexes, adds a v1.2.0 deployment migration (v5→v6) converting legacy KV prefixes into IndexedMaps, removes several legacy v1.0/v1.1 upgrade migration files, updates gRPC queries to index-based offset/reverse pagination, and upgrades dependencies and tests.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 6
🧹 Nitpick comments (8)
x/deployment/keeper/keys/key.go (1)
3-14: Consider guarding exported mutable byte-slice prefixes against in-place mutation.

Exported `var []byte` globals can be mutated in place by any package (e.g. `keys.DeploymentPrefixNew[0] = 0x00`), silently corrupting all subsequent store operations. `collections.NewPrefix` copies the bytes on construction, so it is safe at that call site, but raw use elsewhere is not.

♻️ Suggested alternatives

Option 1 — unexported constants with exported accessors (zero-allocation, immutable):

```diff
-var (
-	DeploymentPrefix           = []byte{0x11, 0x00}
-	GroupPrefix                = []byte{0x12, 0x00}
-	DeploymentPrefixNew        = []byte{0x11, 0x01}
-	DeploymentIndexStatePrefix = []byte{0x11, 0x02}
-	GroupPrefixNew             = []byte{0x12, 0x01}
-	GroupIndexStatePrefix      = []byte{0x12, 0x02}
-	GroupIndexDeploymentPrefix = []byte{0x12, 0x03}
-)
+var (
+	DeploymentPrefix           = []byte{0x11, 0x00}
+	GroupPrefix                = []byte{0x12, 0x00}
+	DeploymentPrefixNew        = []byte{0x11, 0x01}
+	DeploymentIndexStatePrefix = []byte{0x11, 0x02}
+	GroupPrefixNew             = []byte{0x12, 0x01}
+	GroupIndexStatePrefix      = []byte{0x12, 0x02}
+	GroupIndexDeploymentPrefix = []byte{0x12, 0x03}
+)
```

Option 2 — wrap each in `collections.NewPrefix(...)` at declaration so callers receive a safe prefix type rather than a raw slice:

```go
var DeploymentPrefixNew = collections.NewPrefix([]byte{0x11, 0x01})
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/keys/key.go` around lines 3-14, exported mutable byte-slice prefixes (DeploymentPrefix, GroupPrefix, DeploymentPrefixNew, DeploymentIndexStatePrefix, GroupPrefixNew, GroupIndexStatePrefix, GroupIndexDeploymentPrefix) can be mutated in place; change them so callers cannot modify the underlying bytes, either by converting each declaration to a safe prefix type via collections.NewPrefix(...) at definition time or by making the raw []byte unexported and providing exported accessor functions that return a copy (or a read-only wrapper) of the slice; update the declarations and any callers that reference these symbols to use the new prefix type or accessors accordingly so the underlying bytes remain immutable.

x/deployment/genesis.go (2)
37-37: Type assertion bypasses the `IKeeper` interface.

The cast `kpr.(*keeper.Keeper)` is necessary because `Deployments()` and `Groups()` are not on the `IKeeper` interface. This is fragile — any alternative `IKeeper` implementation (e.g., mocks in tests) will panic here. Consider adding `Deployments()` and `Groups()` accessors to `IKeeper`, or adding dedicated `ImportDeployment`/`ImportGroup` methods to the interface so genesis doesn't need the concrete type.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/genesis.go` at line 37, the code performs a concrete cast `k := kpr.(*keeper.Keeper)`, which bypasses the IKeeper interface and will panic for alternate implementations; update the IKeeper interface to expose the needed operations (add Deployments() and Groups() accessors or, better, add explicit ImportDeployment(…) and ImportGroup(…) methods) and then change the genesis ingestion code to call those interface methods (referencing kpr and the genesis import logic) instead of casting to *keeper.Keeper, so mocks and other IKeeper implementations work safely.
52-62: Group storage in genesis skips the duplicate check, unlike deployments.

Deployments (lines 41–47) are checked for duplicates via `Has` before `Set`, panicking on conflict. Groups are stored directly without a `Has` check. For defensive genesis init, consider adding a duplicate guard for groups as well.

Proposed fix

```diff
 gpk := keys.GroupIDToKey(group.ID)
+ghas, err := k.Groups().Has(ctx, gpk)
+if err != nil {
+	panic(fmt.Errorf("deployment genesis init. group id %s: %w", group.ID, err))
+}
+if ghas {
+	panic(fmt.Errorf("deployment genesis init. group id %s: group already exists", group.ID))
+}
 if err := k.Groups().Set(ctx, gpk, group); err != nil {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/genesis.go` around lines 52-62, the group loop stores groups without checking for duplicates; mirror the deployments guard by checking `k.Groups().Has(ctx, gpk)` before calling `k.Groups().Set` and panic on conflict. Specifically, inside the loop over record.Groups (using group, gpk, k.Groups()), add a Has check, e.g. `if k.Groups().Has(ctx, gpk) { panic(fmt.Errorf("duplicate deployment genesis group %s", group.ID)) }`, then proceed to Set, matching the duplicate-detection pattern used for deployments.

x/deployment/keeper/grpc_query.go (2)
63-112: Custom pagination is correct but may be slow for owner-filtered queries on large datasets.

The current approach iterates all deployments of a given state via the state index, then applies owner/dseq filters in memory (`Accept`). For chains with many deployments, filtering by owner will scan the entire state partition.

If owner-based queries are common, consider adding an owner or `(owner, state)` composite index to allow direct lookups. This isn't a blocker for the current migration but is worth tracking.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/grpc_query.go` around lines 63-112, the current loop uses the state index (`k.deployments.Indexes.State.MatchExact`) and then applies `req.Filters.Accept` in memory, which causes full-state scans for owner-filtered queries; modify the query path to detect when the request includes an owner filter and, when present, use a new owner or (owner, state) composite index (e.g., `k.deployments.Indexes.Owner` or `k.deployments.Indexes.OwnerState` with MatchExact) to iterate only matching entries, falling back to the existing state index and `req.Filters.Accept` when no owner filter is provided; update the iterator selection logic before calling `indexes.ScanValues` and ensure downstream code (including `ekeeper.GetAccount`, `k.GetGroups`, and the assembly of `types.QueryDeploymentResponse`) continues to work unchanged.
73-105: Error propagation via the closure variable `acctErr` works but is fragile.

The `ScanValues` callback captures `acctErr` to signal escrow-fetch failures. If future edits add more error paths inside the callback, this pattern can easily lose errors. A wrapper struct or breaking the loop differently would be safer, but the current code is functionally correct.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/grpc_query.go` around lines 73-105, the callback currently captures and mutates the outer variable acctErr, which is fragile; instead declare a local error variable (e.g., acctErrLocal) immediately before calling indexes.ScanValues, set acctErrLocal inside the callback when `k.ekeeper.GetAccount(ctx, ...)` fails, return true to stop scanning, and after ScanValues returns check acctErrLocal and wrap/assign it to acctErr (or return it) so error propagation does not rely on closure mutation; reference ScanValues, the callback, acctErr (replace with acctErrLocal), and k.ekeeper.GetAccount to locate and change the code.

upgrades/software/v1.2.0/deployment.go (1)
1-3: Nolint directive format is non-standard.

The directive should follow the standard `//nolint:revive` format (no space, colon separator).

Fix

```diff
-// nolint revive
+//nolint:revive
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` around lines 1-3, the nolint directive above the package declaration uses a non-standard format "// nolint revive"; update it to the canonical form without a space and with a colon ("//nolint:revive") so the linter correctly recognizes the rule suppression; locate the top-of-file comment near the package v1_2_0 declaration and replace the directive accordingly.

x/deployment/keeper/keeper.go (2)
369-376: `WithDeployments` panics on iteration error.

This is consistent with `GetGroups`, but it means a storage error during iteration will crash the node. Since `WithDeployments` is used in `ExportGenesis` (which is called during node shutdown/export), a panic here could prevent clean state export. Consider returning an error instead, or at minimum wrapping the panic with a clear message (which is already done on line 374 — good).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/keeper.go` around lines 369-376, the WithDeployments method currently panics on iteration errors; change Keeper.WithDeployments to return an error instead of panicking (signature: `func (k Keeper) WithDeployments(ctx sdk.Context, fn func(v1.Deployment) bool) error`), replace the panic block with `return fmt.Errorf("WithDeployments iteration failed: %w", err)`, and update callers (notably ExportGenesis) to handle and propagate/log the error so storage-iteration failures don't crash the node during export or shutdown; ensure the Walk wrapper still calls fn and returns nil on success.
102-110: `Deployments()` and `Groups()` accessors expose internal storage.

These are documented as being for genesis and migration use. Since they return mutable `*IndexedMap` pointers, any caller can directly mutate state, bypassing keeper business logic (validation, events, telemetry). This is acceptable for genesis/migration, but consider whether they should be on the `IKeeper` interface or remain concrete-type-only as they are now.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/keeper.go` around lines 102-110, the Deployments() and Groups() accessors expose mutable *collections.IndexedMap pointers which can bypass keeper logic; remove these methods from the IKeeper interface (leave them on the concrete Keeper type only) so only code with a concrete Keeper can call them for genesis/migration, or alternatively make them unexported (deployments() / groups()) if they only need package-local access; update any callers to use the concrete Keeper where necessary and keep the existing return types if you want direct map access for genesis/migration.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/upgrade/workers_test.go`:
- Around line 169-178: The test comment says "with pagination and CountTotal"
but the PageRequest passed to pu.cl.Query().Deployment().Deployments only sets
Limit: 1; either change the comment to remove CountTotal or set CountTotal: true
on the sdkquery.PageRequest and add an assertion on pagedResp.Pagination.Total;
locate the PageRequest construction in the test (the PageRequest literal passed
into Deployments) and either update the comment string or add CountTotal: true
and a require.Equal/require.Greater/require.NotZero assertion against
pagedResp.Pagination.Total to validate total counting.
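The CountTotal behavior the workers_test prompt asks to assert can be illustrated with a tiny stand-in paginator. `pageResponse` and `queryDeployments` are hypothetical names; the real types are `sdkquery.PageRequest`/`PageResponse`:

```go
package main

import "fmt"

// pageResponse is a stand-in for sdkquery.PageResponse: Total is only
// populated when the request asked for it via CountTotal.
type pageResponse struct {
	Items []string
	Total uint64
}

// queryDeployments sketches a query honoring Limit and CountTotal
// (shape is illustrative, not the real gRPC API).
func queryDeployments(all []string, limit int, countTotal bool) pageResponse {
	if limit > len(all) {
		limit = len(all)
	}
	resp := pageResponse{Items: all[:limit]}
	if countTotal {
		resp.Total = uint64(len(all))
	}
	return resp
}

func main() {
	all := []string{"dep-1", "dep-2", "dep-3"}

	resp := queryDeployments(all, 1, true)
	// The assertion the review asks for: one page item, but Total reflects
	// the full result set because CountTotal was requested.
	fmt.Println(len(resp.Items) == 1, resp.Total == 3)
}
```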
In `@upgrades/software/v1.2.0/deployment.go`:
- Line 43: The code unguardedly asserts skey.(*storetypes.KVStoreKey) when
calling runtime.NewKVStoreService, which can panic; change the logic where you
call m.StoreKey() and construct ssvc (the runtime.NewKVStoreService call) to
perform a comma-ok type assertion against *storetypes.KVStoreKey, check the
boolean, and return or log a descriptive error if the assertion fails (e.g.,
"expected *storetypes.KVStoreKey from m.StoreKey() but got %T"); ensure ssvc is
only constructed when the assertion succeeds so no runtime panic occurs.
In `@x/deployment/keeper/keeper.go`:
- Around line 139-154: GetDeployment and GetGroup currently treat any error from
k.deployments.Get / k.groups.Get as "not found"; change each to explicitly check
if errors.Is(err, collections.ErrNotFound) and return the zero value and false
only in that case, while for any other error log the unexpected storage/decoding
error (include err) via the keeper logger (e.g. k.Logger(ctx).Error or
ctx.Logger().Error) and then return the zero value and false; reference the
symbols GetDeployment, GetGroup, k.deployments.Get, k.groups.Get,
keys.DeploymentIDToKey, keys.GroupIDToKey, and collections.ErrNotFound when
making the change.
In `@x/deployment/simulation/proposals.go`:
- Around line 48-52: The MinDeposits sdk.Coins are appended in the wrong order
causing validation failure; in x/deployment/simulation/proposals.go update the
coin append sequence so sdk.NewInt64Coin("uact", ...) is appended before
sdk.NewInt64Coin("uakt", ...) (i.e., ensure "uact" precedes "uakt") before
assigning to params.MinDeposits on the types.DefaultParams() object; also mirror
the same corrected order in the similar occurrence in
x/take/simulation/proposals.go (where DenomTakeRates is set) for consistency.
In `@x/take/simulation/proposals.go`:
- Around line 49-51: Update the stale comment that currently reads "uakt must
always be present" to mention both denominations by changing it to "uakt and
uact must always be present" near the block that appends coins (the lines that
call sdk.NewInt64Coin for "uakt" and "uact" using simtypes.RandIntBetween);
ensure the comment is colocated with those append statements so it matches the
parallel change in x/deployment/simulation/proposals.go.
---
Nitpick comments:
In `@upgrades/software/v1.2.0/deployment.go`:
- Around line 1-3: The nolint directive above the package declaration is using a
non-standard format "// nolint revive"; update it to the canonical form without
space and with a colon ("//nolint:revive") so the linter correctly recognizes
the rule suppression; locate the top-of-file comment near the package v1_2_0
declaration and replace the directive accordingly.
In `@x/deployment/genesis.go`:
- Line 37: The code performs a concrete cast k := kpr.(*keeper.Keeper), which
bypasses the IKeeper interface and will panic for alternate implementations;
update the IKeeper interface to expose the needed operations (add Deployments()
and Groups() accessors or, better, add explicit ImportDeployment(…) and
ImportGroup(…) methods) and then change genesis ingestion code to call those
interface methods (referencing kpr and the genesis import logic) instead of
casting to *keeper.Keeper so mocks and other IKeeper implementations work
safely.
- Around line 52-62: The group loop stores groups without checking for
duplicates; mirror the deployments guard by checking k.Groups().Has(ctx, gpk)
before calling k.Groups().Set and panic on conflict. Specifically, inside the
loop over record.Groups (using group, gpk, k.Groups()), add a Has check e.g. if
k.Groups().Has(ctx, gpk) { panic(fmt.Errorf("duplicate deployment genesis group
%s", group.ID)) } then proceed to Set, matching the duplicate detection pattern
used for deployments.
In `@x/deployment/keeper/grpc_query.go`:
- Around line 63-112: The current loop uses the state index
(k.deployments.Indexes.State.MatchExact) and then applies req.Filters.Accept
in-memory which causes full-state scans for owner-filtered queries; modify the
query path to detect when the request includes an owner filter and, when
present, use a new owner or (owner,state) composite index (e.g.,
k.deployments.Indexes.Owner or k.deployments.Indexes.OwnerState with MatchExact)
to iterate only matching entries, falling back to the existing state index and
req.Filters.Accept when no owner filter is provided; update the iterator
selection logic before calling indexes.ScanValues and ensure downstream code
(including ekeeper.GetAccount, k.GetGroups and the assembly of
types.QueryDeploymentResponse) continues to work unchanged.
- Around line 73-105: The callback currently captures and mutates the outer
variable acctErr which is fragile; instead declare a local error variable (e.g.,
acctErrLocal) immediately before calling indexes.ScanValues, set acctErrLocal
inside the callback when k.ekeeper.GetAccount(ctx, ...) fails, return true to
stop scanning, and after ScanValues returns check acctErrLocal and wrap/assign
it to acctErr (or return it) so error propagation does not rely on closure
mutation; reference ScanValues, the callback, acctErr (replace with
acctErrLocal) and k.ekeeper.GetAccount to locate and change the code.
In `@x/deployment/keeper/keeper.go`:
- Around line 369-376: The WithDeployments method currently panics on iteration
errors; change Keeper.WithDeployments to return an error instead of panicking
(signature: func (k Keeper) WithDeployments(ctx sdk.Context, fn
func(v1.Deployment) bool) error), replace the panic block with return
fmt.Errorf("WithDeployments iteration failed: %w", err), and update callers
(notably ExportGenesis) to handle and propagate/log the error so storage
iteration failures don't crash the node during export or shutdown; ensure the
Walk wrapper still calls fn and returns nil on success.
- Around line 102-110: The Deployments() and Groups() accessors expose mutable
*collections.IndexedMap pointers which can bypass keeper logic; remove these
methods from the IKeeper interface (leave them on the concrete Keeper type only)
so only code with a concrete Keeper can call them for genesis/migration, or
alternatively make them unexported (deployments() / groups()) if they only need
package-local access; update any callers to use the concrete Keeper where
necessary and keep the existing return types if you want direct map access for
genesis/migration.
In `@x/deployment/keeper/keys/key.go`:
- Around line 3-14: Exported mutable byte-slice prefixes (DeploymentPrefix,
GroupPrefix, DeploymentPrefixNew, DeploymentIndexStatePrefix, GroupPrefixNew,
GroupIndexStatePrefix, GroupIndexDeploymentPrefix) can be mutated in-place;
change them so callers cannot modify the underlying bytes by either converting
each declaration to a safe prefix type via collections.NewPrefix(...) at
definition time or make the raw []byte unexported and provide exported accessor
functions that return a copy (or a read-only wrapper) of the slice; update the
declarations and any callers that reference these symbols to use the new prefix
type or accessors accordingly so the underlying bytes remain immutable.
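The in-place mutation hazard behind the key.go comment above can be demonstrated with a self-contained sketch. The `prefix` type and `newPrefix` are illustrative stand-ins mimicking the copy-on-construction behavior attributed to `collections.NewPrefix`, not the real API:

```go
package main

import (
	"bytes"
	"fmt"
)

// Exported mutable prefix, mirroring the flagged declarations
// (byte values taken from the review; everything else is a stand-in).
var DeploymentPrefixNew = []byte{0x11, 0x01}

// prefix copies the bytes at construction, so later mutation of the
// source slice cannot reach it.
type prefix struct{ b []byte }

func newPrefix(src []byte) prefix {
	cp := make([]byte, len(src))
	copy(cp, src)
	return prefix{b: cp}
}

func main() {
	safe := newPrefix(DeploymentPrefixNew)
	raw := DeploymentPrefixNew // aliases the same backing array

	DeploymentPrefixNew[0] = 0x00 // the in-place mutation any package could perform

	fmt.Println("raw corrupted:", bytes.Equal(raw, []byte{0x00, 0x01}))
	fmt.Println("copy intact:  ", bytes.Equal(safe.b, []byte{0x11, 0x01}))
}
```

The aliased slice observes the corruption while the copied prefix does not, which is exactly why raw use of the exported slices elsewhere is risky.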
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (21)
- app/types/app.go
- go.mod
- tests/upgrade/test-cases.json
- tests/upgrade/workers_test.go
- upgrades/software/v1.2.0/deployment.go
- upgrades/software/v1.2.0/init.go
- x/deployment/genesis.go
- x/deployment/keeper/grpc_query.go
- x/deployment/keeper/grpc_query_test.go
- x/deployment/keeper/indexes.go
- x/deployment/keeper/keeper.go
- x/deployment/keeper/key.go
- x/deployment/keeper/keys/deployment.go
- x/deployment/keeper/keys/group.go
- x/deployment/keeper/keys/key.go
- x/deployment/keeper/keys/types.go
- x/deployment/module.go
- x/deployment/simulation/genesis.go
- x/deployment/simulation/proposals.go
- x/market/handler/handler_test.go
- x/take/simulation/proposals.go
💤 Files with no reviewable changes (2)
- app/types/app.go
- x/deployment/keeper/key.go
5a0f914 to 2d55114
Actionable comments posted: 4
♻️ Duplicate comments (1)
upgrades/software/v1.2.0/deployment.go (1)
43-43: Unguarded type assertion on `StoreKey` may panic — previously flagged.

This concern was already raised in a prior review: `skey.(*storetypes.KVStoreKey)` will panic if `m.StoreKey()` returns a non-`*KVStoreKey` implementation. Please apply the comma-ok fix from the earlier comment.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` at line 43, replace the unguarded type assertion on skey so it cannot panic: when calling `runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey))`, first perform a comma-ok type assertion on skey (from m.StoreKey()) to check it is a *storetypes.KVStoreKey, handle the false case by logging/returning an error (or falling back), and only call runtime.NewKVStoreService when the assertion succeeds; update references around skey, m.StoreKey(), runtime.NewKVStoreService, and storetypes.KVStoreKey accordingly.
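The comma-ok guard requested above can be sketched with stdlib-only stand-ins. `storeKey`, `kvStoreKey`, `memStoreKey`, and `newKVStoreService` are hypothetical names, not the real cosmos-sdk types; only the assertion pattern itself is the point:

```go
package main

import "fmt"

// Hypothetical stand-ins for storetypes.StoreKey and *storetypes.KVStoreKey.
type storeKey interface{ Name() string }

type kvStoreKey struct{ name string }

func (k *kvStoreKey) Name() string { return k.name }

type memStoreKey struct{ name string }

func (k memStoreKey) Name() string { return k.name }

// newKVStoreService guards the assertion: instead of skey.(*kvStoreKey)
// panicking on an unexpected implementation, it returns a descriptive error.
func newKVStoreService(skey storeKey) (*kvStoreKey, error) {
	kv, ok := skey.(*kvStoreKey)
	if !ok {
		return nil, fmt.Errorf("expected *kvStoreKey from StoreKey() but got %T", skey)
	}
	return kv, nil
}

func main() {
	if kv, err := newKVStoreService(&kvStoreKey{name: "deployment"}); err == nil {
		fmt.Println("ok:", kv.Name())
	}
	if _, err := newKVStoreService(memStoreKey{name: "mem"}); err != nil {
		fmt.Println("guarded:", err)
	}
}
```

A migration that hits the error path can abort with a clear message instead of taking the whole upgrade down with a panic.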
🧹 Nitpick comments (1)
x/deployment/genesis.go (1)
52-63: Groups lack the duplicate-existence check that deployments have.

Deployments guard against duplicates with `Has()` (lines 41–47), but groups proceed directly to `Set()` without checking. For a well-formed genesis this is harmless, but the asymmetry means a corrupt genesis state with duplicate group IDs silently overwrites the earlier entry instead of failing loudly. Consider adding the same guard for symmetry and defensive correctness.

🛡️ Proposed fix

```diff
 gpk := keys.GroupIDToKey(group.ID)
+hasGroup, err := k.Groups().Has(ctx, gpk)
+if err != nil {
+	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
+}
+if hasGroup {
+	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, v1.ErrInvalidGroupID))
+}
 if err := k.Groups().Set(ctx, gpk, group); err != nil {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@x/deployment/genesis.go` around lines 52-63, the groups loop in genesis (iterating record.Groups and using keys.GroupIDToKey + k.Groups().Set) lacks the duplicate-existence guard that deployments use; add a Has check before Set by calling `k.Groups().Has(ctx, gpk)` and, if it returns true, panic with a clear duplicate-id error (mirroring the deployment guard pattern), otherwise proceed to `k.Groups().Set`; update the panic message to include the group ID (e.g. `panic(fmt.Errorf("duplicate group id %s", group.ID))`) so duplicates fail loudly.
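The Has-before-Set guard proposed above can be sketched with a map-backed stand-in for the collections Groups store. `groupStore` and `importGroups` are illustrative, and the sketch returns an error for testability where the genesis code panics:

```go
package main

import "fmt"

// groupStore is a map-backed stand-in for the collections-backed Groups map.
type groupStore struct{ m map[string]string }

func (s *groupStore) Has(id string) bool { _, ok := s.m[id]; return ok }
func (s *groupStore) Set(id, v string)   { s.m[id] = v }

// importGroups mirrors the proposed guard: check Has before Set and fail
// loudly on a duplicate instead of silently overwriting the earlier entry.
func importGroups(s *groupStore, groups [][2]string) error {
	for _, g := range groups {
		id, val := g[0], g[1]
		if s.Has(id) {
			return fmt.Errorf("duplicate group id %s", id)
		}
		s.Set(id, val)
	}
	return nil
}

func main() {
	s := &groupStore{m: map[string]string{}}
	fmt.Println(importGroups(s, [][2]string{{"g1", "a"}, {"g2", "b"}}))
	fmt.Println(importGroups(s, [][2]string{{"g1", "overwrite"}}))
	fmt.Println("g1 preserved:", s.m["g1"] == "a")
}
```

Without the `Has` check the second import would silently replace `g1`; with it, the corrupt input is rejected and the original entry survives.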
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/upgrade/workers_test.go`:
- Around line 123-133: The test currently only checks that activeLeasesResp and
closedLeasesResp are non-nil but doesn't verify that returned leases are
actually filtered; update the assertions after calling
pu.cl.Query().Market().Leases to iterate over activeLeasesResp.Leases and assert
each lease.State == "active", and likewise iterate over closedLeasesResp.Leases
and assert each lease.State == "closed"; apply the same state-value assertions
for the other similar block (the one around lines 162-168) so every returned
lease from Query().Market().Leases is validated against the requested Filter
state.
In `@upgrades/software/v1.2.0/deployment.go`:
- Around line 65-77: Replace the panic-prone cdc.MustUnmarshal calls in the
migration loops with cdc.Unmarshal and propagate errors: in the deployment loop
where MustUnmarshal(diter.Value(), &deployment) is used, call
cdc.Unmarshal(diter.Value(), &deployment), check the returned error and return a
wrapped error (e.g., fmt.Errorf("failed to unmarshal deployment %s: %w",
deployment.ID, err)); do the same in the group loop (replace MustUnmarshal on
the group iterator value into the group variable), returning a descriptive
wrapped error on failure so corrupted legacy bytes fail the migration cleanly
instead of panicking.
In `@x/deployment/genesis.go`:
- Around line 36-37: InitGenesis currently does an unsafe cast k :=
kpr.(*keeper.Keeper) which will panic for any non-*keeper.Keeper implementation;
replace this by either changing the function signature to accept *keeper.Keeper
directly (if the interface abstraction isn't needed) or, preferably, perform a
guarded type assertion: do k, ok := kpr.(*keeper.Keeper) and handle the !ok case
(return early with an error/log or convert to using only IKeeper methods);
alternatively ensure IKeeper exposes the needed methods (Deployments(),
Groups()) and use them instead so no concrete cast is necessary.
In `@x/deployment/keeper/grpc_query_test.go`:
- Around line 239-244: The pagination offset test uses QueryDeploymentsRequest
with PageRequest{Offset:1, Limit:1} but only asserts len==1, so update the test
in grpc_query_test.go to validate offset behavior by first calling
QueryDeploymentsRequest with Offset:0, Limit:1 to capture the first deployment's
unique identifier (e.g., resp.Items[0].Id or resp.Items[0].DeploymentId), then
call the existing Offset:1 request and assert that the returned deployment's
identifier is different from the Offset:0 result (or equals the second expected
record), ensuring QueryDeploymentsRequest, v1beta4.DeploymentFilters and
sdkquery.PageRequest honor Offset; apply the same stronger assertion to the
other case flagged (lines 257-260).
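The stronger offset assertion requested for grpc_query_test.go can be illustrated with a minimal stand-in paginator (`paginate` is hypothetical; the real query goes through `sdkquery.PageRequest`):

```go
package main

import "fmt"

// paginate is a stand-in for offset/limit pagination over an ordered index.
func paginate(items []string, offset, limit int) []string {
	if offset >= len(items) {
		return nil
	}
	end := offset + limit
	if end > len(items) {
		end = len(items)
	}
	return items[offset:end]
}

func main() {
	deployments := []string{"dep-1", "dep-2", "dep-3"}

	first := paginate(deployments, 0, 1)
	second := paginate(deployments, 1, 1)

	// The assertion suggested above: the offset=1 page must return a
	// different record than the offset=0 page, proving Offset is honored.
	fmt.Println(first[0], second[0], first[0] != second[0])
}
```

Checking only `len(result) == 1` would pass even if the offset were ignored; comparing the records from two offsets does not.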
---
Duplicate comments:
In `@upgrades/software/v1.2.0/deployment.go`:
- Line 43: Replace the unguarded type assertion on skey so it cannot panic: when
calling runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey)), first perform
a comma-ok type assertion on skey (from m.StoreKey()) to check it is a
*storetypes.KVStoreKey, handle the false case by logging/returning an error (or
falling back) and only call runtime.NewKVStoreService when the assertion
succeeds; update references around skey, m.StoreKey(), runtime.NewKVStoreService
and storetypes.KVStoreKey accordingly.
---
Nitpick comments:
In `@x/deployment/genesis.go`:
- Around line 52-63: The groups loop in genesis (iterating record.Groups and
using keys.GroupIDToKey + k.Groups().Set) lacks the duplicate-existence guard
that deployments use; add a Has check before Set by calling k.Groups().Has(ctx,
gpk) and if it returns true panic with a clear duplicate-id error (mirroring the
deployment guard pattern), otherwise proceed to k.Groups().Set; update the panic
message to include the group ID (e.g. panic(fmt.Errorf("duplicate group id %s",
group.ID))) so duplicates fail loudly.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (22)
- app/types/app.go
- go.mod
- tests/upgrade/test-cases.json
- tests/upgrade/workers_test.go
- upgrades/CHANGELOG.md
- upgrades/software/v1.2.0/deployment.go
- upgrades/software/v1.2.0/init.go
- x/deployment/genesis.go
- x/deployment/keeper/grpc_query.go
- x/deployment/keeper/grpc_query_test.go
- x/deployment/keeper/indexes.go
- x/deployment/keeper/keeper.go
- x/deployment/keeper/key.go
- x/deployment/keeper/keys/deployment.go
- x/deployment/keeper/keys/group.go
- x/deployment/keeper/keys/key.go
- x/deployment/keeper/keys/types.go
- x/deployment/module.go
- x/deployment/simulation/genesis.go
- x/deployment/simulation/proposals.go
- x/market/handler/handler_test.go
- x/market/keeper/grpc_query.go
💤 Files with no reviewable changes (2)
- app/types/app.go
- x/deployment/keeper/key.go
🚧 Files skipped from review as they are similar to previous changes (4)
- x/market/handler/handler_test.go
- x/deployment/keeper/keys/key.go
- x/deployment/simulation/proposals.go
- x/deployment/keeper/indexes.go
2d55114 to 7b53354
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
upgrades/CHANGELOG.md (1)
7-19: ⚠️ Potential issue | 🟡 Minor

"Current module consensus versions" table is stale.

After the full upgrade chain (v0.38.0 → v1.0.0 → v1.2.0), the current versions are deployment = 6 and market = 8, but the table still shows 4 and 6.

📝 Suggested table fix

```diff
-| deployment | 4 |
+| deployment | 6 |
 ...
-| market     | 6 |
+| market     | 8 |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/CHANGELOG.md` around lines 7 - 19, The "Current module consensus versions" table is stale: update the entries for the deployment and market rows so deployment shows 6 and market shows 8 (leave other module rows unchanged); locate the table in CHANGELOG.md (the rows containing "deployment" and "market") and replace the version numbers accordingly so the table reflects the post-upgrade chain (v0.38.0 → v1.0.0 → v1.2.0).
♻️ Duplicate comments (6)
tests/upgrade/workers_test.go (2)
128-139: Per-item state assertions are present — previous feedback addressed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 128 - 139, Duplicate review comment found noting that per-item state assertions are present in tests; no code change required. Resolve by removing the duplicate review remark (or mark it as resolved) in the PR, and ensure the tests in workers_test.go still perform per-item assertions on activeLeasesResp.Leases using mv1.LeaseActive and closedLeasesResp.Leases using mv1.LeaseClosed so the checks in the for-loops remain intact.
178-188: Pagination comment no longer mentions CountTotal — previous feedback addressed.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 178-188, the existing test comment is stale about CountTotal — update or remove it to match the code, which queries deployments with a PageRequest containing only Limit=1; specifically, edit the comment around the call to pu.cl.Query().Deployment().Deployments (and references to pagedResp and sdkquery.PageRequest) to either remove the mention of CountTotal or state explicitly that CountTotal is not requested/checked when using Limit=1, keeping the assertions that pagedResp.Pagination is non-nil and that len(pagedResp.Deployments) == 1.

x/deployment/keeper/grpc_query.go (1)
42-44: Key-based pagination rejection is a client-facing breaking change; document it explicitly.

Line 43 now hard-fails requests that previously may have relied on `Pagination.Key`. Please include migration guidance in the upgrade notes/changelog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/grpc_query.go` around lines 42-44, the change rejecting key-based pagination (the check on req.Pagination.Key returning status.Error(codes.InvalidArgument, "key-based pagination is not supported")) is a client-facing breaking change; update the upgrade notes/changelog to explicitly document this behavior, describe the migration path (e.g., switch clients to offset/limit pagination or remove use of Pagination.Key), show an example request transformation, and mark the release as containing a breaking change so integrators know to update code that previously relied on Pagination.Key.

x/deployment/genesis.go (1)
37-37: ⚠️ Potential issue | 🔴 Critical

Unsafe concrete type assertion can panic in `InitGenesis`.

Line 37 force-casts `keeper.IKeeper` to `*keeper.Keeper`. Any alternate `IKeeper` implementation (tests/decorators/future refactors) will panic at runtime.

Suggested fix

```diff
-func InitGenesis(ctx sdk.Context, kpr keeper.IKeeper, data *v1beta4.GenesisState) {
-	k := kpr.(*keeper.Keeper)
+func InitGenesis(ctx sdk.Context, kpr keeper.IKeeper, data *v1beta4.GenesisState) {
+	k, ok := kpr.(*keeper.Keeper)
+	if !ok {
+		panic(fmt.Errorf("InitGenesis requires *keeper.Keeper, got %T", kpr))
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/genesis.go` at line 37, InitGenesis currently does an unsafe cast k := kpr.(*keeper.Keeper) which will panic for any non-*keeper.Keeper implementation; replace this by either changing the function signature to accept *keeper.Keeper directly (if the interface abstraction isn't needed) or, preferably, perform a guarded type assertion: do k, ok := kpr.(*keeper.Keeper) and handle the !ok case (return early with an error/log or convert to using only IKeeper methods); alternatively ensure IKeeper exposes the needed methods (Deployments(), Groups()) and use them instead so no concrete cast is necessary.

upgrades/software/v1.2.0/deployment.go (2)
43-43: ⚠️ Potential issue | 🟡 Minor

Guard `StoreKey` type assertion to prevent upgrade-time panic.

Line 43 assumes `m.StoreKey()` is always `*storetypes.KVStoreKey`; if not, migration panics instead of returning a controlled error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` at line 43, The code assumes m.StoreKey() yields *storetypes.KVStoreKey and does an unchecked type assertion when creating ssvc via runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey)); add a guarded assertion: first retrieve skey := m.StoreKey(), then do skeyTyped, ok := skey.(*storetypes.KVStoreKey) and if !ok return a controlled error from the upgrade (or wrap it with context) instead of panicking, and pass skeyTyped into runtime.NewKVStoreService when ok.
65-67: ⚠️ Potential issue | 🟠 Major

Avoid `MustUnmarshal` in migration loops; return decode errors instead of panicking.

Line 67 and Line 87 can panic on malformed legacy bytes and abort the entire upgrade without graceful error propagation.

Suggested fix

```diff
-	cdc.MustUnmarshal(diter.Value(), &deployment)
+	if err := cdc.Unmarshal(diter.Value(), &deployment); err != nil {
+		return fmt.Errorf("failed to unmarshal deployment at key %X: %w", diter.Key(), err)
+	}
 ...
-	cdc.MustUnmarshal(giter.Value(), &group)
+	if err := cdc.Unmarshal(giter.Value(), &group); err != nil {
+		return fmt.Errorf("failed to unmarshal group at key %X: %w", giter.Key(), err)
+	}
```

Also applies to: 85-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` around lines 65 - 67, The loop currently calls cdc.MustUnmarshal(diter.Value(), &deployment) which can panic on malformed bytes; replace MustUnmarshal with an error-returning unmarshal and propagate a descriptive error instead of panicking (e.g. if err := cdc.Unmarshal(diter.Value(), &deployment); err != nil { return fmt.Errorf("failed to unmarshal deployment key %s: %w", diter.Key(), err) }). Do the same for the second occurrence referenced around lines 85-87 (the other cdc.MustUnmarshal call) so both uses (the deployment variable and the other unmarshaled struct) validate errors and return them instead of calling MustUnmarshal.
🧹 Nitpick comments (3)
x/market/keeper/grpc_query.go (1)
57-61: Reverse pagination reverses key order within each state bucket, not global order across buckets.

When `Reverse` is true and no state filter is specified, items are returned descending within each state bucket, but cross-bucket ordering is only as consistent as the bucket order reversal. This is an inherent limitation of the multi-state iteration approach. Worth documenting in a comment so callers understand the ordering guarantee.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/keeper/grpc_query.go` around lines 57 - 61, The handling of req.Pagination.Reverse in grpc_query.go currently reverses the states slice but does not preserve global key ordering across state buckets; document this limitation by adding a clear comment near the Reverse handling (the block that flips the states slice) explaining that when Reverse is true and no specific state filter is applied, items are returned descending within each state bucket but cross-bucket ordering is not globally sorted due to the multi-state iteration approach; mention the affected symbols (req.Pagination.Reverse, states slice, and the multi-state iteration logic) so callers know the ordering guarantee and can adjust their queries if they require global ordering.

x/deployment/keeper/indexes.go (1)
76-78: Prefer `keys.DeploymentIDToKey` over inline `collections.Join` for consistency.

The extractor at Line 77 duplicates the logic of `keys.DeploymentIDToKey`:

return collections.Join(group.ID.Owner, group.ID.DSeq), nil

Using the dedicated helper keeps the index consistent with the rest of the key-conversion layer:

♻️ Suggested refactor

```diff
-	func(_ keys.GroupPrimaryKey, group types.Group) (keys.DeploymentPrimaryKey, error) {
-		return collections.Join(group.ID.Owner, group.ID.DSeq), nil
-	},
+	func(_ keys.GroupPrimaryKey, group types.Group) (keys.DeploymentPrimaryKey, error) {
+		return keys.DeploymentIDToKey(v1.DeploymentID{Owner: group.ID.Owner, DSeq: group.ID.DSeq}), nil
+	},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/indexes.go` around lines 76 - 78, Replace the inline collections.Join call in the extractor func(_ keys.GroupPrimaryKey, group types.Group) (keys.DeploymentPrimaryKey, error) with the existing helper keys.DeploymentIDToKey to keep key conversion consistent; update the return to call keys.DeploymentIDToKey(group.ID) (or the correct DeploymentID argument form) and return its result (ensuring the DeploymentPrimaryKey and error types match), and remove the direct collections.Join usage so the extractor uses the centralized key helper.

tests/upgrade/workers_test.go (1)
215-216: Unsafe bare type assertions throughout `testLeaseClosedReason`.

Lines 215, 237, 287, 311, 322, and 331 all use `res.(*sdk.TxResponse)` without an `ok` check. A type mismatch panics with a runtime error rather than a clear testify failure message.

💡 Safer alternative (apply to each assertion site)

```diff
-	txResp := res.(*sdk.TxResponse)
+	txResp, ok := res.(*sdk.TxResponse)
+	require.True(t, ok, "expected *sdk.TxResponse from BroadcastMsgs")
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@upgrades/CHANGELOG.md`:
- Line 47: The markdown has a heading-level jump: the setext divider "-----"
creates an implicit h2 that makes the "##### v1.2.0" entry skip levels under the
existing "#### Upgrades" heading; fix by either removing the setext line (the
"-----" separator) so the hierarchy is determined by ATX headings, or by
demoting the release entries (change "##### v1.2.0" to match a consistent child
level under "#### Upgrades", e.g., "#####") and ensure all upgrade entries use
the same heading level; locate the offending strings "-----", "#### Upgrades",
and "##### v1.2.0" in the changelog to apply the change.
In `@x/deployment/keeper/keeper.go`:
- Around line 167-179: GetGroups currently panics on iterator/scan failures
(calls to k.groups.Indexes.Deployment.MatchExact and indexes.ScanValues); change
it to return an error instead of panicking and propagate that error to callers
(including grpc query handler and WithDeployments). Specifically, update
GetGroups to return ([]types.Group, error), replace panic(fmt.Sprintf(...)) with
returning nil and the wrapped error, and update callers (e.g., the grpc query
function and WithDeployments) to handle and propagate the error (or convert to
the appropriate SDK/query error response) rather than allowing a node crash.
---
Outside diff comments:
In `@upgrades/CHANGELOG.md`:
- Around line 7-19: The "Current module consensus versions" table is stale:
update the entries for the deployment and market rows so deployment shows 6 and
market shows 8 (leave other module rows unchanged); locate the table in
CHANGELOG.md (the rows containing "deployment" and "market") and replace the
version numbers accordingly so the table reflects the post-upgrade chain
(v0.38.0 → v1.0.0 → v1.2.0).
---
Duplicate comments:
In `@tests/upgrade/workers_test.go`:
- Around line 128-139: Duplicate review comment found noting that per-item state
assertions are present in tests; no code change required. Resolve by removing
the duplicate review remark (or mark it as resolved) in the PR, and ensure the
tests in workers_test.go still perform per-item assertions on
activeLeasesResp.Leases using mv1.LeaseActive and closedLeasesResp.Leases using
mv1.LeaseClosed so the checks in the for-loops remain intact.
- Around line 178-188: The existing test comment is stale about
CountTotal—update or remove it to match the code which queries deployments with
a PageRequest containing only Limit=1; specifically, edit the comment around the
call to pu.cl.Query().Deployment().Deployments (and references to pagedResp and
sdkquery.PageRequest) to either remove mention of CountTotal or state explicitly
that CountTotal is not requested/checked when using Limit=1, keeping assertions
that pagedResp.Pagination is non-nil and that len(pagedResp.Deployments)==1.
In `@upgrades/software/v1.2.0/deployment.go`:
- Line 43: The code assumes m.StoreKey() yields *storetypes.KVStoreKey and does
an unchecked type assertion when creating ssvc via
runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey)); add a guarded
assertion: first retrieve skey := m.StoreKey(), then do skeyTyped, ok :=
skey.(*storetypes.KVStoreKey) and if !ok return a controlled error from the
upgrade (or wrap it with context) instead of panicking, and pass skeyTyped into
runtime.NewKVStoreService when ok.
- Around line 65-67: The loop currently calls cdc.MustUnmarshal(diter.Value(),
&deployment) which can panic on malformed bytes; replace MustUnmarshal with an
error-returning unmarshal and propagate a descriptive error instead of panicking
(e.g. if err := cdc.Unmarshal(diter.Value(), &deployment); err != nil { return
fmt.Errorf("failed to unmarshal deployment key %s: %w", diter.Key(), err) }). Do
the same for the second occurrence referenced around lines 85-87 (the other
cdc.MustUnmarshal call) so both uses (the deployment variable and the other
unmarshaled struct) validate errors and return them instead of calling
MustUnmarshal.
In `@x/deployment/genesis.go`:
- Line 37: The code in InitGenesis currently does an unsafe concrete type
assertion (k := kpr.(*keeper.Keeper)) which can panic for alternative IKeeper
implementations; change this to handle the interface safely by either using the
IKeeper methods only (avoid casting) or perform a checked type assertion (v, ok
:= kpr.(*keeper.Keeper)) and handle the !ok case (return error or fallback) or
use a type switch to support different implementations; locate the assertion in
InitGenesis and update it to a safe checked assertion or refactor the function
to work with keeper.IKeeper methods instead.
In `@x/deployment/keeper/grpc_query.go`:
- Around line 42-44: The change rejecting key-based pagination (the check on
req.Pagination.Key returning status.Error(codes.InvalidArgument, "key-based
pagination is not supported")) is a client-facing breaking change; update the
upgrade notes/changelog to explicitly document this behavior, describe the
migration path (e.g., switch clients to offset/limit pagination or remove use of
Pagination.Key), show an example request transformation, and mark the release as
containing a breaking change so integrators know to update code that previously
relied on Pagination.Key.
---
Nitpick comments:
In `@tests/upgrade/workers_test.go`:
- Around line 215-216: In testLeaseClosedReason replace unsafe bare assertions
like txResp := res.(*sdk.TxResponse) with a safe comma-ok check at each site
(lines referenced in the comment) — e.g., do a type assertion with txResp, ok :=
res.(*sdk.TxResponse) and then call require.True(t, ok, "unexpected response
type: %T", res) (or require.IsType(t, &sdk.TxResponse{}, res)) before using
txResp; apply this change to every occurrence in testLeaseClosedReason to avoid
panics on type mismatches.
In `@x/deployment/keeper/indexes.go`:
- Around line 76-78: Replace the inline collections.Join call in the extractor
func(_ keys.GroupPrimaryKey, group types.Group) (keys.DeploymentPrimaryKey,
error) with the existing helper keys.DeploymentIDToKey to keep key conversion
consistent; update the return to call keys.DeploymentIDToKey(group.ID) (or the
correct DeploymentID argument form) and return its result (ensuring the
DeploymentPrimaryKey and error types match), and remove the direct
collections.Join usage so the extractor uses the centralized key helper.
In `@x/market/keeper/grpc_query.go`:
- Around line 57-61: The handling of req.Pagination.Reverse in grpc_query.go
currently reverses the states slice but does not preserve global key ordering
across state buckets; document this limitation by adding a clear comment near
the Reverse handling (the block that flips the states slice) explaining that
when Reverse is true and no specific state filter is applied, items are returned
descending within each state bucket but cross-bucket ordering is not globally
sorted due to the multi-state iteration approach; mention the affected symbols
(req.Pagination.Reverse, states slice, and the multi-state iteration logic) so
callers know the ordering guarantee and can adjust their queries if they require
global ordering.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (22)
- app/types/app.go
- go.mod
- tests/upgrade/test-cases.json
- tests/upgrade/workers_test.go
- upgrades/CHANGELOG.md
- upgrades/software/v1.2.0/deployment.go
- upgrades/software/v1.2.0/init.go
- x/deployment/genesis.go
- x/deployment/keeper/grpc_query.go
- x/deployment/keeper/grpc_query_test.go
- x/deployment/keeper/indexes.go
- x/deployment/keeper/keeper.go
- x/deployment/keeper/key.go
- x/deployment/keeper/keys/deployment.go
- x/deployment/keeper/keys/group.go
- x/deployment/keeper/keys/key.go
- x/deployment/keeper/keys/types.go
- x/deployment/module.go
- x/deployment/simulation/genesis.go
- x/deployment/simulation/proposals.go
- x/market/handler/handler_test.go
- x/market/keeper/grpc_query.go
💤 Files with no reviewable changes (2)
- x/deployment/keeper/key.go
- app/types/app.go
✅ Files skipped from review due to trivial changes (1)
- x/deployment/simulation/proposals.go
🚧 Files skipped from review as they are similar to previous changes (6)
- x/deployment/module.go
- x/deployment/keeper/keys/key.go
- x/market/handler/handler_test.go
- x/deployment/keeper/keys/types.go
- x/deployment/keeper/grpc_query_test.go
- tests/upgrade/test-cases.json
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Force-pushed from 7b53354 to e45940f
Actionable comments posted: 3
♻️ Duplicate comments (4)
x/deployment/genesis.go (1)
37-37: ⚠️ Potential issue | 🔴 Critical

Avoid unsafe concrete cast in `InitGenesis`.

Line 37 can panic at runtime if `kpr` is not exactly `*keeper.Keeper`. This repeats a previously raised concern and still breaks the `IKeeper` abstraction.

🔧 Proposed guarded assertion

```diff
 func InitGenesis(ctx sdk.Context, kpr keeper.IKeeper, data *v1beta4.GenesisState) {
-	k := kpr.(*keeper.Keeper)
+	k, ok := kpr.(*keeper.Keeper)
+	if !ok {
+		panic(fmt.Errorf("InitGenesis requires *keeper.Keeper, got %T", kpr))
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/genesis.go` at line 37, InitGenesis currently does an unsafe concrete cast "k := kpr.(*keeper.Keeper)" which can panic if kpr isn't exactly *keeper.Keeper; replace it with a guarded approach: either use the IKeeper interface type (call the methods you need on kpr as IKeeper) to avoid casting at all, or perform a checked type assertion (e.g., k, ok := kpr.(*keeper.Keeper)) and handle the !ok case with a clear error or controlled panic that mentions InitGenesis and the expected type; update any callers or tests accordingly so InitGenesis relies on the IKeeper abstraction or fails fast with a descriptive message when the concrete type mismatches.

upgrades/CHANGELOG.md (1)
47-58: ⚠️ Potential issue | 🟡 Minor

Fix heading hierarchy (MD001) for upgrade entries.

The heading jump remains and still triggers markdownlint around Line 47. This was already reported previously.

🔧 Suggested fix

```diff
-Add new upgrades after this line based on the template above
------
+Add new upgrades after this line based on the template above
```

(Alternatively keep the separator but insert a blank line before it so it is treated as a thematic break, not a setext heading underline.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/CHANGELOG.md` around lines 47 - 58, The MD001 heading hierarchy error is caused by the jump around the "##### v1.2.0" / "##### v1.1.0" entries; fix it by making the heading levels consistent or by inserting a blank line before the separator/thematic break so the parser doesn't treat the underline as a setext heading; specifically update the "##### v1.2.0" block (and the following "##### v1.1.0" header) so both use consistent heading levels (e.g., change to "#### v1.2.0" or ensure a blank line precedes any thematic break) to resolve the MD001 lint.

upgrades/software/v1.2.0/deployment.go (2)
65-77: ⚠️ Potential issue | 🟠 Major

`MustUnmarshal` panics on decode error; use `Unmarshal` with error propagation.

In a migration handler, a panic aborts the entire chain upgrade without a recoverable error. Any legacy entry with corrupted bytes will crash the node. The same applies to line 87 for groups.

Proposed fix

```diff
 for ; diter.Valid(); diter.Next() {
 	var deployment dv1.Deployment
-	cdc.MustUnmarshal(diter.Value(), &deployment)
+	if err := cdc.Unmarshal(diter.Value(), &deployment); err != nil {
+		return fmt.Errorf("failed to unmarshal deployment at key %X: %w", diter.Key(), err)
+	}
 for ; giter.Valid(); giter.Next() {
 	var group dtypes.Group
-	cdc.MustUnmarshal(giter.Value(), &group)
+	if err := cdc.Unmarshal(giter.Value(), &group); err != nil {
+		return fmt.Errorf("failed to unmarshal group at key %X: %w", giter.Key(), err)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` around lines 65 - 77, Replace calls to cdc.MustUnmarshal in the migration loop with cdc.Unmarshal and propagate decode errors instead of panicking: for the iterator loop using diter and variable deployment, call cdc.Unmarshal(diter.Value(), &deployment) and if it returns an error, return a formatted error (e.g., "failed to unmarshal deployment %s: %w") so the migration handler fails gracefully; apply the same change to the analogous group decode at the other site (where MustUnmarshal is used for group bytes) so both deployment and group unmarshalling use Unmarshal with error checks before calling deployments.Set, store.Delete, and incrementing deploymentCount.
43-43: ⚠️ Potential issue | 🟡 Minor

Unguarded type assertion on `StoreKey` may panic.

`m.StoreKey()` returns `storetypes.StoreKey` (an interface). The assertion `skey.(*storetypes.KVStoreKey)` will panic at runtime if it's not a `*KVStoreKey`.

Proposed fix

```diff
-	ssvc := runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey))
+	kvKey, ok := skey.(*storetypes.KVStoreKey)
+	if !ok {
+		return fmt.Errorf("expected *storetypes.KVStoreKey, got %T", skey)
+	}
+	ssvc := runtime.NewKVStoreService(kvKey)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` at line 43, The code does an unguarded type assertion on m.StoreKey() (skey := m.StoreKey()) when calling runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey)) which can panic if skey isn't a *storetypes.KVStoreKey; replace it with a safe assertion (e.g., kvKey, ok := skey.(*storetypes.KVStoreKey)) and handle the false branch (log and return/error out) before calling runtime.NewKVStoreService(kvKey) so NewKVStoreService only receives a valid *storetypes.KVStoreKey.
🧹 Nitpick comments (3)
x/market/keeper/grpc_query.go (2)
75-82: Consider extracting a shared reverse/forward iterator helper.

The same reverse-vs-forward iterator construction pattern is repeated in five places. A small helper would reduce drift risk and simplify future pagination/index changes.
Also applies to: 180-187, 240-247, 358-365, 418-425
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/keeper/grpc_query.go` around lines 75 - 82, Extract a small helper that centralizes the reverse vs forward iterator construction used in the Order queries: create a function (e.g., buildOrderStateIterator(ctx, state int, reverse bool) (indexes.MultiIterator[int32, keys.OrderPrimaryKey], error)) that encapsulates the logic currently using k.orders.Indexes.State.Iterate(...collections.NewPrefixedPairRange[int32, keys.OrderPrimaryKey](int32(state)).Descending()) versus k.orders.Indexes.State.MatchExact(ctx, int32(state)); replace the five duplicated blocks that inspect req.Pagination.Reverse with calls to this helper so callers simply pass ctx, int(state), and req.Pagination.Reverse and handle the returned iterator/error.
157-161: Avoid unnecessary state reversal in provider-search paths.

In `Bids` and `Leases`, the reversed `states` order is not used when `providerSearch` is true (the logic uses `stateSet`), so this adds noise without effect.

♻️ Suggested cleanup

```diff
-	if req.Pagination.Reverse {
+	if req.Pagination.Reverse && !providerSearch {
 		for i, j := 0, len(states)-1; i < j; i, j = i+1, j-1 {
 			states[i], states[j] = states[j], states[i]
 		}
 	}
```

Also applies to: 335-339
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/keeper/grpc_query.go` around lines 157 - 161, The reversal of the local variable states inside the Bids and Leases handlers is unnecessary when providerSearch is true because the subsequent logic uses stateSet instead of the states slice; remove the unconditional reversal (the for loop that swaps states[i], states[j]) or wrap it with a guard so it only runs when providerSearch is false and the code actually iterates over the states slice; adjust both places (the reversal near the Bids handler and the similar block used near Leases) referencing the variables states, providerSearch and stateSet to ensure the reversal has effect only when the states slice is consumed.

upgrades/software/v1.2.0/deployment.go (1)
56-97: Consider refactoring to collect keys then delete after iteration, but note this pattern is established throughout the codebase.

Lines 74 and 94 delete store keys while iterating. While this pattern is used consistently across multiple migration files and in `app/testnet.go`, it remains less defensive than collecting keys for deletion after the iterator closes. The Cosmos SDK's `KVStorePrefixIterator` operates on a snapshot and tolerates concurrent deletes, but refactoring to decouple iteration from deletion would reduce fragility if the underlying store implementation changes.

Safer pattern (deployments example)

```diff
+	var deploymentKeysToDelete [][]byte
 	for ; diter.Valid(); diter.Next() {
 		var deployment dv1.Deployment
 		cdc.MustUnmarshal(diter.Value(), &deployment)
 		pk := dkeys.DeploymentIDToKey(deployment.ID)
 		if err := deployments.Set(ctx, pk, deployment); err != nil {
 			return fmt.Errorf("failed to migrate deployment %s: %w", deployment.ID, err)
 		}
-		store.Delete(diter.Key())
+		deploymentKeysToDelete = append(deploymentKeysToDelete, diter.Key())
 		deploymentCount++
 	}
+	for _, key := range deploymentKeysToDelete {
+		store.Delete(key)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v1.2.0/deployment.go` around lines 56 - 97, Refactor the migration loop to avoid deleting while iterating: instead of calling store.Delete(diter.Key()) and store.Delete(giter.Key()) inside the iteration over diter and giter, collect the keys (e.g., append diter.Key() and giter.Key() to a slice) while unmarshalling and calling deployments.Set(ctx, pk, deployment) and groups.Set(ctx, pk, group), close the iterators, then loop over the collected key slices and call store.Delete on each; update symbols referenced: diter, giter, deployments.Set, groups.Set, store.Delete, dkeys.DeploymentPrefix, dkeys.GroupPrefix so the behavior remains identical but deletion happens after iterator.Close().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/upgrade/workers_test.go`:
- Around line 200-223: The test hardcodes the keyring alias "provider" which can
cause "key already exists" on reruns; generate a unique per-run alias (e.g.,
using timestamp or test name + random suffix) and use that variable wherever
"provider" is used: pass it to kr.NewMnemonic(...) instead of the literal, and
replace the literals in providerCctx.WithFromName(...).WithFrom(...) (and any
other WithFrom*/GetAddress usages) with the new alias variable so the mnemonic
creation and client context consistently use the generated alias.
In `@upgrades/CHANGELOG.md`:
- Line 60: Replace the incorrect contraction "it’s" with the possessive "its" in
the changelog sentence "Fix overdrawn escrow accounts that prevent deployment
from being closed by resolving account state and it’s associated objects into
correct overdrawn state and close related deployments" so it reads "its
associated objects"; update the single-line entry accordingly.
In `@x/deployment/genesis.go`:
- Around line 59-62: Genesis group entries are written without checking for
existing keys, so duplicate group IDs can silently overwrite prior entries;
before calling k.Groups().Set(ctx, gpk, group) (where gpk :=
keys.GroupIDToKey(group.ID)), check for an existing entry using k.Groups().Get
or a key existence lookup and return or panic with a clear error (including the
duplicate group ID) if a record already exists, ensuring duplicates are detected
during genesis import rather than overwritten.
---
Duplicate comments:
In `@upgrades/CHANGELOG.md`:
- Around line 47-58: The MD001 heading hierarchy error is caused by the jump
around the "##### v1.2.0" / "##### v1.1.0" entries; fix it by making the heading
levels consistent or by inserting a blank line before the separator/thematic
break so the parser doesn't treat the underline as a setext heading;
specifically update the "##### v1.2.0" block (and the following "##### v1.1.0"
header) so both use consistent heading levels (e.g., change to "#### v1.2.0" or
ensure a blank line precedes any thematic break) to resolve the MD001 lint.
In `@upgrades/software/v1.2.0/deployment.go`:
- Around line 65-77: Replace calls to cdc.MustUnmarshal in the migration loop
with cdc.Unmarshal and propagate decode errors instead of panicking: for the
iterator loop using diter and variable deployment, call
cdc.Unmarshal(diter.Value(), &deployment) and if it returns an error, return a
formatted error (e.g., "failed to unmarshal deployment %s: %w") so the migration
handler fails gracefully; apply the same change to the analogous group decode at
the other site (where MustUnmarshal is used for group bytes) so both deployment
and group unmarshalling use Unmarshal with error checks before calling
deployments.Set, store.Delete, and incrementing deploymentCount.
- Line 43: The code does an unguarded type assertion on m.StoreKey() (skey :=
m.StoreKey()) when calling
runtime.NewKVStoreService(skey.(*storetypes.KVStoreKey)) which can panic if skey
isn't a *storetypes.KVStoreKey; replace it with a safe assertion (e.g., kvKey,
ok := skey.(*storetypes.KVStoreKey)) and handle the false branch (log and
return/error out) before calling runtime.NewKVStoreService(kvKey) so
NewKVStoreService only receives a valid *storetypes.KVStoreKey.
In `@x/deployment/genesis.go`:
- Line 37: InitGenesis currently does an unsafe concrete cast "k :=
kpr.(*keeper.Keeper)" which can panic if kpr isn't exactly *keeper.Keeper;
replace it with a guarded approach: either use the IKeeper interface type (call
the methods you need on kpr as IKeeper) to avoid casting at all, or perform a
checked type assertion (e.g., k, ok := kpr.(*keeper.Keeper)) and handle the !ok
case with a clear error or controlled panic that mentions InitGenesis and the
expected type; update any callers or tests accordingly so InitGenesis relies on
the IKeeper abstraction or fails fast with a descriptive message when the
concrete type mismatches.
---
Nitpick comments:
In `@upgrades/software/v1.2.0/deployment.go`:
- Around line 56-97: Refactor the migration loop to avoid deleting while
iterating: instead of calling store.Delete(diter.Key()) and
store.Delete(giter.Key()) inside the iteration over diter and giter, collect the
keys (e.g., append diter.Key() and giter.Key() to a slice) while unmarshalling
and calling deployments.Set(ctx, pk, deployment) and groups.Set(ctx, pk, group),
close the iterators, then loop over the collected key slices and call
store.Delete on each; update symbols referenced: diter, giter, deployments.Set,
groups.Set, store.Delete, dkeys.DeploymentPrefix, dkeys.GroupPrefix so the
behavior remains identical but deletion happens after iterator.Close().
In `@x/market/keeper/grpc_query.go`:
- Around line 75-82: Extract a small helper that centralizes the reverse vs
forward iterator construction used in the Order queries: create a function
(e.g., buildOrderStateIterator(ctx, state int, reverse bool)
(indexes.MultiIterator[int32, keys.OrderPrimaryKey], error)) that encapsulates
the logic currently using
k.orders.Indexes.State.Iterate(...collections.NewPrefixedPairRange[int32,
keys.OrderPrimaryKey](int32(state)).Descending()) versus
k.orders.Indexes.State.MatchExact(ctx, int32(state)); replace the five
duplicated blocks that inspect req.Pagination.Reverse with calls to this helper
so callers simply pass ctx, int(state), and req.Pagination.Reverse and handle
the returned iterator/error.
- Around line 157-161: The reversal of the local variable states inside the Bids
and Leases handlers is unnecessary when providerSearch is true because the
subsequent logic uses stateSet instead of the states slice; remove the
unconditional reversal (the for loop that swaps states[i], states[j]) or wrap it
with a guard so it only runs when providerSearch is false and the code actually
iterates over the states slice; adjust both places (the reversal near the Bids
handler and the similar block used near Leases) referencing the variables
states, providerSearch and stateSet to ensure the reversal has effect only when
the states slice is consumed.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (38)
- app/types/app.go
- go.mod
- tests/upgrade/test-cases.json
- tests/upgrade/workers_test.go
- upgrades/CHANGELOG.md
- upgrades/software/v1.0.0/audit.go
- upgrades/software/v1.0.0/cert.go
- upgrades/software/v1.0.0/deployment.go
- upgrades/software/v1.0.0/escrow.go
- upgrades/software/v1.0.0/init.go
- upgrades/software/v1.0.0/market.go
- upgrades/software/v1.0.0/provider.go
- upgrades/software/v1.0.0/take.go
- upgrades/software/v1.0.0/upgrade.go
- upgrades/software/v1.1.0/init.go
- upgrades/software/v1.1.0/upgrade.go
- upgrades/software/v1.2.0/deployment.go
- upgrades/software/v1.2.0/init.go
- x/deployment/genesis.go
- x/deployment/handler/handler_test.go
- x/deployment/keeper/grpc_query.go
- x/deployment/keeper/grpc_query_test.go
- x/deployment/keeper/indexes.go
- x/deployment/keeper/keeper.go
- x/deployment/keeper/keeper_test.go
- x/deployment/keeper/key.go
- x/deployment/keeper/keys/deployment.go
- x/deployment/keeper/keys/group.go
- x/deployment/keeper/keys/key.go
- x/deployment/keeper/keys/types.go
- x/deployment/module.go
- x/deployment/simulation/genesis.go
- x/deployment/simulation/operations.go
- x/deployment/simulation/proposals.go
- x/market/handler/handler_test.go
- x/market/hooks/external.go
- x/market/hooks/hooks.go
- x/market/keeper/grpc_query.go
💤 Files with no reviewable changes (13)
- upgrades/software/v1.0.0/deployment.go
- upgrades/software/v1.0.0/market.go
- upgrades/software/v1.0.0/audit.go
- app/types/app.go
- x/deployment/keeper/key.go
- upgrades/software/v1.0.0/upgrade.go
- upgrades/software/v1.0.0/provider.go
- upgrades/software/v1.1.0/upgrade.go
- upgrades/software/v1.1.0/init.go
- upgrades/software/v1.0.0/init.go
- upgrades/software/v1.0.0/cert.go
- upgrades/software/v1.0.0/escrow.go
- upgrades/software/v1.0.0/take.go
🚧 Files skipped from review as they are similar to previous changes (3)
- upgrades/software/v1.2.0/init.go
- x/deployment/keeper/keys/types.go
- x/deployment/simulation/proposals.go
```go
kinfo, _, err := kr.NewMnemonic("provider", keyring.English, sdk.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

providerAddr, err := kinfo.GetAddress()
require.NoError(t, err)

fundMsg := banktypes.NewMsgSend(
	params.FromAddress,
	providerAddr,
	sdk.NewCoins(sdk.NewCoin("uakt", sdkmath.NewInt(10000000))),
)

res, err := pu.cl.Tx().BroadcastMsgs(ctx, []sdk.Msg{fundMsg})
require.NoError(t, err)
require.NotNil(t, res)
txResp := res.(*sdk.TxResponse)
require.Equal(t, uint32(0), txResp.Code, "fund provider failed: %s", txResp.RawLog)

// Step 2: Create provider client
providerCctx := cctx.
	WithFromAddress(providerAddr).
	WithFromName("provider").
	WithFrom("provider")
```
Avoid a fixed keyring alias in E2E setup.
Using a constant "provider" key name can fail on reruns against the same keyring/home (key already exists). Generate a per-run alias and reuse it in mnemonic creation and client context.
🛠️ Proposed fix

```diff
 import (
 	"context"
 	"crypto/sha256"
+	"fmt"
 	"testing"
+	"time"
 @@
-	kinfo, _, err := kr.NewMnemonic("provider", keyring.English, sdk.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
+	providerName := fmt.Sprintf("provider-%d", time.Now().UnixNano())
+	kinfo, _, err := kr.NewMnemonic(providerName, keyring.English, sdk.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
 @@
 	providerCctx := cctx.
 		WithFromAddress(providerAddr).
-		WithFromName("provider").
-		WithFrom("provider")
+		WithFromName(providerName).
+		WithFrom(providerName)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
providerName := fmt.Sprintf("provider-%d", time.Now().UnixNano())
kinfo, _, err := kr.NewMnemonic(providerName, keyring.English, sdk.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

providerAddr, err := kinfo.GetAddress()
require.NoError(t, err)

fundMsg := banktypes.NewMsgSend(
	params.FromAddress,
	providerAddr,
	sdk.NewCoins(sdk.NewCoin("uakt", sdkmath.NewInt(10000000))),
)

res, err := pu.cl.Tx().BroadcastMsgs(ctx, []sdk.Msg{fundMsg})
require.NoError(t, err)
require.NotNil(t, res)
txResp := res.(*sdk.TxResponse)
require.Equal(t, uint32(0), txResp.Code, "fund provider failed: %s", txResp.RawLog)

// Step 2: Create provider client
providerCctx := cctx.
	WithFromAddress(providerAddr).
	WithFromName(providerName).
	WithFrom(providerName)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/upgrade/workers_test.go` around lines 200-223: the test hardcodes the keyring alias "provider", which can cause a "key already exists" failure on reruns; generate a unique per-run alias (e.g. the test name plus a timestamp or random suffix) and use that variable wherever "provider" appears: pass it to `kr.NewMnemonic(...)` instead of the literal, and replace the literals in `providerCctx.WithFromName(...).WithFrom(...)` (and any other `WithFrom*`/`GetAddress` usages) with the new alias variable so the mnemonic creation and client context consistently use the generated alias.
##### v1.1.0

1. Fix overdrawn escrow accounts that prevent deployment from being closed by resolving account state and it’s associated objects into correct overdrawn state and close related deployments
Typo in changelog text.
Line 60 should use “its associated objects” (possessive), not “it’s associated objects”.
✏️ Proposed text fix

```diff
-1. Fix overdrawn escrow accounts that prevent deployment from being closed by resolving account state and it’s associated objects into correct overdrawn state and close related deployments
+1. Fix overdrawn escrow accounts that prevent deployment from being closed by resolving account state and its associated objects into correct overdrawn state and close related deployments
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
1. Fix overdrawn escrow accounts that prevent deployment from being closed by resolving account state and its associated objects into correct overdrawn state and close related deployments
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upgrades/CHANGELOG.md` at line 60, Replace the incorrect contraction "it’s"
with the possessive "its" in the changelog sentence "Fix overdrawn escrow
accounts that prevent deployment from being closed by resolving account state
and it’s associated objects into correct overdrawn state and close related
deployments" so it reads "its associated objects"; update the single-line entry
accordingly.
```go
gpk := keys.GroupIDToKey(group.ID)
if err := k.Groups().Set(ctx, gpk, group); err != nil {
	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
}
```
Add duplicate-key detection for groups during genesis import.
Unlike deployments, groups are inserted without a pre-check. Duplicate group IDs in genesis can overwrite prior entries silently.
🔧 Suggested parity check before group insert

```diff
 gpk := keys.GroupIDToKey(group.ID)
+hasGroup, err := k.Groups().Has(ctx, gpk)
+if err != nil {
+	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
+}
+if hasGroup {
+	panic(fmt.Errorf("deployment genesis groups init. group id %s already exists", group.ID))
+}
 if err := k.Groups().Set(ctx, gpk, group); err != nil {
 	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
gpk := keys.GroupIDToKey(group.ID)
hasGroup, err := k.Groups().Has(ctx, gpk)
if err != nil {
	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
}
if hasGroup {
	panic(fmt.Errorf("deployment genesis groups init. group id %s already exists", group.ID))
}
if err := k.Groups().Set(ctx, gpk, group); err != nil {
	panic(fmt.Errorf("deployment genesis groups init. group id %s: %w", group.ID, err))
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/deployment/genesis.go` around lines 59 - 62, Genesis group entries are
written without checking for existing keys, so duplicate group IDs can silently
overwrite prior entries; before calling k.Groups().Set(ctx, gpk, group) (where
gpk := keys.GroupIDToKey(group.ID)), check for an existing entry using
k.Groups().Get or a key existence lookup and return or panic with a clear error
(including the duplicate group ID) if a record already exists, ensuring
duplicates are detected during genesis import rather than overwritten.
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.
I have...

- [ ] added `!` to the type prefix if API or client breaking change
- [ ] added a changelog entry to `CHANGELOG.md`