optimize storage and fix revert reason #27
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments from an influx of new commits. This behavior can be configured in the review settings.
📝 Walkthrough: Adds archive pruning and configurable compression/retention; surfaces optional …

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
🧹 Nitpick comments (2)
crates/rpc/types/src/receipt.rs (1)
88-116: Consider adding a `revert_reason` parameter to the `failure()` constructor.

The `failure()` constructor always sets `revert_reason: None`, but callers creating failure receipts may have a reason available. Consider adding an optional parameter or a `with_revert_reason` builder method for ergonomics. This is optional since the primary path populates `revert_reason` via `StoredReceipt::to_rpc_receipt()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/types/src/receipt.rs` around lines 88-116: Add an optional revert_reason parameter to the failure() constructor so callers can supply a failure message: update the signature to failure(tx_hash, tx_index, block_hash, block_number, from, to, gas_used, cumulative_gas_used, revert_reason: Option<String>) and set the struct field revert_reason to that parameter in the returned Self; update any internal call sites that construct failure() to pass None where no reason exists (or thread through an existing reason). Alternatively, for a builder style, add a Receipt::with_revert_reason(self, reason: String) -> Self that sets the revert_reason field and keep failure() unchanged. Reference the failure() function and the revert_reason field in tests and usages so compilation succeeds.

crates/app/node/src/lib.rs (1)
243-243: Consider edge case: first prune after restart.

When a node restarts with existing archive data, `block_number` in this callback starts from where the node resumed. If the node was down for longer than `retention_blocks`, the first successful prune might not happen immediately, since the condition `block_number % prune_interval == 0` must also be met. This is likely acceptable behavior, but worth documenting that pruning catches up at the next section boundary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/node/src/lib.rs` at line 243: The prune condition `if retention > 0 && block_number > retention && block_number % prune_interval == 0` can skip the first prune after a restart if the resumed block_number doesn't fall on a prune boundary; update the pruning callback to track the last pruned block (e.g., last_pruned) and trigger pruning when either the boundary is hit or the gap since last_pruned is at least prune_interval (i.e., `(block_number % prune_interval == 0) || (block_number - last_pruned >= prune_interval)`); also add a short comment near the `block_number`, `retention`, and `prune_interval` logic documenting this edge case so future readers understand pruning will catch up after a restart.
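The suggested trigger can be sketched with a hypothetical `should_prune` helper; the function name, parameter list, and retention semantics here are illustrative stand-ins, not the node crate's actual API:

```rust
/// Hypothetical sketch of the suggested prune trigger: fire on the interval
/// boundary, or when the gap since the last prune has grown to at least one
/// interval (e.g. after a restart). Names are illustrative only.
fn should_prune(block_number: u64, retention: u64, prune_interval: u64, last_pruned: u64) -> bool {
    retention > 0
        && block_number > retention
        && (block_number % prune_interval == 0
            || block_number.saturating_sub(last_pruned) >= prune_interval)
}
```

With this shape, a node that resumes off-boundary still prunes as soon as the gap since `last_pruned` reaches one interval, instead of waiting for the next boundary.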
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/app/node/src/lib.rs`:
- Around line 241-248: The pruning logic currently only calls
store.prune(min_block) which leaves PersistentChainIndex
(transactions/receipts/logs) intact and causes RPCs to return transactions whose
parent blocks are pruned; add a matching prune operation on the chain index or
otherwise reconcile RPC behavior: implement a prune(min_block) async method on
the PersistentChainIndex type (or its concrete implementation) and invoke
chain_index.prune(min_block) alongside store.prune(min_block) in the pruning
block, ensuring it deletes/marks transactions, receipts, and logs older than
min_block; alternatively, if the separation is intentional, update RPC handlers
to detect pruned parent blocks and return BlockNotFound or another appropriate
error and add documentation describing the retention mismatch so callers are
aware.
In `@crates/rpc/chain-index/src/index.rs`:
- Line 209: The receipts schema change adds a new column that will break
existing databases when get_receipt() expects column index 12; fix by performing
a safe schema migration before schema init: run "ALTER TABLE receipts ADD COLUMN
IF NOT EXISTS revert_reason TEXT" (or equivalent via rusqlite) during
startup/initialization (prior to any get_receipt() calls) or implement a
schema-version check and migration path in the same init routine; update the
initialization code that creates the receipts table to first detect/alter the
existing receipts table (or bump and migrate a stored schema version) so
existing DBs gain the revert_reason column before get_receipt() is used.
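A dependency-free sketch of that column check — `pending_receipt_migrations` is a hypothetical helper, and in the real code the column names would come from `PRAGMA table_info('receipts')` via rusqlite before the statements are executed:

```rust
// Hypothetical migration planner: given the column names reported by
// PRAGMA table_info('receipts'), return the ALTER statements still needed.
// The caller runs the result before any get_receipt()/insert_receipt() call.
fn pending_receipt_migrations(existing_columns: &[&str]) -> Vec<&'static str> {
    let mut stmts = Vec::new();
    if !existing_columns.contains(&"revert_reason") {
        stmts.push("ALTER TABLE receipts ADD COLUMN revert_reason TEXT");
    }
    stmts
}
```

Fresh databases report the column and get an empty plan; upgraded databases get exactly one ALTER to apply during init.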
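The builder-style alternative from the receipt.rs nitpick above can be sketched as follows; the struct here is a stand-in carrying only the relevant field, not the crate's actual `Receipt` definition:

```rust
// Stand-in receipt type; the real struct also carries tx_hash, gas_used, etc.
#[derive(Debug, Default)]
struct Receipt {
    revert_reason: Option<String>,
}

impl Receipt {
    // failure() keeps its existing signature and still defaults to None...
    fn failure() -> Self {
        Receipt { revert_reason: None }
    }

    // ...while callers that do have a reason chain the builder.
    fn with_revert_reason(mut self, reason: String) -> Self {
        self.revert_reason = Some(reason);
        self
    }
}
```

Call sites without a reason stay untouched; those with one write `Receipt::failure().with_revert_reason(reason)`.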
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 470ed849-8eab-473a-97eb-0b4f3caeab27
📒 Files selected for processing (8)
crates/app/node/src/lib.rs
crates/rpc/chain-index/src/index.rs
crates/rpc/chain-index/src/integration.rs
crates/rpc/chain-index/src/provider.rs
crates/rpc/chain-index/src/types.rs
crates/rpc/types/src/receipt.rs
crates/storage/src/block_store.rs
crates/storage/src/types.rs
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/app/tx/eth/src/mempool.rs (1)
491-503: ⚠️ Potential issue | 🟠 Major — Canonicalize EOA `sender_key` instead of trusting wire input.

Line 502 still persists `wire.sender_key` even for EOA payloads. That lets a validly signed sender choose arbitrary mempool sender keys, which can split nonce/replacement behavior per fake key. For EOA, derive `sender_key` from the verified sender address (or reject on mismatch) before constructing `TxContext`.

🔧 Proposed fix

```diff
-    return Ok(Self {
+    let sender_key = if wire.sender_type == sender_type::EOA_SECP256K1 {
+        let address = wire.sender_eth_address.ok_or(ERR_TX_DECODE)?;
+        Address::from(address).as_slice().to_vec()
+    } else {
+        if SenderKey::new(&wire.sender_key).is_none() {
+            return Err(ERR_TX_DECODE);
+        }
+        wire.sender_key
+    };
+
+    return Ok(Self {
         payload,
         sender_type: wire.sender_type,
         invoke_request: wire.invoke_request,
         effective_gas_price: wire.effective_gas_price,
         tx_hash: B256::from(wire.tx_hash),
         gas_limit: wire.gas_limit,
         nonce: wire.nonce,
         chain_id: wire.chain_id,
         sender_resolution,
         recipient_resolution,
-        sender_key: wire.sender_key,
+        sender_key,
         authentication_payload: wire.authentication_payload,
         sender_eth_address: wire.sender_eth_address,
         recipient_eth_address: wire.recipient_eth_address,
         funds: wire.funds,
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/tx/eth/src/mempool.rs` around lines 491 - 503, When constructing the TxContext in mempool.rs (the Self { ... } block that currently sets sender_key: wire.sender_key), do not trust wire.sender_key for EOA payloads: detect EOA cases (e.g., via sender_type or payload kind), derive the canonical sender_key from the verified sender address/signature (use the verified sender from sender_resolution or derive from authentication_payload/signature verification), and set sender_key to that derived value (or reject with an error if the provided wire.sender_key mismatches); for non-EOA cases keep using wire.sender_key. Replace the direct assignment sender_key: wire.sender_key with this conditional derivation/validation logic so nonce/replacement behavior cannot be split by a fake wire.sender_key.
♻️ Duplicate comments (1)
crates/rpc/chain-index/src/index.rs (1)
158-231: ⚠️ Potential issue | 🔴 Critical — Upgrades still need a `receipts.revert_reason` migration.

`init_schema()` only creates the column on fresh databases. On an existing SQLite file, `get_receipt()` selects `r.revert_reason` and `insert_receipt()` writes it, so upgraded nodes will start failing at runtime until the `receipts` table is altered before these paths run.

Also applies to: 512-523, 682-699
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/index.rs` around lines 158 - 231, init_schema() only creates receipts.revert_reason for fresh DBs but not existing DBs, so add a migration step that checks for and alters the receipts table to add the revert_reason TEXT column if missing before any code paths that call get_receipt() or insert_receipt(); implement this in the same module (chain-index/src/index.rs) alongside init_schema() and the startup migration logic (the code paths that open the writer lock and call init_schema()/migrations), e.g., detect the column via PRAGMA table_info('receipts') and run "ALTER TABLE receipts ADD COLUMN revert_reason TEXT" when absent, ensuring the migration runs early during initialization so get_receipt()/insert_receipt() will not fail on upgraded databases.
🧹 Nitpick comments (8)
crates/storage/src/qmdb_impl.rs (1)
457-458: Misleading comment and unnecessary explicit `drop`.

Iterating with `for key in &keys_to_invalidate` borrows the `Vec`; it does not consume it. The `Vec` remains valid after the loop, and the explicit `drop(keys_to_invalidate)` is redundant since the variable would be dropped automatically at the end of the function scope.

🧹 Suggested cleanup

```diff
     }
-    // keys_to_invalidate dropped here; the Vec was consumed by the reference loop above.
-    drop(keys_to_invalidate);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/storage/src/qmdb_impl.rs` around lines 457-458: The comment and explicit drop of keys_to_invalidate are misleading and unnecessary: the loop uses for key in &keys_to_invalidate, which borrows the Vec rather than consuming it, so remove the explicit drop(keys_to_invalidate) call and replace the comment with a concise note that the Vec is only borrowed by the loop and will be dropped automatically at scope end; update references around the for key in &keys_to_invalidate iteration in qmdb_impl.rs accordingly.

crates/rpc/eth-jsonrpc/src/server.rs (1)
572-579: Consider consolidating request size limits into one shared constant.

`128 * 1024` is now repeated across RPC methods; lifting it to a module-level constant would prevent drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/eth-jsonrpc/src/server.rs` around lines 572-579: Extract the repeated literal 128 * 1024 into a single module-level constant (e.g. const MAX_RPC_INPUT_SIZE: usize = 128 * 1024) and replace the local MAX_SHA3_INPUT and any other occurrences of the literal in RPC handlers with that constant; update the check in the sha3 handler (the block using data.len() > MAX_SHA3_INPUT) to use MAX_RPC_INPUT_SIZE and change other RPC methods that repeat the value to reference the new constant so all request-size limits stay consolidated.

crates/storage/src/cache.rs (1)
535-555: Add a direct unit test for the `Err(_)` prefetch path.

Current coverage only asserts the `Ok(Some(_))` branch in this module. A local test for `Err(_) => cache miss` will lock this contract at the cache API level.

Proposed test addition

```diff
+    #[test]
+    fn test_sharded_cache_prefetch_error_does_not_cache_key() {
+        let cache = ShardedDbCache::with_defaults();
+        let key = b"\x01fault_key".to_vec();
+        let keys = vec![key.clone()];
+
+        cache.prefetch_keys(&keys, |_k| -> Result<Option<Vec<u8>>, ()> { Err(()) });
+
+        assert!(
+            cache.get(&key).is_none(),
+            "prefetch error should not create absent/present cache entry"
+        );
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/storage/src/cache.rs` around lines 535-555: Add a unit test that asserts prefetching which returns Err results in a cache miss: create a ShardedDbCache via ShardedDbCache::with_defaults, prepare keys (e.g., same shard and different shard), call cache.prefetch_keys(&keys, |key| -> Result<Option<Vec<u8>>, ()> { Err(()) }) to simulate the error path, and then assert that cache.get(key).is_none() for each key to lock the contract that Err(_) yields no cached entry; reference ShardedDbCache::with_defaults, prefetch_keys, and get in the test.

crates/rpc/chain-index/src/index.rs (1)
158-233: `init_schema` is past the repo's complexity limit.

It currently mixes migrations, table creation, and index creation in one large batch. Splitting migration logic from fresh-schema setup will make storage changes like `revert_reason` safer to evolve.

As per coding guidelines: keep functions to less than 70 lines to maintain bounded complexity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/index.rs` around lines 158-233: The init_schema function is too large and mixes fresh-schema creation with migration steps; split it into smaller functions such as create_tables, create_indexes, and run_migrations, and have init_schema simply call them. Move the big SQL batch out of init_schema (e.g., into create_tables and create_indexes as separate execute_batch calls or string constants), then implement run_migrations to apply incremental ALTER TABLEs (for example adding the revert_reason column) so schema evolution is separate from initial creation; update references to self.init_schema to call the new methods (look for the init_schema function, and create new methods named create_tables, create_indexes, run_migrations on the same impl to keep locking via self.writer.lock().unwrap() consistent).

crates/rpc/chain-index/src/provider.rs (1)
599-700: Extract the log-filter helpers before this grows further.

`get_logs_for_block_range` now mixes validation, block traversal, address matching, topic matching, and result limiting in one place. Pulling the predicates and per-block scan into helpers will make the new guardrails easier to test and keeps the touched function within the repo limit.

As per coding guidelines: keep functions to less than 70 lines to maintain bounded complexity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/provider.rs` around lines 599-700: The function get_logs_for_block_range is doing too many things; extract small helpers for clarity and testability: create matches_address(filter: &LogFilter, stored_log: &StoredLog) -> bool to encapsulate the FilterAddress matching, matches_topics(filter: &LogFilter, stored_log: &StoredLog) -> bool to encapsulate FilterTopic logic, and a per-block scanner collect_logs_for_block(index: &IndexType, block_num: u64, block_hash: H256, tx_hashes: &[TxHash], filter: &LogFilter, max_logs: usize, result: &mut Vec<RpcLog>) -> Result<(), RpcError> (or similar) to iterate receipts/transactions, apply the two predicate helpers, enforce MAX_LOGS, and push RpcLog entries; then simplify get_logs_for_block_range to validate the range, iterate blocks, call get_block/get_block_transactions, and delegate the per-block work to collect_logs_for_block so the main function stays small and the predicates are individually testable.

crates/app/stf/src/handlers.rs (1)
123-125: Preallocate storage-key buffers in request hot paths.

These key builders can avoid reallocations by reserving exact capacity before extending.

♻️ Suggested allocation optimization

```diff
-    let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-    key.extend_from_slice(&invoker.whoami.as_bytes());
+    let whoami = invoker.whoami.as_bytes();
+    let mut key = Vec::with_capacity(1 + whoami.len() + storage_set.key.len());
+    key.push(ACCOUNT_STORAGE_PREFIX);
+    key.extend_from_slice(&whoami);
     key.extend(storage_set.key);
 ...
-    let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-    key.extend_from_slice(&invoker.whoami.as_bytes());
+    let whoami = invoker.whoami.as_bytes();
+    let mut key = Vec::with_capacity(1 + whoami.len() + storage_remove.key.len());
+    key.push(ACCOUNT_STORAGE_PREFIX);
+    key.extend_from_slice(&whoami);
     key.extend(storage_remove.key);
 ...
-    let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-    key.extend_from_slice(&storage_get.account_id.as_bytes());
+    let account_id = storage_get.account_id.as_bytes();
+    let mut key = Vec::with_capacity(1 + account_id.len() + storage_get.key.len());
+    key.push(ACCOUNT_STORAGE_PREFIX);
+    key.extend_from_slice(&account_id);
     key.extend(storage_get.key);
```

As per coding guidelines: "Minimize memory allocation, preallocate when possible, and avoid hot-loop allocations in Rust code".

Also applies to: 137-139, 156-158

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/stf/src/handlers.rs` around lines 123-125: The key builders allocate repeatedly; preallocate exact capacity for the Vec before pushing/appending to avoid reallocations: compute total_len = 1 (for ACCOUNT_STORAGE_PREFIX) + invoker.whoami.as_bytes().len() + storage_set.key.len(), then create key = Vec::with_capacity(total_len), push ACCOUNT_STORAGE_PREFIX, extend_from_slice(invoker.whoami.as_bytes()), and extend(storage_set.key); apply the same pattern to the other occurrences that build keys (the blocks referencing invoker.whoami and storage_set.key at the later sites noted).

crates/app/tx/eth/tests/proptest_tests.rs (1)
42-46: Also run the model-preservation test through `TxKind::Create`.

The roundtrip properties cover both branches now, but `prop_model_preservation` still hardcodes an address and only asserts the `Some(Address)` path. Reusing `arb_tx_kind()` there would keep the `None`/creation case under the model-based assertions too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/tx/eth/tests/proptest_tests.rs` around lines 42-46: The test prop_model_preservation currently hardcodes an address and only asserts the Some(Address) path; update it to draw TxKind values from the existing arb_tx_kind() strategy so the Create (None address) case is also exercised. Replace the hardcoded TxKind/address input with a generated one via arb_tx_kind(), then update the assertions to handle both variants of TxKind (match on TxKind::Call(addr) to assert the Some(addr) behavior and TxKind::Create to assert the None/creation behavior) so model preservation is validated for both branches.

crates/app/sdk/testing/src/lib.rs (1)
118-145: Centralize the namespaced storage-key builder.

The `ACCOUNT_STORAGE_PREFIX + account_id + key` layout is repeated in set/remove/query here, and rebuilt again in STF tests. One helper keeps those call sites from drifting the next time the storage key shape changes.

Refactor sketch

```diff
 impl MockEnv {
+    fn account_storage_key(account_id: AccountId, key: &[u8]) -> Vec<u8> {
+        let mut out = vec![ACCOUNT_STORAGE_PREFIX];
+        out.extend_from_slice(&account_id.as_bytes());
+        out.extend_from_slice(key);
+        out
+    }
+
     pub fn new(whoami: AccountId, sender: AccountId) -> Self {
         MockEnv {
             whoami,
 @@
             StorageSetRequest::FUNCTION_IDENTIFIER => {
                 let storage_set: StorageSetRequest = request.get()?;
-
-                let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-                key.extend_from_slice(&self.whoami.as_bytes());
-                key.extend(storage_set.key);
+                let key = Self::account_storage_key(self.whoami, &storage_set.key);

                 self.state.insert(key, storage_set.value.as_vec()?);
 @@
             StorageRemoveRequest::FUNCTION_IDENTIFIER => {
                 let storage_remove: StorageRemoveRequest = request.get()?;
-                let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-                key.extend_from_slice(&self.whoami.as_bytes());
-                key.extend(storage_remove.key);
+                let key = Self::account_storage_key(self.whoami, &storage_remove.key);

                 self.state.remove(&key);
 @@
             StorageGetRequest::FUNCTION_IDENTIFIER => {
                 let storage_get: StorageGetRequest = request.get()?;
-
-                let mut key = vec![ACCOUNT_STORAGE_PREFIX];
-                key.extend_from_slice(&storage_get.account_id.as_bytes());
-                key.extend(storage_get.key);
+                let key = Self::account_storage_key(storage_get.account_id, &storage_get.key);

                 let value = self.state.get(&key).cloned();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/sdk/testing/src/lib.rs` around lines 118 - 145, Extract the repeated ACCOUNT_STORAGE_PREFIX + account_id + key construction into a single helper function (e.g., fn build_account_storage_key(account_id: &AccountId, key: &[u8]) -> Vec<u8>) and use it everywhere the key is built: the StorageSetRequest handler (where self.state.insert is called), the StorageRemoveRequest handler (where self.state.remove is called), and handle_storage_query (where storage_get key is built); also update the STF tests to call the new helper so all call sites use the same namespaced layout and avoid drift if the prefix/shape changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc/chain-index/src/provider.rs`:
- Around line 616-619: The clamp for the scan window currently sets actual_to =
to_block.min(from_block.saturating_add(max_blocks)) but the subsequent inclusive
loop over from_block..=actual_to scans 1001 blocks when max_blocks == 1000;
change the clamp to subtract one from max_blocks (e.g., use
from_block.saturating_add(max_blocks.saturating_sub(1)) or otherwise compute
actual_to = to_block.min(from_block + (max_blocks - 1))) so that the inclusive
range contains at most max_blocks blocks; update any related comments
referencing the 1000-block ceiling accordingly and keep the constants
(max_blocks) and the inclusive loop semantics intact.
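The inclusive-range arithmetic behind that fix can be checked with a tiny stand-in; `clamp_scan_window` is a hypothetical helper, not the provider's real function:

```rust
// The inclusive range from_block..=actual_to holds (actual_to - from_block + 1)
// blocks, so the clamp must add max_blocks - 1, not max_blocks.
fn clamp_scan_window(from_block: u64, to_block: u64, max_blocks: u64) -> u64 {
    to_block.min(from_block.saturating_add(max_blocks.saturating_sub(1)))
}
```

With `max_blocks == 1000` and `from_block == 0`, the clamp yields 999 and the inclusive loop visits exactly 1000 blocks; the original `from_block + max_blocks` form would visit 1001.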
In `@crates/rpc/eth-jsonrpc/src/server.rs`:
- Around line 518-527: The code now enforces MAX_FEE_HISTORY_BLOCKS = 1024 by
returning an error when block_count > 1024, so update the failing test
test_fee_history_capped_at_1024 to reflect the new contract: change the
expectation for requests with block_count > 1024 to assert an InvalidParams/RPC
error (matching the error from the block_count check), and add/ensure a passing
test case that requests block_count == 1024 still succeeds; reference the
constants/logic around MAX_FEE_HISTORY_BLOCKS and the block_count validation in
fee_history to guide the test changes.
In `@crates/rpc/evnode/src/server.rs`:
- Around line 92-93: The env var reading for EVOLVE_EVNODE_AUTH_TOKEN currently
uses std::env::var(...).ok(), which leaves Some("") for empty strings and lets
the server start with an empty token; change the initialization of auth_token
(and any similar reads used in serve_with_shutdown and the other noted
locations) to normalize empty strings to None by mapping empty -> None (e.g.,
fetch var, trim, then if trimmed.is_empty() return None else Some(value));
ensure the startup guards and require_auth logic check for None to enforce a
non-empty secret and apply the same normalization in the serve_with_shutdown
code paths referenced.
- Around line 65-82: EvnodeServerConfig currently derives Debug and will leak
the shared-secret in auth_token; replace the derived Debug with a custom
fmt::Debug implementation for EvnodeServerConfig that omits or redacts
auth_token (e.g., show "<REDACTED>" or only a boolean indicating presence) while
preserving debug output for the other fields, ensuring the auth_token field is
never printed when EvnodeServerConfig is formatted.
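Both evnode points can be sketched together, under stated assumptions: `normalize_auth_token` is a hypothetical helper, and the config struct here is a stand-in with only enough fields to show the redacted `Debug`:

```rust
use std::fmt;

// Normalize the env value: empty or whitespace-only becomes None, so the
// startup guard treats "unset" and "" identically.
fn normalize_auth_token(raw: Option<String>) -> Option<String> {
    raw.and_then(|v| {
        let trimmed = v.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}

// Stand-in config: only the fields needed to illustrate redaction.
struct EvnodeServerConfig {
    listen_addr: String,
    auth_token: Option<String>,
}

// Hand-written Debug that never prints the shared secret, only its presence.
impl fmt::Debug for EvnodeServerConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("EvnodeServerConfig")
            .field("listen_addr", &self.listen_addr)
            .field("auth_token", &self.auth_token.as_ref().map(|_| "<REDACTED>"))
            .finish()
    }
}
```

The `require_auth` checks can then rely on `Option::is_none()` alone, since the empty-string case has already been normalized away.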
In `@crates/rpc/grpc/src/conversion.rs`:
- Around line 324-343: The conversion incorrectly treats a present-but-malformed
req.from as absent; update proto_to_call_request so that req.from is validated
and any malformed encoding causes the whole conversion to fail (return None)
instead of silently becoming None. Specifically, in proto_to_call_request change
the from handling (currently using req.from.as_ref().and_then(proto_to_address))
to explicitly check req.from: if None then set RpcCallRequest.from to None, but
if Some(_) pass it to proto_to_address and propagate None from proto_to_address
to cause proto_to_call_request to return None; reference proto_to_call_request,
RpcCallRequest and req.from when making the change.
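The absent-vs-malformed distinction amounts to this shape; `convert_from` is hypothetical, with a 20-byte address standing in for whatever `proto_to_address` returns:

```rust
// Outer Option: did the conversion succeed? Inner Option: was `from` present?
// A present-but-malformed value fails the whole conversion (outer None)
// instead of silently degrading to "absent" (Some(None)).
fn convert_from(from: Option<&[u8]>) -> Option<Option<[u8; 20]>> {
    match from {
        None => Some(None),
        Some(bytes) => {
            let addr: [u8; 20] = bytes.try_into().ok()?;
            Some(Some(addr))
        }
    }
}
```

The original `req.from.as_ref().and_then(proto_to_address)` collapses the two `None`s into one, which is exactly the ambiguity the comment flags.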
In `@crates/rpc/grpc/src/services/execution.rs`:
- Around line 347-348: The current chain collapses missing vs malformed call
requests; first check for presence and return a distinct missing-request error
(e.g., use ok_or_else(|| GrpcError::InvalidArgument("Missing call
request".to_string())) on the Option), then in a separate step run
proto_to_call_request and map any parsing failures to a
malformed/invalid-request error (e.g., map_err to
GrpcError::InvalidArgument("Invalid call request".to_string())). Apply the same
two-step split in both the call handler and estimate_gas handler (referencing
proto_to_call_request, GrpcError::InvalidArgument, and the estimate_gas method)
so tests like test_call_rejects_missing_call_request continue to pass.
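The two-step split looks roughly like this; the `GrpcError` enum, the string parse, and `parse_call` are stand-ins for the real proto types and `proto_to_call_request`:

```rust
#[derive(Debug, PartialEq)]
enum GrpcError {
    InvalidArgument(String),
}

// Step 1: presence check with its own error. Step 2: conversion with a
// distinct error, so "missing" and "malformed" stay observable to callers.
fn parse_call(req: Option<&str>) -> Result<u64, GrpcError> {
    let raw = req.ok_or_else(|| GrpcError::InvalidArgument("Missing call request".to_string()))?;
    raw.parse::<u64>()
        .map_err(|_| GrpcError::InvalidArgument("Invalid call request".to_string()))
}
```

Applying the same split in both the call and estimate_gas handlers keeps the two error messages distinct for tests like test_call_rejects_missing_call_request.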
In `@crates/storage/src/warming.rs`:
- Around line 335-342: The fault injector is returning an out-of-range error
code (ErrorCode::new(0xFF)) inside the ReadonlyKV impl for FaultingStorage;
change the injected error to a system-range code such as ErrorCode::new(0x40) in
the get method (the branch where key == self.fault_key.as_slice()) so the test
uses a valid system-range error; alternatively, register a named system error
using the define_error! macro and return that error from FaultingStorage::get.
---
Outside diff comments:
In `@crates/app/tx/eth/src/mempool.rs`:
- Around line 491-503: When constructing the TxContext in mempool.rs (the Self {
... } block that currently sets sender_key: wire.sender_key), do not trust
wire.sender_key for EOA payloads: detect EOA cases (e.g., via sender_type or
payload kind), derive the canonical sender_key from the verified sender
address/signature (use the verified sender from sender_resolution or derive from
authentication_payload/signature verification), and set sender_key to that
derived value (or reject with an error if the provided wire.sender_key
mismatches); for non-EOA cases keep using wire.sender_key. Replace the direct
assignment sender_key: wire.sender_key with this conditional
derivation/validation logic so nonce/replacement behavior cannot be split by a
fake wire.sender_key.
---
Duplicate comments:
In `@crates/rpc/chain-index/src/index.rs`:
- Around line 158-231: init_schema() only creates receipts.revert_reason for
fresh DBs but not existing DBs, so add a migration step that checks for and
alters the receipts table to add the revert_reason TEXT column if missing before
any code paths that call get_receipt() or insert_receipt(); implement this in
the same module (chain-index/src/index.rs) alongside init_schema() and the
startup migration logic (the code paths that open the writer lock and call
init_schema()/migrations), e.g., detect the column via PRAGMA
table_info('receipts') and run "ALTER TABLE receipts ADD COLUMN revert_reason
TEXT" when absent, ensuring the migration runs early during initialization so
get_receipt()/insert_receipt() will not fail on upgraded databases.
---
Nitpick comments:
In `@crates/app/sdk/testing/src/lib.rs`:
- Around line 118-145: Extract the repeated ACCOUNT_STORAGE_PREFIX + account_id
+ key construction into a single helper function (e.g., fn
build_account_storage_key(account_id: &AccountId, key: &[u8]) -> Vec<u8>) and
use it everywhere the key is built: the StorageSetRequest handler (where
self.state.insert is called), the StorageRemoveRequest handler (where
self.state.remove is called), and handle_storage_query (where storage_get key is
built); also update the STF tests to call the new helper so all call sites use
the same namespaced layout and avoid drift if the prefix/shape changes.
In `@crates/app/stf/src/handlers.rs`:
- Around line 123-125: The key builders allocate repeatedly; preallocate exact
capacity for the Vec before pushing/appending to avoid reallocations: compute
total_len = 1 (for ACCOUNT_STORAGE_PREFIX) + invoker.whoami.as_bytes().len() +
storage_set.key.len(), then create key = Vec::with_capacity(total_len) and push
ACCOUNT_STORAGE_PREFIX, extend_from_slice(invoker.whoami.as_bytes()), and
extend(storage_set.key); apply the same pattern for the other occurrences that
build keys (the blocks referencing invoker.whoami and storage_set.key at the
later sites noted).
In `@crates/app/tx/eth/tests/proptest_tests.rs`:
- Around line 42-46: The test prop_model_preservation currently hardcodes an
address and only asserts the Some(Address) path; update it to draw TxKind values
from the existing arb_tx_kind() strategy so the Create (None address) case is
also exercised. Locate prop_model_preservation and replace the hardcoded
TxKind/address input with a generated one via arb_tx_kind(), then update the
assertions to handle both variants of TxKind (match on TxKind::Call(addr) to
assert Some(addr) behavior and TxKind::Create to assert the None/creation
behavior) so model-preservation is validated for both branches.
In `@crates/rpc/chain-index/src/index.rs`:
- Around line 158-233: The init_schema function is too large and mixes
fresh-schema creation with migration steps; split it into smaller functions such
as create_tables, create_indexes, and run_migrations and have init_schema simply
call them. Move the big SQL batch out of init_schema (e.g., into create_tables
and create_indexes as separate execute_batch calls or string constants), then
implement run_migrations to apply incremental ALTER TABLEs (for example adding
the revert_reason column) so schema evolution is separate from initial creation;
update references to self.init_schema to call the new methods (look for the
init_schema function, and create new methods named create_tables,
create_indexes, run_migrations on the same impl to keep locking via
self.writer.lock().unwrap() consistent).
In `@crates/rpc/chain-index/src/provider.rs`:
- Around line 599-700: The function get_logs_for_block_range is doing too many
things; extract small helpers for clarity and testability: create
matches_address(filter: &LogFilter, stored_log: &StoredLog) -> bool to
encapsulate the FilterAddress matching, matches_topics(filter: &LogFilter,
stored_log: &StoredLog) -> bool to encapsulate FilterTopic logic, and a
per-block scanner collect_logs_for_block(index: &IndexType, block_num: u64,
block_hash: H256, tx_hashes: &[TxHash], filter: &LogFilter, max_logs: usize,
result: &mut Vec<RpcLog>) -> Result<(), RpcError> (or similar) to iterate
receipts/transactions, apply the two predicate helpers, enforce MAX_LOGS and
push RpcLog entries; then simplify get_logs_for_block_range to validate range,
iterate blocks, call get_block/get_block_transactions and delegate the per-block
work to collect_logs_for_block so the main function stays small and the
predicates are individually testable.
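The positional topic matching that a `matches_topics` helper would encapsulate can be sketched with stand-in types; the real code uses the crate's `FilterTopic`/`StoredLog` types, and `Topic` here is a placeholder:

```rust
// Stand-in topic type; real Ethereum topics are 32-byte hashes.
type Topic = u64;

/// eth_getLogs-style positional matching: each filter slot is either a
/// wildcard (None) or a set of accepted values (Some).
fn matches_topics(filter: &[Option<Vec<Topic>>], log_topics: &[Topic]) -> bool {
    filter.iter().enumerate().all(|(i, slot)| match slot {
        None => true,
        Some(allowed) => log_topics.get(i).map_or(false, |t| allowed.contains(t)),
    })
}

fn main() {
    // Wildcard in slot 0, restricted set in slot 1.
    let filter = vec![None, Some(vec![7, 8])];
    assert!(matches_topics(&filter, &[1, 7]));
    assert!(!matches_topics(&filter, &[1, 9]));
    // A log with fewer topics than the filter cannot satisfy a non-wildcard slot.
    assert!(!matches_topics(&filter, &[1]));
}
```

A `matches_address` helper would follow the same shape, which is what makes the two predicates easy to test in isolation.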
In `@crates/rpc/eth-jsonrpc/src/server.rs`:
- Around line 572-579: Extract the repeated literal 128 * 1024 into a single
module-level constant (e.g. const MAX_RPC_INPUT_SIZE: usize = 128 * 1024) and
replace the local MAX_SHA3_INPUT and any other occurrences of the literal in RPC
handlers with that constant; update the check in the sha3 handler (the block
using data.len() > MAX_SHA3_INPUT) to use MAX_RPC_INPUT_SIZE and change other
RPC methods that repeat the value to reference the new constant so all
request-size limits stay consolidated.
In `@crates/storage/src/cache.rs`:
- Around line 535-555: Add a unit test that asserts prefetching which returns
Err results in a cache miss: create a ShardedDbCache via
ShardedDbCache::with_defaults, prepare keys (e.g., same shard and different
shard), call cache.prefetch_keys(&keys, |key| -> Result<Option<Vec<u8>>, ()> {
Err(()) }) to simulate the error path, and then assert that
cache.get(key).is_none() for each key to lock the contract that Err(_) yields no
cached entry; reference ShardedDbCache::with_defaults, prefetch_keys, and get in
the test.
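The contract the suggested test locks in — a loader `Err` must never populate the cache — can be illustrated with a minimal stand-in cache (a plain `HashMap`, not the real `ShardedDbCache`):

```rust
use std::collections::HashMap;

/// Minimal stand-in for the sharded cache: only the prefetch contract matters.
struct Cache {
    map: HashMap<Vec<u8>, Vec<u8>>,
}

impl Cache {
    fn new() -> Self {
        Cache { map: HashMap::new() }
    }

    /// Load each key through `loader`; `Err` and `Ok(None)` results must
    /// leave no cached entry behind.
    fn prefetch_keys<E>(
        &mut self,
        keys: &[Vec<u8>],
        loader: impl Fn(&[u8]) -> Result<Option<Vec<u8>>, E>,
    ) {
        for key in keys {
            if let Ok(Some(value)) = loader(key) {
                self.map.insert(key.clone(), value);
            }
        }
    }

    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.map.get(key)
    }
}

fn main() {
    let mut cache = Cache::new();
    let keys = vec![vec![1u8], vec![2u8]];

    // Error path: nothing gets cached.
    cache.prefetch_keys(&keys, |_key| -> Result<Option<Vec<u8>>, ()> { Err(()) });
    for key in &keys {
        assert!(cache.get(key).is_none());
    }

    // Success path: values land in the cache.
    cache.prefetch_keys(&keys, |key| -> Result<Option<Vec<u8>>, ()> { Ok(Some(key.to_vec())) });
    assert_eq!(cache.get(&[1u8]), Some(&vec![1u8]));
}
```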
In `@crates/storage/src/qmdb_impl.rs`:
- Around line 457-458: The comment and explicit drop of keys_to_invalidate are
misleading and unnecessary: the loop uses for key in &keys_to_invalidate which
borrows the Vec rather than consuming it, so remove the explicit
drop(keys_to_invalidate) call and replace the comment with a concise note that
the Vec is only borrowed by the loop and will be dropped automatically at scope
end; update references around the for key in &keys_to_invalidate iteration in
qmdb_impl.rs accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cfea20b3-98b4-43dd-a427-5308f489eef3
📒 Files selected for processing (32)
- crates/app/node/src/config.rs
- crates/app/sdk/collections/Cargo.toml
- crates/app/sdk/collections/src/prop_tests.rs
- crates/app/sdk/core/src/runtime_api.rs
- crates/app/sdk/testing/Cargo.toml
- crates/app/sdk/testing/src/lib.rs
- crates/app/sdk/testing/src/proptest_config.rs
- crates/app/stf/Cargo.toml
- crates/app/stf/src/execution_state.rs
- crates/app/stf/src/gas.rs
- crates/app/stf/src/handlers.rs
- crates/app/stf/src/lib.rs
- crates/app/tx/eth/Cargo.toml
- crates/app/tx/eth/src/mempool.rs
- crates/app/tx/eth/tests/proptest_tests.rs
- crates/rpc/chain-index/Cargo.toml
- crates/rpc/chain-index/src/index.rs
- crates/rpc/chain-index/src/provider.rs
- crates/rpc/eth-jsonrpc/src/lib.rs
- crates/rpc/eth-jsonrpc/src/server.rs
- crates/rpc/evnode/examples/run_server.rs
- crates/rpc/evnode/src/lib.rs
- crates/rpc/evnode/src/runner.rs
- crates/rpc/evnode/src/server.rs
- crates/rpc/evnode/src/service.rs
- crates/rpc/grpc/src/conversion.rs
- crates/rpc/grpc/src/services/execution.rs
- crates/rpc/types/Cargo.toml
- crates/rpc/types/src/lib.rs
- crates/storage/src/cache.rs
- crates/storage/src/qmdb_impl.rs
- crates/storage/src/warming.rs
✅ Files skipped from review due to trivial changes (1)
- crates/rpc/eth-jsonrpc/src/lib.rs
```rust
impl ReadonlyKV for FaultingStorage {
    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, ErrorCode> {
        if key == self.fault_key.as_slice() {
            Err(ErrorCode::new(0xFF))
        } else {
            self.inner.get(key)
        }
    }
```
Use an in-range error code in the fault injector.
Line 338 uses ErrorCode::new(0xFF), which is outside the configured ranges. Use a system-range value (for example 0x40) for this test fault.
Suggested patch

```diff
- Err(ErrorCode::new(0xFF))
+ Err(ErrorCode::new(0x40))
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
impl ReadonlyKV for FaultingStorage {
    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, ErrorCode> {
        if key == self.fault_key.as_slice() {
            Err(ErrorCode::new(0x40))
        } else {
            self.inner.get(key)
        }
    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/storage/src/warming.rs` around lines 335 - 342, The fault injector is
returning an out-of-range error code (ErrorCode::new(0xFF)) inside the
ReadonlyKV impl for FaultingStorage; change the injected error to a system-range
code such as ErrorCode::new(0x40) in the get method (the branch where key ==
self.fault_key.as_slice()) so the test uses a valid system-range error;
alternatively, register a named system error using the define_error! macro and
return that error from FaultingStorage::get.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc/eth-jsonrpc/src/server.rs`:
- Around line 1226-1233: Formatting is failing; run rustfmt to fix whitespace
around the call to EthApiServer::fee_history (the block that assigns let err =
EthApiServer::fee_history(...).await.unwrap_err();), run cargo fmt --all (or
just cargo fmt --all --edition 2021) then commit only the formatting changes so
the fmt-check in CI passes and "just quality" succeeds.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8767828-8b96-40c6-8845-1a89338efcc2
📒 Files selected for processing (3)
- crates/rpc/eth-jsonrpc/src/server.rs
- crates/rpc/grpc/src/conversion.rs
- crates/rpc/grpc/src/services/execution.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/rpc/grpc/src/conversion.rs
- crates/rpc/grpc/src/services/execution.rs
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/rpc/chain-index/src/querier.rs (1)
41-42: ⚠️ Potential issue | 🟡 Minor — Update key-format docs to match the new prefixed layout
The doc comment still describes unprefixed keys, but the implementation now uses `ACCOUNT_STORAGE_PREFIX`. Please update the nonce/balance format docs to avoid future regressions.

Suggested doc fix

```diff
-/// - Nonce: `account_id_bytes ++ [0]` (EthEoaAccount storage prefix 0)
-/// - Balance: `token_id_bytes ++ [1] ++ encode(account_id)` (Token storage prefix 1)
+/// - Nonce: `[ACCOUNT_STORAGE_PREFIX] ++ account_id_bytes ++ [0]`
+///   (EthEoaAccount storage prefix 0)
+/// - Balance: `[ACCOUNT_STORAGE_PREFIX] ++ token_id_bytes ++ [1] ++ encode(account_id)`
+///   (Token storage prefix 1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/querier.rs` around lines 41 - 42, The doc comment describing nonce/balance key formats is out of date—update the comment near the key-format docs in querier.rs to reflect the new prefixed layout using ACCOUNT_STORAGE_PREFIX (i.e. include the account/storage prefix bytes in the described key formats). Specifically, change the Nonce and Balance descriptions to show that keys are prefixed with ACCOUNT_STORAGE_PREFIX (and any sub-prefix bytes like 0/1 used for EthEoaAccount vs Token) so they match the implementation that builds keys with ACCOUNT_STORAGE_PREFIX when computing nonce and balance keys.crates/app/tx/eth/src/eoa_registry.rs (1)
290-487: ⚠️ Potential issue | 🟡 Minor — Unit tests in this file don't cover the `_in_storage` functions.
The three `_in_storage` functions (`lookup_account_id_in_storage`, `lookup_contract_account_id_in_storage`, `lookup_address_in_storage`) use `ACCOUNT_STORAGE_PREFIX` but have incomplete test coverage:

- `lookup_account_id_in_storage` is exercised in e2e tests (mempool_e2e.rs)
- `lookup_contract_account_id_in_storage` and `lookup_address_in_storage` lack unit or e2e test coverage

Add unit tests in this module to cover the `_in_storage` functions, particularly `lookup_address_in_storage` and `lookup_contract_account_id_in_storage`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/tx/eth/src/eoa_registry.rs` around lines 290 - 487, Add unit tests that directly exercise lookup_address_in_storage and lookup_contract_account_id_in_storage by seeding MockEnv.storage with the expected account-scoped keys using ACCOUNT_STORAGE_PREFIX and the existing key-builder helpers (e.g., id_to_addr_key and the contract-id equivalent) via MockEnv::account_scoped_key, serializing values with Message::new(...).into_bytes(), then calling lookup_address_in_storage(address, &mut env) and lookup_contract_account_id_in_storage(contract_id, &mut env) and asserting Some(expected) for present entries and None for missing entries; include both presence and absence cases to fully cover these functions.
🧹 Nitpick comments (4)
crates/rpc/chain-index/src/querier.rs (1)
57-59: Preallocate key buffers in hot query paths
`read_nonce` and `read_balance` rebuild keys on every request; preallocating capacity avoids incremental reallocations.

Suggested refactor

```diff
 fn read_nonce(&self, account_id: AccountId) -> Result<u64, RpcError> {
-    let mut key = vec![ACCOUNT_STORAGE_PREFIX];
+    let mut key = Vec::with_capacity(1 + account_id.as_bytes().len() + 1);
+    key.push(ACCOUNT_STORAGE_PREFIX);
     key.extend_from_slice(&account_id.as_bytes());
     key.push(0u8); // EthEoaAccount::nonce storage prefix
     match self
         .storage
         .get(&key)
@@
 fn read_balance(&self, account_id: AccountId) -> Result<u128, RpcError> {
-    let mut key = vec![ACCOUNT_STORAGE_PREFIX];
+    let encoded_account_id = account_id
+        .encode()
+        .map_err(|e| RpcError::InternalError(format!("encode account id: {:?}", e)))?;
+    let mut key = Vec::with_capacity(
+        1 + self.token_account_id.as_bytes().len() + 1 + encoded_account_id.len(),
+    );
+    key.push(ACCOUNT_STORAGE_PREFIX);
     key.extend_from_slice(&self.token_account_id.as_bytes());
     key.push(1u8); // Token::balances storage prefix
-    key.extend(
-        account_id
-            .encode()
-            .map_err(|e| RpcError::InternalError(format!("encode account id: {:?}", e)))?,
-    );
+    key.extend(encoded_account_id);
```

As per coding guidelines, "Minimize memory allocation, preallocate when possible, and avoid hot-loop allocations in Rust code."
Also applies to: 73-80
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/querier.rs` around lines 57 - 59, The key construction in read_nonce and read_balance currently grows the Vec incrementally (starting from vec![ACCOUNT_STORAGE_PREFIX] then extend_from_slice and push) causing reallocations on hot paths; change to preallocate the buffer capacity (e.g., use Vec::with_capacity(1 + account_id.as_bytes().len() + 1)) and then push ACCOUNT_STORAGE_PREFIX, extend_from_slice(account_id.as_bytes()), and push the final 0u8, so the Vec never needs to reallocate; apply the same preallocation pattern for the other key builds referenced around the 73-80 region to eliminate repeated allocations in these hot query paths (referencing functions read_nonce, read_balance and the local variable key and constant ACCOUNT_STORAGE_PREFIX).crates/app/tx/eth/src/eoa_registry.rs (1)
92-103: Storage key construction is consistent with the rest of the codebase.
The key layout `[ACCOUNT_STORAGE_PREFIX] + [account_id bytes] + [suffix key]` matches the pattern used in handlers.rs and test scaffolding, ensuring reads align with writes.

Consider extracting a helper to reduce duplication across the three `_in_storage` functions:

♻️ Optional: Extract a helper for the repeated key construction

```diff
+fn storage_key_for_runtime(suffix: &[u8]) -> Vec<u8> {
+    let mut full_key = vec![ACCOUNT_STORAGE_PREFIX];
+    full_key.extend_from_slice(&RUNTIME_ACCOUNT_ID.as_bytes());
+    full_key.extend_from_slice(suffix);
+    full_key
+}
+
 pub fn lookup_account_id_in_storage<S: ReadonlyKV>(
     storage: &S,
     address: Address,
 ) -> SdkResult<Option<AccountId>> {
-    let mut full_key = vec![ACCOUNT_STORAGE_PREFIX];
-    full_key.extend_from_slice(&RUNTIME_ACCOUNT_ID.as_bytes());
-    full_key.extend_from_slice(&addr_to_id_key(address));
+    let full_key = storage_key_for_runtime(&addr_to_id_key(address));
     match storage.get(&full_key)? {
```

Also applies to: 123-134, 136-150
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/app/tx/eth/src/eoa_registry.rs` around lines 92 - 103, The key construction logic ([ACCOUNT_STORAGE_PREFIX] + RUNTIME_ACCOUNT_ID.as_bytes() + addr_to_id_key(address)) is duplicated across the three *_in_storage functions (e.g., lookup_account_id_in_storage); extract a small helper like construct_account_storage_key(address: Address) -> Vec<u8> (or construct_account_addr_key) that returns the full_key Vec<u8>, replace the inline vec![ACCOUNT_STORAGE_PREFIX] + extend_from_slice calls in lookup_account_id_in_storage and the other two _in_storage functions with calls to that helper, and keep the existing storage.get(...) and decode flow unchanged.bin/testapp/src/lib.rs (1)
416-425: Consider extracting this storage-key builder into a shared test helper.
The same token-balance key construction is duplicated across test modules; centralizing it reduces future layout-drift bugs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/testapp/src/lib.rs` around lines 416 - 425, The token-balance storage-key assembly inside read_token_balance is duplicated; extract the key-building logic into a shared test helper (e.g., a function named build_token_balance_key or token_balance_key) that takes token_account_id: AccountId and account_id: AccountId and returns Vec<u8> using evolve_core::runtime_api::ACCOUNT_STORAGE_PREFIX, the token_account_id bytes, the trailing 1u8, and account_id.encode(); replace the inline construction in read_token_balance and other test modules to call this helper so all key layout lives in one place.crates/testing/simulator/src/eth_eoa.rs (1)
39-49: Preallocate key buffers to avoid avoidable reallocations.
These keys have a fixed final size (1 + 32 + 1), so reserving upfront is a simple win.

Proposed refactor

```diff
-    let mut nonce_key = vec![ACCOUNT_STORAGE_PREFIX];
+    let mut nonce_key = Vec::with_capacity(34);
+    nonce_key.push(ACCOUNT_STORAGE_PREFIX);
     nonce_key.extend_from_slice(&account_id.as_bytes());
     nonce_key.push(0u8);
@@
-    let mut addr_key = vec![ACCOUNT_STORAGE_PREFIX];
+    let mut addr_key = Vec::with_capacity(34);
+    addr_key.push(ACCOUNT_STORAGE_PREFIX);
     addr_key.extend_from_slice(&account_id.as_bytes());
     addr_key.push(1u8);
```

As per coding guidelines, "Minimize memory allocation, preallocate when possible, and avoid hot-loop allocations in Rust code."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/testing/simulator/src/eth_eoa.rs` around lines 39 - 49, The nonce_key and addr_key buffers are being grown via repeated pushes/extends causing reallocations; preallocate their capacity before writing to them. Replace the existing creations of nonce_key and addr_key by constructing them with Vec::with_capacity(1 + account_id.as_bytes().len() + 1) (or the concrete fixed size if account_id is always 32 bytes), then extend_from_slice(&account_id.as_bytes()) and push the final byte as before; ensure you still prepend ACCOUNT_STORAGE_PREFIX and produce the same byte sequence for nonce_key and addr_key.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc/eth-jsonrpc/src/server.rs`:
- Around line 38-50: The txload binary currently defaults its signing chain ID
to 1, which mismatches RpcServerConfig::default() (DEFAULT_CHAIN_ID = 900_901);
update the CLI/default for txload's chain_id to 900_901 by changing the argument
attribute default_value_t (the default for the chain_id flag in
bin/txload/src/main.rs) from 1 to 900_901 so signed transactions use the same
chain ID as RpcServerConfig::default(); alternatively, reference the crate
constant DEFAULT_CHAIN_ID where possible to keep them in sync.
---
Outside diff comments:
In `@crates/app/tx/eth/src/eoa_registry.rs`:
- Around line 290-487: Add unit tests that directly exercise
lookup_address_in_storage and lookup_contract_account_id_in_storage by seeding
MockEnv.storage with the expected account-scoped keys using
ACCOUNT_STORAGE_PREFIX and the existing key-builder helpers (e.g.,
id_to_addr_key and the contract-id equivalent) via MockEnv::account_scoped_key,
serializing values with Message::new(...).into_bytes(), then calling
lookup_address_in_storage(address, &mut env) and
lookup_contract_account_id_in_storage(contract_id, &mut env) and asserting
Some(expected) for present entries and None for missing entries; include both
presence and absence cases to fully cover these functions.
In `@crates/rpc/chain-index/src/querier.rs`:
- Around line 41-42: The doc comment describing nonce/balance key formats is out
of date—update the comment near the key-format docs in querier.rs to reflect the
new prefixed layout using ACCOUNT_STORAGE_PREFIX (i.e. include the
account/storage prefix bytes in the described key formats). Specifically, change
the Nonce and Balance descriptions to show that keys are prefixed with
ACCOUNT_STORAGE_PREFIX (and any sub-prefix bytes like 0/1 used for EthEoaAccount
vs Token) so they match the implementation that builds keys with
ACCOUNT_STORAGE_PREFIX when computing nonce and balance keys.
---
Nitpick comments:
In `@bin/testapp/src/lib.rs`:
- Around line 416-425: The token-balance storage-key assembly inside
read_token_balance is duplicated; extract the key-building logic into a shared
test helper (e.g., a function named build_token_balance_key or
token_balance_key) that takes token_account_id: AccountId and account_id:
AccountId and returns Vec<u8> using
evolve_core::runtime_api::ACCOUNT_STORAGE_PREFIX, the token_account_id bytes,
the trailing 1u8, and account_id.encode(); replace the inline construction in
read_token_balance and other test modules to call this helper so all key layout
lives in one place.
In `@crates/app/tx/eth/src/eoa_registry.rs`:
- Around line 92-103: The key construction logic ([ACCOUNT_STORAGE_PREFIX] +
RUNTIME_ACCOUNT_ID.as_bytes() + addr_to_id_key(address)) is duplicated across
the three *_in_storage functions (e.g., lookup_account_id_in_storage); extract a
small helper like construct_account_storage_key(address: Address) -> Vec<u8> (or
construct_account_addr_key) that returns the full_key Vec<u8>, replace the
inline vec![ACCOUNT_STORAGE_PREFIX] + extend_from_slice calls in
lookup_account_id_in_storage and the other two _in_storage functions with calls
to that helper, and keep the existing storage.get(...) and decode flow
unchanged.
In `@crates/rpc/chain-index/src/querier.rs`:
- Around line 57-59: The key construction in read_nonce and read_balance
currently grows the Vec incrementally (starting from
vec![ACCOUNT_STORAGE_PREFIX] then extend_from_slice and push) causing
reallocations on hot paths; change to preallocate the buffer capacity (e.g., use
Vec::with_capacity(1 + account_id.as_bytes().len() + 1)) and then push
ACCOUNT_STORAGE_PREFIX, extend_from_slice(account_id.as_bytes()), and push the
final 0u8, so the Vec never needs to reallocate; apply the same preallocation
pattern for the other key builds referenced around the 73-80 region to eliminate
repeated allocations in these hot query paths (referencing functions read_nonce,
read_balance and the local variable key and constant ACCOUNT_STORAGE_PREFIX).
In `@crates/testing/simulator/src/eth_eoa.rs`:
- Around line 39-49: The nonce_key and addr_key buffers are being grown via
repeated pushes/extends causing reallocations; preallocate their capacity before
writing to them. Replace the existing creations of nonce_key and addr_key by
constructing them with Vec::with_capacity(1 + account_id.as_bytes().len() + 1)
(or the concrete fixed size if account_id is always 32 bytes), then
extend_from_slice(&account_id.as_bytes()) and push the final byte as before;
ensure you still prepend ACCOUNT_STORAGE_PREFIX and produce the same byte
sequence for nonce_key and addr_key.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3edb89b1-619a-4257-8c08-f75df91cab04
📒 Files selected for processing (8)
- bin/testapp/src/lib.rs
- bin/testapp/src/main.rs
- bin/testapp/tests/mempool_e2e.rs
- crates/app/tx/eth/src/eoa_registry.rs
- crates/rpc/chain-index/src/querier.rs
- crates/rpc/eth-jsonrpc/src/server.rs
- crates/rpc/grpc/src/services/execution.rs
- crates/testing/simulator/src/eth_eoa.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/rpc/grpc/src/services/execution.rs
Actionable comments posted: 3
♻️ Duplicate comments (1)
crates/rpc/evnode/src/server.rs (1)
108-110: ⚠️ Potential issue | 🟠 Major — Blank-token normalization is still incomplete.
`EvnodeServerConfig::default()` now strips empty env vars, but explicit constructors can still pass `Some("")` (for example in crates/rpc/evnode/src/runner.rs). These guards only check `is_none()`, so a whitespace-only token bypasses the fail-fast path and still reaches `make_auth_interceptor`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/evnode/src/server.rs` around lines 108 - 110, The auth token handling must normalize whitespace-only strings to None so blank tokens don't bypass checks; update EvnodeServerConfig::default() already using .filter(|t| !t.trim().is_empty()) and apply the same normalization wherever an auth token Option<String> is constructed or inspected (e.g., the explicit constructors in crates/rpc/evnode/src/runner.rs and the locations referenced around make_auth_interceptor and the checks that currently only test is_none()). Implement a small helper (e.g., normalize_auth_token) or replace direct uses with .map(|s| s.trim().to_string()).filter(|t| !t.is_empty()) so Any Some("") or Some(" ") becomes None before reaching make_auth_interceptor.
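The normalization helper suggested above is small enough to sketch in full; `normalize_auth_token` is the hypothetical name from the prompt:

```rust
/// Treat blank or whitespace-only tokens as absent, so a caller passing
/// Some("") or Some("   ") is handled exactly like None.
fn normalize_auth_token(token: Option<String>) -> Option<String> {
    token
        .map(|s| s.trim().to_string())
        .filter(|t| !t.is_empty())
}

fn main() {
    assert_eq!(normalize_auth_token(None), None);
    assert_eq!(normalize_auth_token(Some("".to_string())), None);
    assert_eq!(normalize_auth_token(Some("   ".to_string())), None);
    assert_eq!(
        normalize_auth_token(Some(" secret ".to_string())),
        Some("secret".to_string())
    );
}
```

Running every auth-token `Option<String>` through one helper like this keeps the fail-fast check and `make_auth_interceptor` from ever seeing a blank token.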
🧹 Nitpick comments (3)
crates/rpc/chain-index/src/index.rs (1)
158-240: Refactor `init_schema` to stay within the function-size guideline.
`init_schema` now bundles full DDL + migration probing in one large method, which makes future schema evolution harder to reason about and test.
As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/index.rs` around lines 158 - 240, The init_schema function is too large; split its responsibilities by extracting the SQL DDL and the migration/probing logic into small private helpers to keep init_schema under 70 lines. Specifically, keep init_schema minimal (acquire conn via self.writer.lock().unwrap(), call create_tables(&conn) and apply_migrations(&conn) and return Ok(())); move the big CREATE TABLE/CREATE INDEX batch string and conn.execute_batch(...) into a new private method create_tables(&self, conn: &rusqlite::Connection) -> ChainIndexResult<()>, and move the ALTER TABLE migration probe into a separate private method apply_migrations(&self, conn: &rusqlite::Connection) that intentionally ignores the duplicate-column result. Ensure you reference and use the same conn variable and preserve error handling and return types.crates/rpc/chain-index/src/provider.rs (1)
599-700: Split `get_logs_for_block_range` into smaller helpers.
This function is now well beyond the 70-line limit and mixes range validation, fetching, filtering, and projection in one block.
As per coding guidelines "Keep functions to less than 70 lines to maintain bounded complexity".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/provider.rs` around lines 599 - 700, get_logs_for_block_range is too large and mixes validation, iteration, filtering and projection; split it into small helpers: extract the range validation into validate_block_range(from_block, to_block) (reuse the existing inverted-range check and max_blocks computation/actual_to), move block/tx/receipt fetching and iteration into iterate_blocks(from_block, actual_to, |block_num, block, tx_hashes| { ... }), implement log_matches_filter(&stored_log, &filter) to encapsulate the address and topics matching logic (use evolve_rpc_types::FilterAddress/FilterTopic and stored_log.topics/address comparisons), and create a push_log_if_within_limit(result: &mut Vec<RpcLog>, stored_log, block_num, block.hash, tx_hash, tx_idx, log_index) that enforces MAX_LOGS and calls stored_log.to_rpc_log; replace the big loop in get_logs_for_block_range with these helpers (keeping names get_logs_for_block_range, MAX_LOGS, stored_log.to_rpc_log, and the filter types to locate code).crates/rpc/evnode/src/server.rs (1)
36-52: Add colocated tests for the auth contract.
This file now owns public auth and startup-guard logic, but there's no `#[cfg(test)]` coverage here for valid tokens, missing/mismatched metadata, or blank-token rejection. Please add inline tests in crates/rpc/evnode/src/server.rs so these security-sensitive paths don't regress silently.
As per coding guidelines, "Place unit tests alongside implementation in the same Rust file".
Also applies to: 241-271, 280-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/evnode/src/server.rs` around lines 36 - 52, Add colocated unit tests under #[cfg(test)] in this file that exercise make_auth_interceptor and the auth contract using AUTH_TOKEN_METADATA_KEY: create Requests with (1) correct metadata matching expected_token and assert Ok(Request), (2) missing metadata and assert Err(Status::unauthenticated), (3) mismatched token and assert Err, and (4) blank/empty token metadata and assert rejection. In each test construct a Request<()> and attach metadata entries via Request::new + request.metadata().insert(...) or similar, call the returned closure from make_auth_interceptor(expected_token.clone()) and assert the Result variant and Status message where applicable to prevent regressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc/chain-index/src/index.rs`:
- Around line 233-238: The migration call currently swallows all errors by using
`let _ = conn.execute_batch("ALTER TABLE receipts ADD COLUMN revert_reason
TEXT;");`; instead, check the Result from `conn.execute_batch` and only ignore
the specific SQLite duplicate-column error (or detect presence of the
`revert_reason` column via `PRAGMA table_info('receipts')`) while propagating or
returning any other errors so startup fails properly; update the migration logic
around the `execute_batch` call in index.rs to match on the result (or pre-check
schema), log or return unexpected errors, and ensure `revert_reason` is created
only when needed without hiding I/O/lock/schema failures.
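One way to ignore only the duplicate-column case, sketched here with plain `String` errors rather than `rusqlite::Error` (SQLite reports an already-applied `ADD COLUMN` as "duplicate column name: <col>"):

```rust
/// SQLite reports a re-applied `ALTER TABLE ... ADD COLUMN` as
/// "duplicate column name: <col>"; only that error is safe to ignore.
fn is_duplicate_column_error(message: &str) -> bool {
    message.contains("duplicate column name")
}

/// Propagate every migration failure except the benign duplicate-column case.
fn apply_migration(result: Result<(), String>) -> Result<(), String> {
    match result {
        Ok(()) => Ok(()),
        Err(msg) if is_duplicate_column_error(&msg) => Ok(()),
        Err(msg) => Err(msg),
    }
}

fn main() {
    assert!(apply_migration(Ok(())).is_ok());
    assert!(apply_migration(Err("duplicate column name: revert_reason".to_string())).is_ok());
    assert!(apply_migration(Err("database is locked".to_string())).is_err());
}
```

The same outcome can be reached without string matching by probing `PRAGMA table_info('receipts')` for the column before issuing the `ALTER TABLE`.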
In `@crates/rpc/chain-index/src/provider.rs`:
- Around line 616-620: The code currently silently clamps the requested range by
computing actual_to = to_block.min(from_block.saturating_add(max_blocks - 1)),
which hides when a client requested more than max_blocks; instead detect when
the requested range length
(to_block.saturating_sub(from_block).saturating_add(1)) > max_blocks and return
a clear error (or a specific RPC error variant like RangeTooLarge) rather than
truncating, or implement proper pagination; update the logic around
max_blocks/actual_to (and any callers of the getLogs path) to surface this
error, and add/ reuse an appropriate error type/message so callers know their
requested block range exceeded the 1000 block limit.
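Rejecting an oversized range instead of clamping it can be sketched as below; `RangeError` and `MAX_BLOCKS` are illustrative names, not the crate's actual types:

```rust
/// Illustrative error for getLogs range validation.
#[derive(Debug, PartialEq)]
enum RangeError {
    Inverted,
    TooLarge { requested: u64, max: u64 },
}

const MAX_BLOCKS: u64 = 1000;

/// Return an explicit error (rather than silently truncating) when the
/// requested range is inverted or longer than MAX_BLOCKS.
fn validate_block_range(from_block: u64, to_block: u64) -> Result<(), RangeError> {
    if to_block < from_block {
        return Err(RangeError::Inverted);
    }
    let len = to_block - from_block + 1;
    if len > MAX_BLOCKS {
        return Err(RangeError::TooLarge { requested: len, max: MAX_BLOCKS });
    }
    Ok(())
}

fn main() {
    assert_eq!(validate_block_range(10, 5), Err(RangeError::Inverted));
    assert!(validate_block_range(0, 999).is_ok());
    assert_eq!(
        validate_block_range(0, 1000),
        Err(RangeError::TooLarge { requested: 1001, max: MAX_BLOCKS })
    );
}
```

Surfacing `TooLarge` lets the client choose between narrowing the range and paginating, instead of silently receiving a partial result.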
---
Duplicate comments:
In `@crates/rpc/evnode/src/server.rs`:
- Around line 108-110: The auth token handling must normalize whitespace-only
strings to None so blank tokens don't bypass checks; update
EvnodeServerConfig::default() already using .filter(|t| !t.trim().is_empty())
and apply the same normalization wherever an auth token Option<String> is
constructed or inspected (e.g., the explicit constructors in
crates/rpc/evnode/src/runner.rs and the locations referenced around
make_auth_interceptor and the checks that currently only test is_none()).
Implement a small helper (e.g., normalize_auth_token) or replace direct uses
with .map(|s| s.trim().to_string()).filter(|t| !t.is_empty()) so any Some("") or
Some(" ") becomes None before reaching make_auth_interceptor.
---
Nitpick comments:
In `@crates/rpc/chain-index/src/index.rs`:
- Around line 158-240: The init_schema function is too large; split its
responsibilities by extracting the SQL DDL and the migration/probing logic into
small private helpers to keep init_schema under 70 lines. Specifically, keep
init_schema minimal (acquire conn via self.writer.lock().unwrap(), call
create_tables(&conn) and apply_migrations(&conn) and return Ok(())); move the
big CREATE TABLE/CREATE INDEX batch string and conn.execute_batch(...) into a
new private method create_tables(&self, conn: &rusqlite::Connection) ->
ChainIndexResult<()>, and move the ALTER TABLE migration probe into a separate
private method apply_migrations(&self, conn: &rusqlite::Connection) that
intentionally ignores the duplicate-column result. Ensure you reference and use
the same conn variable and preserve error handling and return types.
In `@crates/rpc/chain-index/src/provider.rs`:
- Around line 599-700: get_logs_for_block_range is too large and mixes
validation, iteration, filtering and projection; split it into small helpers:
extract the range validation into validate_block_range(from_block, to_block)
(reuse the existing inverted-range check and max_blocks computation/actual_to),
move block/tx/receipt fetching and iteration into iterate_blocks(from_block,
actual_to, |block_num, block, tx_hashes| { ... }), implement
log_matches_filter(&stored_log, &filter) to encapsulate the address and topics
matching logic (use evolve_rpc_types::FilterAddress/FilterTopic and
stored_log.topics/address comparisons), and create a
push_log_if_within_limit(result: &mut Vec<RpcLog>, stored_log, block_num,
block.hash, tx_hash, tx_idx, log_index) that enforces MAX_LOGS and calls
stored_log.to_rpc_log; replace the big loop in get_logs_for_block_range with
these helpers (keeping names get_logs_for_block_range, MAX_LOGS,
stored_log.to_rpc_log, and the filter types to locate code).
In `@crates/rpc/evnode/src/server.rs`:
- Around line 36-52: Add colocated unit tests under #[cfg(test)] in this file
that exercise make_auth_interceptor and the auth contract using
AUTH_TOKEN_METADATA_KEY: create Requests with (1) correct metadata matching
expected_token and assert Ok(Request), (2) missing metadata and assert
Err(Status::unauthenticated), (3) mismatched token and assert Err, and (4)
blank/empty token metadata and assert rejection. In each test construct a
Request<()> and attach metadata entries via Request::new +
request.metadata().insert(...) or similar, call the returned closure from
make_auth_interceptor(expected_token.clone()) and assert the Result variant and
Status message where applicable to prevent regressions.
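The four cases the test prompt lists can be sketched against a minimal stand-in for tonic's metadata (a `HashMap` instead of `Request`/`Status`; the key name is illustrative):

```rust
use std::collections::HashMap;

// Illustrative metadata key; the real constant is AUTH_TOKEN_METADATA_KEY
// in the evnode server.
const AUTH_TOKEN_METADATA_KEY: &str = "authorization";

/// Accept only requests whose metadata carries a non-blank, exact token match.
fn check_auth(metadata: &HashMap<String, String>, expected_token: &str) -> Result<(), String> {
    match metadata.get(AUTH_TOKEN_METADATA_KEY) {
        Some(token) if !token.trim().is_empty() && token == expected_token => Ok(()),
        Some(_) => Err("unauthenticated: token mismatch or blank".to_string()),
        None => Err("unauthenticated: missing token".to_string()),
    }
}

fn main() {
    let mut md = HashMap::new();
    md.insert(AUTH_TOKEN_METADATA_KEY.to_string(), "secret".to_string());
    // (1) correct token, (2) missing metadata, (3) mismatched token, (4) blank token.
    assert!(check_auth(&md, "secret").is_ok());
    assert!(check_auth(&HashMap::new(), "secret").is_err());
    assert!(check_auth(&md, "other").is_err());
    md.insert(AUTH_TOKEN_METADATA_KEY.to_string(), "   ".to_string());
    assert!(check_auth(&md, "secret").is_err());
}
```

The real tests would exercise the closure returned by `make_auth_interceptor` with tonic `Request` values, but the accept/reject decisions are the same four branches shown here.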
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4bdd26ac-7b88-4e7e-bf82-39bfa91ce365
📒 Files selected for processing (4)
- crates/rpc/chain-index/src/index.rs
- crates/rpc/chain-index/src/provider.rs
- crates/rpc/evnode/src/server.rs
- crates/rpc/grpc/src/conversion.rs
Overview
Summary by CodeRabbit
New Features
Bug Fixes / Hardening
Chores