Conversation
Propagate ErrorCode information from failed TxResult responses through
the receipt pipeline so Ethereum tooling (ethers, viem, Foundry) can
surface revert reasons via the revertReason field.
Changes:
- RpcReceipt: add optional revert_reason field (skip_serializing_if None)
- RpcReceipt::success(): set revert_reason: None
- RpcReceipt::failure(): accept revert_reason: Option<String> parameter
- StoredReceipt: add optional revert_reason field
- StoredReceipt::to_rpc_receipt(): pass through revert_reason
- SQLite receipts table: add revert_reason TEXT column
- insert_receipt(): include revert_reason in INSERT
- get_receipt() SELECT: include receipts.revert_reason (column index 12)
- row_to_stored_receipt(): read revert_reason at index 12
- build_stored_receipt(): format ErrorCode(id=0x{id:04x}, arg={arg}) on Err
- Test helpers updated with revert_reason: None
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
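The `ErrorCode(id=0x{id:04x}, arg={arg})` formatting described above can be sketched outside the codebase. This is a TypeScript analogue of the Rust `format!` call, not the repo's actual API; the function name and inputs are illustrative:

```typescript
// Hypothetical analogue of build_stored_receipt()'s revert_reason
// derivation: a failed TxResult carries an error id and argument,
// rendered as `ErrorCode(id=0x{id:04x}, arg={arg})`.
function formatRevertReason(id: number, arg: string): string {
  const hexId = id.toString(16).padStart(4, "0"); // zero-padded, like {id:04x}
  return `ErrorCode(id=0x${hexId}, arg=${arg})`;
}

// A successful transaction leaves revert_reason unset (None), which the
// RPC layer then omits from the serialized receipt (skip_serializing_if).
```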
📝 Walkthrough

Adds STF query/simulation and RPC execution context plumbing, extends chain-index and StorageStateQuerier for code/storage reads, records revert reasons on receipts, introduces a testapp binary to print a token/runtime address, and adds an end-to-end Viem transfer script and docs updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Provider as ChainStateProvider
    participant Querier as StorageStateQuerier
    participant STF
    participant Storage
    Client->>Provider: call(request, block?)
    Provider->>Provider: ensure_rpc_compatibility()
    Provider->>Querier: call(request, block)
    Querier->>STF: simulate_call_tx(...) / execute_query(...)
    STF->>Storage: read state / code / storage slots
    Storage-->>STF: state data
    STF-->>Querier: TxResult (or query result)
    Querier-->>Provider: Bytes (call result)
    Provider-->>Client: Bytes
```

```mermaid
sequenceDiagram
    participant Executor
    participant TxFlow as Transaction Execution
    participant Builder as Receipt Builder
    participant Index as ChainIndex DB
    participant DB as Storage
    Executor->>TxFlow: process transaction
    TxFlow-->>Builder: TxResult (Err / Ok)
    Builder->>Builder: derive revert_reason from error (if any)
    Builder->>Index: build_stored_receipt(with revert_reason)
    Index->>DB: INSERT/REPLACE receipts (revert_reason TEXT)
    DB-->>Index: persisted
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: 1 passed, 1 warning, 1 inconclusive.
Actionable comments posted: 11
🧹 Nitpick comments (1)
bin/testapp/examples/print_token_address.rs (1)
31-51: Remove the unnecessary clone of `data_dir`. The closure pattern used in `Runner::start` supports capturing variables directly for use in async blocks without intermediate cloning. Other usages in the codebase (e.g., `bin/evd/tests/external_consensus_e2e.rs`) omit this clone, demonstrating the simpler idiom.

♻️ Suggested simplification

```diff
 Runner::new(runtime_config).start(move |context| {
-    let data_dir = data_dir.clone();
     async move {
         let storage = QmdbStorage::new(
             context,
             StorageConfig {
                 path: data_dir,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/testapp/examples/print_token_address.rs` around lines 31 - 51, The clone of data_dir is unnecessary — remove the intermediate let data_dir = data_dir.clone(); and capture data_dir directly into the closure passed to Runner::new(...).start(...) so the async move block uses the original data_dir when constructing StorageConfig for QmdbStorage::new; no other changes needed to derive_runtime_contract_address(...) or tx.send(...) usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/app/node/src/lib.rs`:
- Around line 519-525: The call in run_dev_node_with_rpc builds a
ChainStateProvider via ChainStateProvider::with_account_codes which leaves
mempool and verifier unset, causing
ensure_rpc_startup_compatibility(&state_provider, "run_dev_node_with_rpc") to
always fail; fix by either constructing the provider with a mempool-backed
variant (e.g., use the constructor or builder that wires in the mempool and
verifier before calling ensure_rpc_startup_compatibility) so the provider has
both mempool and verifier, or replace the call to
ensure_rpc_startup_compatibility here with the read-only startup check used
elsewhere for RPC-only runners so you perform a compatibility check that does
not require mempool/verifier when using with_account_codes.
- Around line 109-110: The generic bound currently forces Stf to implement
evolve_chain_index::RpcExecutionContext (via the Stf: StfExecutor<Tx, S, Codes>
+ evolve_chain_index::RpcExecutionContext + Send + Sync + 'static constraint),
which breaks callers that don't use RPC; remove
evolve_chain_index::RpcExecutionContext from the generic bounds on non-RPC
helpers (e.g., build_dev_node_with_mempool() and run_dev_node()) so they only
require StfExecutor<Tx, S, Codes> + Send + Sync + 'static, and instead add the
evolve_chain_index::RpcExecutionContext bound only to the RPC-enabled entry
points that actually use RPC functionality (or add a separate impl/overload that
includes the RpcExecutionContext bound for RPC paths). Ensure references to
StfExecutor, Stf, build_dev_node_with_mempool(), and run_dev_node() are updated
accordingly.
In `@crates/app/stf/src/lib.rs`:
- Around line 106-114: QueryContext::default() must not fabricate system-level
metadata (RUNTIME_ACCOUNT_ID and zeroed BlockContext); change the Default impl
for QueryContext so it does not return a system caller or synthetic block:
either (A) set sender to an unprivileged/neutral value (e.g.,
AccountId::default() or None if sender is Option) and set block to a
neutral/latest BlockContext if a helper (e.g., BlockContext::latest() or
BlockContext::neutral()) exists, or (B) remove/stop deriving Default so callers
must construct QueryContext explicitly; update the Default impl that touches
gas_limit, sender, funds, and block to follow one of these approaches instead of
using RUNTIME_ACCOUNT_ID and BlockContext::default().
In `@crates/rpc/chain-index/src/querier.rs`:
- Around line 345-379: run_query and run_exec currently always use &self.storage
(and get_code/get_storage_at ignore the block), causing historical block queries
to return live state; update both functions to select the correct storage
snapshot for the requested block (e.g., obtain a historic storage handle or
snapshot tied to block via a helper like storage_for_block(block) or similar)
and pass that snapshot into executor.execute_query and executor.simulate_call_tx
instead of &self.storage; if no historic snapshot exists, explicitly return an
error for non-latest block tags rather than falling back to live state; also
update get_code and get_storage_at to accept and use the block argument (or
reject non-latest) so all callers (run_query/run_exec) operate on consistent
block-scoped storage.
In `@crates/rpc/eth-jsonrpc/src/server.rs`:
- Line 933: In the match/arm that constructs RpcError::NotImplemented, remove
the unnecessary dereference of method: instead of returning
RpcError::NotImplemented(*method) use RpcError::NotImplemented(method); update
the match arm that currently reads RpcError::NotImplemented(method) =>
RpcError::NotImplemented(*method) to return the borrowed/coerced method directly
(symbol: RpcError::NotImplemented in server.rs).
In `@crates/rpc/evnode/src/runner.rs`:
- Around line 425-430: The code unconditionally constructs query_executor and
forces the generic bound Stf: RpcExecutionContext (via
StfExecutor/RpcExecutionContext) even when RPC is disabled; move the
query_executor construction and any types requiring RpcExecutionContext behind
the config.rpc.enabled branch (the branch that calls
start_external_consensus_rpc_server()) or extract JSON-RPC startup into a helper
function that carries the stronger bound, so that the runner and other non-RPC
paths do not require RpcExecutionContext; specifically relocate creation of
query_executor and any uses tied to RPC into the RPC-enabled block (the code
that calls start_external_consensus_rpc_server()) or create a new helper (e.g.,
start_jsonrpc_with_query_executor) with the RpcExecutionContext bound and call
it only when RPC is enabled.
In `@docs/scripts/viem-transfer-e2e.ts`:
- Around line 406-468: Move the try so it begins immediately after creating the
sandbox dir (the mkdtempSync(join(tmpdir(), ...)) call) and declare
sandbox-related variables and node in outer scope (e.g., let node: any |
undefined) before the try; then perform all setup (configPath, dataDir,
genesisPath, build/init/run steps, and the startNode call) inside that try,
assigning the result of startNode(...) to node instead of declaring a new const;
finally, consolidate cleanup in a single finally block that checks if node is
set and stops/kills it, and removes the sandboxDir unless KEEP_TEMP is true,
ensuring both partially started child processes and the temp directory are
always cleaned up on errors.
- Around line 84-99: The runCommand function currently only inspects
result.status and omits process spawn errors; update runCommand to check
result.error (the spawnSync Error) when result.status is null/undefined and
include result.error.message (and/or stack) in the thrown Error alongside
stdout/stderr and the label; reference the runCommand function and ensure the
thrown Error reports either the exit code when present or the spawn error
details (result.error) so startup ENOENT/EACCES failures are preserved in the
exception text.
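The fix described above can be sketched as a standalone helper. The name mirrors but is not the script's actual `runCommand`; it is a minimal sketch of surfacing `result.error` when `spawnSync` never ran the process:

```typescript
import { spawnSync } from "node:child_process";

// Hypothetical spawn wrapper that reports both non-zero exits and spawn
// failures (ENOENT, EACCES), where result.status is null and the real
// cause lives in result.error.
function runCommandChecked(binary: string, args: string[], label: string): string {
  const result = spawnSync(binary, args, { encoding: "utf8" });
  if (result.error) {
    throw new Error(`${label}: failed to spawn ${binary}: ${result.error.message}`);
  }
  if (result.status !== 0) {
    throw new Error(`${label}: exited with ${result.status}\n${result.stderr}`);
  }
  return result.stdout;
}
```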
- Around line 416-421: The script currently only builds the example
(TOKEN_ADDRESS_BINARY) but not the testapp binary (TESTAPP_BINARY), causing
execution to fail; update the runCargo invocation(s) in this block (where
runCargo is called) to also build the testapp binary—either add the flag "--bin
testapp" to the existing cargo build args or invoke runCargo a second time to
run `cargo build -p evolve_testapp --bin testapp` so both TOKEN_ADDRESS_BINARY
and TESTAPP_BINARY are produced before execution.
- Around line 215-235: The startNode function currently doesn't attach an
"error" listener to the spawned child, so spawn failures (binary not found,
permissions) aren't surfaced and waitForRpcReady()/waitForFirstBlock() can hang;
update startNode (the NodeHandle creation) to listen for the child's "error"
event and capture/store that Error on the returned NodeHandle (e.g., set a
spawnError property and expose getSpawnError()), and then add a fast-path check
at the top of waitForRpcReady() and waitForFirstBlock() that throws immediately
if node.getSpawnError() is non-null so tests fail fast with the real spawn
error.
---
Nitpick comments:
In `@bin/testapp/examples/print_token_address.rs`:
- Around line 31-51: The clone of data_dir is unnecessary — remove the
intermediate let data_dir = data_dir.clone(); and capture data_dir directly into
the closure passed to Runner::new(...).start(...) so the async move block uses
the original data_dir when constructing StorageConfig for QmdbStorage::new; no
other changes needed to derive_runtime_contract_address(...) or tx.send(...)
usage.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8d710eab-9f98-49bf-9ada-d1dd14fc6a81
⛔ Files ignored due to path filters (1)
docs/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
- bin/testapp/examples/print_token_address.rs
- crates/app/node/src/lib.rs
- crates/app/stf/src/invoker.rs
- crates/app/stf/src/lib.rs
- crates/rpc/chain-index/src/index.rs
- crates/rpc/chain-index/src/integration.rs
- crates/rpc/chain-index/src/lib.rs
- crates/rpc/chain-index/src/provider.rs
- crates/rpc/chain-index/src/querier.rs
- crates/rpc/chain-index/src/types.rs
- crates/rpc/eth-jsonrpc/src/error.rs
- crates/rpc/eth-jsonrpc/src/server.rs
- crates/rpc/evnode/src/runner.rs
- crates/rpc/types/src/receipt.rs
- docs/package.json
- docs/scripts/viem-transfer-e2e.ts
- docs/vocs.config.ts
💤 Files with no reviewable changes (1)
- docs/vocs.config.ts
```rust
    fn run_query(
        &self,
        request: &CallRequest,
        block: Option<u64>,
    ) -> Result<Option<TxResult>, RpcError> {
        let Some(target_account) = self.resolve_account_id(request.to)? else {
            return Ok(None);
        };
        let sender = self.resolve_sender_account_id(request.from.unwrap_or(Address::ZERO))?;
        let funds = self.call_request_funds(request)?;
        let invoke_request = Self::call_request_to_invoke_request(request);

        Ok(Some(self.executor.execute_query(
            &self.storage,
            self.account_codes.as_ref(),
            target_account,
            &invoke_request,
            QueryContext {
                gas_limit: Some(self.call_request_gas_limit(request)),
                sender,
                funds,
                block: Self::block_context(block),
            },
        )))
    }

    fn run_exec(&self, request: &CallRequest, block: Option<u64>) -> Result<TxResult, RpcError> {
        let synthetic_tx = self.build_synthetic_tx(request, block)?;
        Ok(self.executor.simulate_call_tx(
            &self.storage,
            self.account_codes.as_ref(),
            &synthetic_tx,
            Self::block_context(block),
        ))
    }
```
Block-tagged state queries still read the latest storage snapshot.
run_exec() / run_query() always operate on &self.storage, and get_code() / get_storage_at() ignore the block argument entirely. So eth_call, eth_getCode, and eth_getStorageAt for a historical block return current-state answers with spoofed block metadata. If historical snapshots are not available yet, reject non-latest block tags instead of silently returning wrong data.
Also applies to: 446-480
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/rpc/chain-index/src/querier.rs` around lines 345 - 379, run_query and
run_exec currently always use &self.storage (and get_code/get_storage_at ignore
the block), causing historical block queries to return live state; update both
functions to select the correct storage snapshot for the requested block (e.g.,
obtain a historic storage handle or snapshot tied to block via a helper like
storage_for_block(block) or similar) and pass that snapshot into
executor.execute_query and executor.simulate_call_tx instead of &self.storage;
if no historic snapshot exists, explicitly return an error for non-latest block
tags rather than falling back to live state; also update get_code and
get_storage_at to accept and use the block argument (or reject non-latest) so
all callers (run_query/run_exec) operate on consistent block-scoped storage.
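The behavior this comment asks for can be illustrated abstractly, in TypeScript rather than the crate's Rust and with invented names: resolve a block tag to a concrete snapshot, and error out rather than fall back to live state.

```typescript
// Illustrative snapshot selection: `latest` (undefined block) uses live
// state; any other height must resolve to a stored snapshot or fail.
interface Snapshot {
  height: number;
}

function storageForBlock(
  live: Snapshot,
  historical: Map<number, Snapshot>,
  block?: number
): Snapshot {
  if (block === undefined || block === live.height) return live;
  const snapshot = historical.get(block);
  if (!snapshot) {
    // Never silently serve live state for a historical tag.
    throw new Error(`no historical snapshot for block ${block}`);
  }
  return snapshot;
}
```

Both `run_query` and `run_exec` would route through one such helper so `eth_call`, `eth_getCode`, and `eth_getStorageAt` agree on block scoping.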
```typescript
const sandboxDir = mkdtempSync(join(tmpdir(), "evolve-viem-demo-"));
const configPath = join(sandboxDir, "config.yaml");
const dataDir = join(sandboxDir, "data");
const genesisPath = join(sandboxDir, "genesis.json");
const rpcPort = await findFreePort();
const rpcUrl = `http://127.0.0.1:${rpcPort}`;

buildGenesisFile(genesisPath, sender.address, recipient.address);

log(`Working directory: ${sandboxDir}`);
if (!existsSync(TESTAPP_BINARY) || !existsSync(TOKEN_ADDRESS_BINARY)) {
  log("Building testapp binaries...");
  runCargo(
    ["build", "-p", "evolve_testapp", "--example", "print_token_address"],
    "build testapp binaries"
  );
}
log("Initializing local testapp state...");
const initOutput = runCommand(
  TESTAPP_BINARY,
  [
    "init",
    "--config",
    configPath,
    "--log-level",
    "info",
    "--data-dir",
    dataDir,
    "--genesis-file",
    genesisPath,
  ],
  "testapp init"
);

const tokenAddressOutput = runCommand(
  TOKEN_ADDRESS_BINARY,
  ["--data-dir", dataDir],
  "print token address"
);
const tokenAddress = getAddress(
  tokenAddressOutput.stdout.trim() || parseTokenAddressFromInitOutput(initOutput.combined)
);
const chain = createChain(rpcUrl);

const node = startNode([
  "run",
  "--config",
  configPath,
  "--log-level",
  "info",
  "--data-dir",
  dataDir,
  "--genesis-file",
  genesisPath,
  "--rpc-addr",
  `127.0.0.1:${rpcPort}`,
]);

const publicClient = createPublicClient({
  chain,
  transport: http(rpcUrl),
});
```
Wrap the entire sandbox lifecycle in the try/finally.
Anything that throws before the current try block skips cleanup, so KEEP_TEMP === false still leaves the temp sandbox behind on setup failures. Please move the try up to immediately after mkdtempSync(...) and make node optional so both the temp dir and any partially started child are cleaned up from one place.
Also applies to: 555-562
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/scripts/viem-transfer-e2e.ts` around lines 406 - 468, Move the try so it
begins immediately after creating the sandbox dir (the
mkdtempSync(join(tmpdir(), ...)) call) and declare sandbox-related variables and
node in outer scope (e.g., let node: any | undefined) before the try; then
perform all setup (configPath, dataDir, genesisPath, build/init/run steps, and
the startNode call) inside that try, assigning the result of startNode(...) to
node instead of declaring a new const; finally, consolidate cleanup in a single
finally block that checks if node is set and stops/kills it, and removes the
sandboxDir unless KEEP_TEMP is true, ensuring both partially started child
processes and the temp directory are always cleaned up on errors.
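A minimal sketch of the lifecycle this prompt proposes — `withSandbox`, the `keepTemp` flag, and the node handle are illustrative stand-ins for the script's own code, not its real API:

```typescript
import { existsSync, mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// All setup happens inside one try; the finally block both stops a
// partially started node and removes the sandbox unless keepTemp is set.
function withSandbox(
  setup: (dir: string) => { stop(): void } | undefined,
  keepTemp = false
): void {
  const sandboxDir = mkdtempSync(join(tmpdir(), "evolve-viem-demo-"));
  let node: { stop(): void } | undefined;
  try {
    node = setup(sandboxDir); // build / init / startNode(...) all live here
  } finally {
    node?.stop();
    if (!keepTemp) rmSync(sandboxDir, { recursive: true, force: true });
  }
}
```

Because cleanup is consolidated in one `finally`, a throw anywhere in setup still removes the temp directory and kills any child that was started.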
Actionable comments posted: 4
♻️ Duplicate comments (1)
crates/rpc/evnode/src/runner.rs (1)
425-430: ⚠️ Potential issue | 🟠 Major

`run_external_consensus_node_eth` still enforces `RpcExecutionContext` for non-RPC runs. Line 425 keeps an unconditional `RpcExecutionContext` bound on `Stf`, even though the RPC-only path is runtime-gated at Line 526. This still source-breaks callers that want external-consensus execution without JSON-RPC.

```bash
#!/bin/bash
# Verify unconditional generic bound vs runtime-only RPC branch.
rg -n -C3 'run_external_consensus_node_eth|RpcExecutionContext|if config\.rpc\.enabled|start_external_consensus_rpc_server' crates/rpc/evnode/src/runner.rs
```

Also applies to: 526-540
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/evnode/src/runner.rs` around lines 425 - 430, The function run_external_consensus_node_eth currently forces the RpcExecutionContext trait on the generic Stf (and related bounds like StfExecutor/EvnodeStfExecutor), which breaks non-RPC callers even though the RPC work is only executed at runtime; remove RpcExecutionContext from the function's unconditional generic bounds and instead require it only where the RPC-only code runs (e.g., move the RpcExecutionContext bound into a separate helper or into the block that calls start_external_consensus_rpc_server), or create a small inner function (or helper like start_external_consensus_rpc_server_wrapper) with a where Stf: RpcExecutionContext bound and call that only when config.rpc.enabled; reference run_external_consensus_node_eth, StfExecutor, EvnodeStfExecutor, RpcExecutionContext, and start_external_consensus_rpc_server to locate the changes.
🧹 Nitpick comments (2)
crates/rpc/chain-index/src/querier.rs (1)
842-928: Split `setup_querier` into smaller helpers.

`setup_querier` is currently too large and mixes many responsibilities (storage layout, registry keys, code wiring, executor wiring). Breaking it up will keep test setup easier to maintain. As per coding guidelines, "Keep functions to less than 70 lines to maintain bounded complexity".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc/chain-index/src/querier.rs` around lines 842 - 928, The setup_querier function is too large—split it into small helpers to separate storage population, code registry setup, and executor wiring: create e.g. build_test_storage() that constructs TestStorage and inserts the keys used (code_id_key, contract_forward_key, contract_reverse_key, eoa_forward_key, eoa_reverse_key, account_slot_key), build_test_codes() that returns an Arc<TestCodeStorage> with DummyEoaAccount and TestContract added, and build_test_executor() that returns the Arc<TestStf> initialized with NoopBegin/NoopEnd/NoopValidator/NoopPostTx and StorageGasConfig::default(); then have setup_querier call those helpers and return StorageStateQuerier::new(storage, token_account_id, codes, executor) along with contract_address and sender_address. Ensure helper names (build_test_storage, build_test_codes, build_test_executor) reference the same key functions and types (AccountId, Address, TestStorage, TestCodeStorage, TestStf, StorageStateQuerier) so the test setup logic is unchanged but modularized.

docs/scripts/viem-transfer-e2e.ts (1)
124-128: Prefer string serialization for `u128` balances.

Line 124 and Line 128 convert bigint to `Number`, which can silently lose precision if balances grow. Serialize as strings to preserve full `u128` fidelity.

🛠️ Proposed fix

```diff
   {
     eth_address: sender,
-    balance: Number(INITIAL_SENDER_BALANCE),
+    balance: INITIAL_SENDER_BALANCE.toString(),
   },
   {
     eth_address: recipient,
-    balance: Number(INITIAL_RECIPIENT_BALANCE),
+    balance: INITIAL_RECIPIENT_BALANCE.toString(),
   },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/scripts/viem-transfer-e2e.ts` around lines 124 - 128, The balance fields currently call Number(INITIAL_SENDER_BALANCE) and Number(INITIAL_RECIPIENT_BALANCE) which can lose precision for u128 values; instead serialize the u128 balances to strings (e.g., use INITIAL_SENDER_BALANCE.toString() / String(INITIAL_SENDER_BALANCE) and the recipient equivalent) so the objects written by the script preserve full u128 fidelity and match any API/schema expecting string-serialized bigints; update the two balance assignments where eth_address and balance are set accordingly.
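The precision loss is easy to demonstrate. This standalone snippet (the balance value is invented for illustration) shows why `Number` is unsafe at u128 scale while `toString` round-trips exactly:

```typescript
// A 100-token balance at 18 decimals, plus 1 wei: past the 53-bit
// mantissa of a double, so Number() silently rounds the +1 away.
const balance = 10n ** 20n + 1n;

const viaNumber = Number(balance);    // rounds to 1e20
const viaString = balance.toString(); // exact decimal digits

// BigInt(viaNumber) no longer equals the original balance, while the
// string form preserves every digit.
const roundTripLossy = BigInt(viaNumber) === balance;
const roundTripExact = BigInt(viaString) === balance;
```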
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc/chain-index/src/provider.rs`:
- Around line 333-341: The code in resolve_block_context currently defaults a
missing indexed block's timestamp to 0, fabricating block metadata; change the
logic so that after resolving the block number via resolve_state_query_block,
you call self.index.get_block(number) and if that returns None return an
Err(RpcError) instead of unwrapping to 0—i.e., map the Option to an error (for
example construct a descriptive RpcError like "block not indexed" or use an
existing RpcError variant) and only call BlockContext::new(number, timestamp)
when a real block timestamp is present; update resolve_block_context to
propagate the error from get_block when the block record is missing.
In `@crates/rpc/chain-index/src/querier.rs`:
- Around line 268-274: Run rustfmt/cargo fmt on the repository and commit the
formatted changes so formatting checks pass; specifically reformat the file
containing the input_bytes function (fn input_bytes(request: &CallRequest) ->
Vec<u8>) in querier.rs to satisfy rustfmt (you can run cargo fmt or rustfmt on
that file), then stage and commit the resulting changes.
In `@docs/scripts/viem-transfer-e2e.ts`:
- Around line 47-50: The file currently hardcodes SENDER_PRIVATE_KEY and
RECIPIENT_PRIVATE_KEY constants; remove those literal private keys and instead
load secrets from a secure source (e.g., process.env variables or a .env via
dotenv) or generate ephemeral test keys at runtime using the viem wallet/test
account helpers; replace references to SENDER_PRIVATE_KEY and
RECIPIENT_PRIVATE_KEY so they read from process.env.SENDER_PRIVATE_KEY /
process.env.RECIPIENT_PRIVATE_KEY (with a clear error if missing) or from the
generated ephemeral keys, and update any tests/scripts that expect the constants
accordingly.
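One way to satisfy this prompt, sketched with a hypothetical helper — the environment-variable names match the suggestion, but nothing here is the script's real API:

```typescript
// Load a demo private key from the environment; fail loudly when the
// variable is missing or malformed instead of shipping a literal key.
function requireEnvKey(name: string): `0x${string}` {
  const value = process.env[name];
  if (!value || !/^0x[0-9a-fA-F]{64}$/.test(value)) {
    throw new Error(`${name} must be a 0x-prefixed 32-byte hex private key`);
  }
  return value as `0x${string}`;
}
```

The script would then call `requireEnvKey("SENDER_PRIVATE_KEY")` at startup; generating ephemeral keys with `randomBytes(32)` is the other option the review suggests.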
- Around line 259-263: The liveness checks currently only test
node.process.exitCode, which misses signal-based terminations; update each check
that uses "node.process.exitCode !== null" to instead use "node.process.exitCode
!== null || node.process.signalCode !== null", and change the thrown Error
construction (the one using phase and node.getLogs()) to display whichever
terminated the process by choosing node.process.exitCode if non-null otherwise
node.process.signalCode (e.g., `exit ${node.process.exitCode ??
node.process.signalCode}`) so the message shows either the numeric exit code or
the signal name.
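The suggested check can be captured in a tiny helper. This is a sketch: `exitCode`/`signalCode` mirror Node's `ChildProcess` fields, but the function itself is invented:

```typescript
// Returns null while the child is still alive; otherwise a message
// naming either the numeric exit code or the terminating signal.
function terminationMessage(proc: {
  exitCode: number | null;
  signalCode: string | null;
}): string | null {
  if (proc.exitCode === null && proc.signalCode === null) return null;
  return `exit ${proc.exitCode ?? proc.signalCode}`;
}
```

A liveness check then becomes `if (terminationMessage(node.process) !== null) throw ...`, which catches signal-killed children (e.g. SIGKILL by the OOM killer) that `exitCode !== null` alone misses.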
---
Duplicate comments:
In `@crates/rpc/evnode/src/runner.rs`:
- Around line 425-430: The function run_external_consensus_node_eth currently
forces the RpcExecutionContext trait on the generic Stf (and related bounds like
StfExecutor/EvnodeStfExecutor), which breaks non-RPC callers even though the RPC
work is only executed at runtime; remove RpcExecutionContext from the function's
unconditional generic bounds and instead require it only where the RPC-only code
runs (e.g., move the RpcExecutionContext bound into a separate helper or into
the block that calls start_external_consensus_rpc_server), or create a small
inner function (or helper like start_external_consensus_rpc_server_wrapper) with
a where Stf: RpcExecutionContext bound and call that only when
config.rpc.enabled; reference run_external_consensus_node_eth, StfExecutor,
EvnodeStfExecutor, RpcExecutionContext, and start_external_consensus_rpc_server
to locate the changes.
---
Nitpick comments:
In `@crates/rpc/chain-index/src/querier.rs`:
- Around line 842-928: The setup_querier function is too large—split it into
small helpers to separate storage population, code registry setup, and executor
wiring: create e.g. build_test_storage() that constructs TestStorage and inserts
the keys used (code_id_key, contract_forward_key, contract_reverse_key,
eoa_forward_key, eoa_reverse_key, account_slot_key), build_test_codes() that
returns an Arc<TestCodeStorage> with DummyEoaAccount and TestContract added, and
build_test_executor() that returns the Arc<TestStf> initialized with
NoopBegin/NoopEnd/NoopValidator/NoopPostTx and StorageGasConfig::default(); then
have setup_querier call those helpers and return
StorageStateQuerier::new(storage, token_account_id, codes, executor) along with
contract_address and sender_address. Ensure helper names (build_test_storage,
build_test_codes, build_test_executor) reference the same key functions and
types (AccountId, Address, TestStorage, TestCodeStorage, TestStf,
StorageStateQuerier) so the test setup logic is unchanged but modularized.
In `@docs/scripts/viem-transfer-e2e.ts`:
- Around line 124-128: The balance fields currently call
Number(INITIAL_SENDER_BALANCE) and Number(INITIAL_RECIPIENT_BALANCE) which can
lose precision for u128 values; instead serialize the u128 balances to strings
(e.g., use INITIAL_SENDER_BALANCE.toString() / String(INITIAL_SENDER_BALANCE)
and the recipient equivalent) so the objects written by the script preserve full
u128 fidelity and match any API/schema expecting string-serialized bigints;
update the two balance assignments where eth_address and balance are set
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c068426f-79bd-4152-8775-fb809d517907
📒 Files selected for processing (7)
bin/testapp/examples/print_token_address.rscrates/app/node/src/lib.rscrates/app/stf/src/lib.rscrates/rpc/chain-index/src/provider.rscrates/rpc/chain-index/src/querier.rscrates/rpc/evnode/src/runner.rsdocs/scripts/viem-transfer-e2e.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- bin/testapp/examples/print_token_address.rs
```rust
    fn resolve_block_context(&self, block: Option<u64>) -> Result<BlockContext, RpcError> {
        let number = self.resolve_state_query_block(block)?;
        let timestamp = self
            .index
            .get_block(number)
            .map_err(RpcError::from)?
            .map(|b| b.timestamp)
            .unwrap_or(0);
        Ok(BlockContext::new(number, timestamp))
    }
```
Avoid fabricating BlockContext timestamps on missing indexed block data.
At Line 340, defaulting to 0 timestamp silently creates synthetic block metadata. If latest_block_number points to a missing block record, this should error instead of simulating with fake time.
Proposed fix

```diff
 fn resolve_block_context(&self, block: Option<u64>) -> Result<BlockContext, RpcError> {
     let number = self.resolve_state_query_block(block)?;
-    let timestamp = self
-        .index
-        .get_block(number)
-        .map_err(RpcError::from)?
-        .map(|b| b.timestamp)
-        .unwrap_or(0);
+    let timestamp = self
+        .index
+        .get_block(number)
+        .map_err(RpcError::from)?
+        .ok_or(RpcError::BlockNotFound)?
+        .timestamp;
     Ok(BlockContext::new(number, timestamp))
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    fn resolve_block_context(&self, block: Option<u64>) -> Result<BlockContext, RpcError> {
        let number = self.resolve_state_query_block(block)?;
        let timestamp = self
            .index
            .get_block(number)
            .map_err(RpcError::from)?
            .ok_or(RpcError::BlockNotFound)?
            .timestamp;
        Ok(BlockContext::new(number, timestamp))
    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/rpc/chain-index/src/provider.rs` around lines 333 - 341, The code in
resolve_block_context currently defaults a missing indexed block's timestamp to
0, fabricating block metadata; change the logic so that after resolving the
block number via resolve_state_query_block, you call
self.index.get_block(number) and if that returns None return an Err(RpcError)
instead of unwrapping to 0—i.e., map the Option to an error (for example
construct a descriptive RpcError like "block not indexed" or use an existing
RpcError variant) and only call BlockContext::new(number, timestamp) when a real
block timestamp is present; update resolve_block_context to propagate the error
from get_block when the block record is missing.
```rust
    }])
}

fn input_bytes(request: &CallRequest) -> Vec<u8> {
    request.input_data().map(|b| b.to_vec()).unwrap_or_default()
}
```
cargo fmt is currently failing in this file.
Please run formatting and commit the result so just quality/fmt-check passes.
🧰 Tools
🪛 GitHub Check: cargo fmt
[warning] 268-268:
Diff in /home/runner/work/ev-rs/ev-rs/crates/rpc/chain-index/src/querier.rs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/rpc/chain-index/src/querier.rs` around lines 268-274: run `cargo fmt` (or rustfmt on this file) and commit the formatted changes so the formatting check passes; in particular, reformat the file containing `fn input_bytes(request: &CallRequest) -> Vec<u8>` to satisfy rustfmt, then stage and commit the result.
```typescript
const SENDER_PRIVATE_KEY =
  "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80" as const;
const RECIPIENT_PRIVATE_KEY =
  "0x59c6995e998f97a5a0044966f0945382d7f0f8cb9b8b9c5d91b3ef1c0d8f4a7c" as const;
```
Remove hardcoded private keys from source.
Lines 47 and 49 embed raw private keys. Even for local demos, this is a security/scanner risk and can lead to accidental reuse.
🛠️ Proposed fix
```diff
+import { randomBytes } from "node:crypto";
@@
-const SENDER_PRIVATE_KEY =
-  "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80" as const;
-const RECIPIENT_PRIVATE_KEY =
-  "0x59c6995e998f97a5a0044966f0945382d7f0f8cb9b8b9c5d91b3ef1c0d8f4a7c" as const;
+const SENDER_PRIVATE_KEY: Hex = bytesToHex(randomBytes(32));
+const RECIPIENT_PRIVATE_KEY: Hex = bytesToHex(randomBytes(32));
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { randomBytes } from "node:crypto";
import { bytesToHex } from "viem";
import type { Hex } from "viem";

// ... other code ...

const SENDER_PRIVATE_KEY: Hex = bytesToHex(randomBytes(32));
const RECIPIENT_PRIVATE_KEY: Hex = bytesToHex(randomBytes(32));
```
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 47-48: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
[high] 49-50: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/scripts/viem-transfer-e2e.ts` around lines 47-50: the file hardcodes the SENDER_PRIVATE_KEY and RECIPIENT_PRIVATE_KEY constants. Remove the literal private keys and either load secrets from a secure source (e.g. process.env variables, or a .env file via dotenv) or generate ephemeral test keys at runtime with the viem wallet/test account helpers. Replace references so they read from process.env.SENDER_PRIVATE_KEY / process.env.RECIPIENT_PRIVATE_KEY (failing with a clear error if missing) or from the generated ephemeral keys, and update any tests or scripts that expect the constants accordingly.
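A minimal sketch of the env-based approach, assuming the script switches to reading `process.env`. The `requireEnv` helper name is hypothetical, and the demo fallback assignment exists only so the snippet runs standalone; a real script would fail hard instead:

```typescript
// Sketch: resolve private keys from the environment with a clear error
// when a variable is missing or malformed (requireEnv is a hypothetical helper).
function requireEnv(name: string): `0x${string}` {
  const value = process.env[name];
  if (!value || !/^0x[0-9a-fA-F]{64}$/.test(value)) {
    throw new Error(`missing or malformed 32-byte hex key in env var ${name}`);
  }
  return value as `0x${string}`;
}

// Demo only: seed a throwaway value so the snippet is runnable as-is.
process.env["SENDER_PRIVATE_KEY"] ??= "0x" + "11".repeat(32);

const SENDER_PRIVATE_KEY = requireEnv("SENDER_PRIVATE_KEY");
console.log(SENDER_PRIVATE_KEY.length); // 66
```

Validating the hex shape up front also catches the common mistake of exporting a key without its `0x` prefix.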
```typescript
if (node.process.exitCode !== null) {
  throw new Error(
    `testapp exited while ${phase} (exit ${node.process.exitCode})\n${node.getLogs().trim()}`
  );
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify current liveness checks only use exitCode and whether signalCode is handled.
rg -nP 'exitCode !== null|signalCode' docs/scripts/viem-transfer-e2e.ts -C2
```
Repository: evstack/ev-rs
Length of output: 487
🌐 Web query:
In Node.js ChildProcess, when a process exits due to a signal, can exitCode remain null while signalCode is set, and should process liveness checks account for both?
💡 Result:
Yes.
- In the ChildProcess `'exit'` (and `'close'`) event, `code` is `null` when the child terminated due to a signal, and `signal` is then the signal name; Node notes that one of the two will always be non-null. [1]
- For the instance properties, `subprocess.exitCode` is `null` while the child is still running, and `subprocess.signalCode` is the signal (or `null` if none was received). [1]

So a liveness check must account for both: treat the child as "still running" only if `child.exitCode === null && child.signalCode === null`; otherwise it has already exited (including via signal). [1]

Sources:
[1] Node.js child_process documentation (ChildProcess events/properties) (nodejs.org)
Account for signal-based process exits in liveness checks.
Lines 259, 319, and 336 check only `exitCode !== null` to determine whether a process has exited. In Node.js, a process terminated by a signal has `exitCode === null` and `signalCode` set instead. This causes false "still running" states and unnecessary timeout waits.
Update all three checks to `exitCode !== null || signalCode !== null`, and adjust the error messages to show either the exit code or the signal name:
Proposed fix
```diff
 function assertNodeRunning(node: NodeHandle, phase: string): void {
-  if (node.process.exitCode !== null) {
+  if (node.process.exitCode !== null || node.process.signalCode !== null) {
+    const termination =
+      node.process.exitCode !== null
+        ? `exit ${node.process.exitCode}`
+        : `signal ${node.process.signalCode}`;
     throw new Error(
-      `testapp exited while ${phase} (exit ${node.process.exitCode})\n${node.getLogs().trim()}`
+      `testapp exited while ${phase} (${termination})\n${node.getLogs().trim()}`
     );
   }
 }
```
```diff
-  if (node.process.exitCode !== null) {
+  if (node.process.exitCode !== null || node.process.signalCode !== null) {
     return;
   }
```
```diff
-  if (child.exitCode !== null) {
+  if (child.exitCode !== null || child.signalCode !== null) {
     resolvePromise(true);
     return;
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/scripts/viem-transfer-e2e.ts` around lines 259-263: the liveness checks only test node.process.exitCode, which misses signal-based terminations. Update each check that uses `node.process.exitCode !== null` to `node.process.exitCode !== null || node.process.signalCode !== null`, and change the thrown Error construction (the one using phase and node.getLogs()) to report whichever terminated the process: node.process.exitCode if non-null, otherwise node.process.signalCode (e.g. `exit ${node.process.exitCode ?? node.process.signalCode}`), so the message shows either the numeric exit code or the signal name.
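The exitCode/signalCode split is easy to observe directly. A small sketch using `spawnSync`, assuming a Unix-like host where signal termination is reported this way:

```typescript
// A child that exits normally reports a numeric status; one killed by a
// signal reports status null and the signal name instead (Unix-like hosts).
import { spawnSync } from "node:child_process";

const byCode = spawnSync(process.execPath, ["-e", "process.exit(3)"]);
const bySignal = spawnSync(process.execPath, [
  "-e",
  "process.kill(process.pid, 'SIGTERM')",
]);

console.log(byCode.status, byCode.signal); // 3 null
console.log(bySignal.status, bySignal.signal); // null SIGTERM
```

This mirrors the recommended liveness check: treat a process as running only while both `exitCode` and `signalCode` are null.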