feat(rs-sdk): add shielded pool SDK support #3230
Conversation
Adds rs-sdk support for shielded pool queries and transitions:
- Fetch impls for shielded pool state, anchors, encrypted notes, nullifiers, trunk/branch state, and recent nullifier changes
- Query impls for all shielded request types
- Transition helpers for shield, unshield, shielded transfer, shielded withdrawal, and shield from asset lock
- Nullifier sync module for privacy-preserving synchronization
- Shielded module with Orchard note management utilities

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds Fetch, Query, and MockResponse impls for MostRecentShieldedAnchor so clients can query the latest shielded anchor via the SDK. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough
Adds comprehensive shielded functionality and a reusable trunk/branch sync framework to the Rust SDK: new dependency and feature gate, shielded state queries, shield/unshield/transfer/withdraw transitions, shielded notes parallel sync with trial decryption, nullifier synchronization, and a refactored trunk/branch scan reused by address and nullifier sync.
Sequence Diagram(s)
sequenceDiagram
actor User
participant SDK
participant Shielded as "Shielded Notes Sync"
participant Network as "Platform API"
participant Crypto as "Decryption"
User->>SDK: sync_shielded_notes(ivk, start_index, config)
SDK->>Shielded: start parallel chunk fetches
loop parallel fetches
Shielded->>Network: GetShieldedEncryptedNotes(start, count)
Network-->>Shielded: ShieldedEncryptedNotes + metadata
Shielded->>Crypto: try_decrypt_note(ivk, encrypted_note)
Crypto-->>Shielded: DecryptedNote or None
end
Shielded->>SDK: return ShieldedSyncResult
SDK-->>User: result
sequenceDiagram
actor User
participant SDK
participant NullifierSync
participant Provider
participant Network
participant Drive
User->>SDK: sync_nullifiers(provider, config, last_sync)
SDK->>NullifierSync: decide full-scan or incremental
alt full scan
NullifierSync->>Network: trunk queries
Network-->>NullifierSync: trunk results
loop branches
NullifierSync->>Network: branch query(leaf)
Network-->>NullifierSync: branch result + proof
NullifierSync->>Drive: verify_nullifiers_branch_query(proof)
Drive-->>NullifierSync: validation
end
else incremental
NullifierSync->>Network: compacted & recent changes queries
Network-->>NullifierSync: updates
end
NullifierSync->>Provider: nullifiers_to_check()
Provider-->>NullifierSync: Vec<NullifierKey>
NullifierSync-->>SDK: NullifierSyncResult(found, absent, metrics)
SDK-->>User: result
sequenceDiagram
actor User
participant SDK
participant State
participant Transition
participant Network
User->>SDK: shield_funds(inputs, bundle, ...)
loop per input address
SDK->>State: fetch nonce
State-->>SDK: nonce
SDK->>SDK: increment nonce
end
SDK->>Transition: build ShieldTransition with nonces
Transition->>Transition: validate structure
Transition->>Network: broadcast state transition
Network-->>Transition: response
Transition-->>SDK: result
SDK-->>User: result
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
✅ DashSDKFFI.xcframework built for this PR.
SwiftPM (host the zip at a stable URL, then use):
.binaryTarget(
    name: "DashSDKFFI",
    url: "https://your.cdn.example/DashSDKFFI.xcframework.zip",
    checksum: "344b4c4fc7464dd2d8fd02fc3b8532c3b1c8eaa88e26a6384edb03698432f8f3"
)
Xcode manual integration:
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## v3.1-dev #3230 +/- ##
=============================================
- Coverage 50.39% 30.79% -19.61%
=============================================
Files 3089 105 -2984
Lines 219104 9027 -210077
=============================================
- Hits 110426 2780 -107646
+ Misses 108678 6247 -102431
Actionable comments posted: 3
🧹 Nitpick comments (3)
packages/rs-sdk/src/shielded.rs (1)
229-246: The `unwrap_or([0u8; 32])` is redundant but safe.
Since `try_decrypt_note` already validates that `nullifier` and `cmx` are exactly 32 bytes (lines 62-67 return `None` otherwise), the `unwrap_or` on lines 234-235 will never trigger for successfully decrypted notes. This is harmless defensive coding, but if you want to make the logic explicit and catch unexpected invariant violations during development, consider using `expect` instead:
♻️ Optional: Use expect to surface invariant violations
- let nf: [u8; 32] = note.nullifier.as_slice().try_into().unwrap_or([0u8; 32]);
- let cmx: [u8; 32] = note.cmx.as_slice().try_into().unwrap_or([0u8; 32]);
+ let nf: [u8; 32] = note.nullifier.as_slice().try_into()
+     .expect("nullifier length validated by try_decrypt_note");
+ let cmx: [u8; 32] = note.cmx.as_slice().try_into()
+     .expect("cmx length validated by try_decrypt_note");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/shielded.rs` around lines 229 - 246, In the loop that processes chunk_results (variables chunk_start, notes) after successful try_decrypt_note, replace the defensive unwrap_or on converting note.nullifier and note.cmx to [u8; 32] with explicit asserts using expect so invariant violations surface during development; locate the conversions currently using note.nullifier.as_slice().try_into().unwrap_or([0u8; 32]) and note.cmx.as_slice().try_into().unwrap_or([0u8; 32]) and change them to use try_into().expect(...) with clear messages like "nullifier must be 32 bytes" and "cmx must be 32 bytes" before pushing into decrypted_notes (DecryptedNote).
packages/rs-sdk/src/platform/nullifier_sync/mod.rs (1)
626-634: Potential truncation: `depth` cast from `u32` to `u8`.
The `depth` parameter is `u32` (line 579) but is cast to `u8` on line 631. If `depth > 255`, this silently truncates. While tree depths > 255 are unlikely in practice, a safe conversion would prevent subtle bugs.
♻️ Use try_into for safe conversion
- depth as u8,
+ u8::try_from(depth).map_err(|_| ExecutionError {
+     inner: Error::InvalidProvedResponse(format!("depth {} exceeds u8 max", depth)),
+     address: Some(address.clone()),
+     retries,
+ })?,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/nullifier_sync/mod.rs` around lines 626 - 634, The code currently casts depth (u32) to u8 inline when calling Drive::verify_nullifiers_branch_query which can silently truncate values >255; replace the direct cast with a safe conversion using depth.try_into() (or equivalent) to produce a u8_depth and propagate/return an error if the conversion fails, then pass u8_depth to Drive::verify_nullifiers_branch_query so the function call uses a validated u8 instead of a potentially truncated value.
packages/rs-sdk/src/platform/nullifier_sync/types.rs (1)
9-52: Consider adding length validation or documentation for `pool_identifier`.
The doc comment on line 33 states this should be a "32-byte identifier", but `Option<Vec<u8>>` allows any length. The actual validation happens in `execute_trunk_query` (lines 414-416 in mod.rs) where it's copied into a `[u8; 32]` array, which will panic if the length differs. Consider either:
- Changing the type to `Option<[u8; 32]>` to enforce the constraint at compile time
- Adding a builder method or validation that checks length when setting this field
♻️ Option 1: Use fixed-size array (preferred)
  /// Optional 32-byte identifier for individual token pools.
  ///
  /// Default: None
- pub pool_identifier: Option<Vec<u8>>,
+ pub pool_identifier: Option<[u8; 32]>,
This would also simplify the conversion in `execute_trunk_query`:
  let trunk_query = NullifiersTrunkQuery {
      pool_type: config.pool_type,
-     pool_identifier: config.pool_identifier.as_deref().map(|id| {
-         let mut arr = [0u8; 32];
-         arr.copy_from_slice(id);
-         arr
-     }),
+     pool_identifier: config.pool_identifier,
  };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/nullifier_sync/types.rs` around lines 9 - 52, The docs say NullifierSyncConfig::pool_identifier must be 32 bytes but the type Option<Vec<u8>> allows any length and will panic when execute_trunk_query copies it into a [u8; 32]; fix by either changing NullifierSyncConfig::pool_identifier to Option<[u8; 32]> (preferred) so the compiler enforces length and simplifies the conversion in execute_trunk_query, or add a constructor/builder/validation on NullifierSyncConfig that checks pool_identifier.len() == 32 and returns an error before execute_trunk_query is called; update references to pool_identifier and the conversion logic in execute_trunk_query to match the chosen approach.
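As a self-contained illustration of the panic-vs-error distinction this comment describes (values made up; not the PR's code), `copy_from_slice` into a `[u8; 32]` panics on a length mismatch, while `try_into` surfaces a recoverable error:

```rust
fn main() {
    // A 16-byte identifier: copy_from_slice into [u8; 32] would panic here,
    // while try_into returns an Err the caller can handle.
    let short: Vec<u8> = vec![0xAB; 16];
    let attempt: Result<[u8; 32], _> = short.as_slice().try_into();
    assert!(attempt.is_err());

    // A correctly sized identifier converts without copying by hand.
    let exact: Vec<u8> = vec![0xCD; 32];
    let arr: [u8; 32] = exact.as_slice().try_into().unwrap();
    assert_eq!(arr[0], 0xCD);
}
```

Using `Option<[u8; 32]>` in the config, as the comment prefers, moves this check to compile time and removes the runtime conversion entirely.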
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/rs-sdk/src/platform/nullifier_sync/mod.rs`:
- Around line 411-418: The current construction of NullifiersTrunkQuery copies
config.pool_identifier into a [u8;32] via arr.copy_from_slice(id), which will
panic if pool_identifier is Some(slice) of incorrect length; update the code
that builds NullifiersTrunkQuery to validate the length of
config.pool_identifier and return an Err (propagate a Result) instead of
panicking, or change NullifierSyncConfig so pool_identifier is Option<[u8; 32]>
and use it directly; specifically, adjust the code around NullifiersTrunkQuery
and the NullifierSyncConfig/type definitions to either perform an explicit
length check on config.pool_identifier (returning a descriptive error) before
copy_from_slice, or convert the config type to Option<[u8;32]> and remove the
runtime copy_from_slice to eliminate the panic risk.
- Around line 543-567: The branch-query error branches currently only log
warnings and leave nullifiers unresolved; add failure tracking so callers can
detect incomplete syncs by: add a field like branch_query_failures (usize) to
NullifierSyncMetrics, increment metrics.branch_query_failures in both Err(e)
arms where futures.next() is matched (the two places that currently call
warn!("Nullifier branch query failed: {:?}", e)), keep pushing successful
results and incrementing metrics.branch_queries as before, and return or
propagate the populated NullifierSyncMetrics back to the caller so callers can
inspect branch_query_failures to detect incomplete syncs.
In `@packages/rs-sdk/src/platform/transition.rs`:
- Around line 12-15: The shielded transition modules (pub mod shield,
shield_from_asset_lock, shielded_transfer, shielded_withdrawal, and unshield)
reference types from dpp::shielded (e.g., OrchardBundleParams) which only exist
when the "shielded" feature is enabled; wrap each of those pub mod declarations
with a feature gate (e.g., add #[cfg(feature = "shielded")] above the pub mod
shield, pub mod shield_from_asset_lock, pub mod shielded_transfer, pub mod
shielded_withdrawal and pub mod unshield) so they are only compiled when the
"shielded" feature is enabled, ensuring the crate compiles when the shielded
feature is disabled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 11d482ed-6a21-43b5-801c-26b09d8dca64
📒 Files selected for processing (19)
- packages/rs-sdk/Cargo.toml
- packages/rs-sdk/src/lib.rs
- packages/rs-sdk/src/mock/requests.rs
- packages/rs-sdk/src/platform.rs
- packages/rs-sdk/src/platform/fetch.rs
- packages/rs-sdk/src/platform/nullifier_sync/mod.rs
- packages/rs-sdk/src/platform/nullifier_sync/provider.rs
- packages/rs-sdk/src/platform/nullifier_sync/types.rs
- packages/rs-sdk/src/platform/query.rs
- packages/rs-sdk/src/platform/transition.rs
- packages/rs-sdk/src/platform/transition/shield.rs
- packages/rs-sdk/src/platform/transition/shield_from_asset_lock.rs
- packages/rs-sdk/src/platform/transition/shielded_transfer.rs
- packages/rs-sdk/src/platform/transition/shielded_withdrawal.rs
- packages/rs-sdk/src/platform/transition/unshield.rs
- packages/rs-sdk/src/platform/types.rs
- packages/rs-sdk/src/platform/types/shielded.rs
- packages/rs-sdk/src/sdk.rs
- packages/rs-sdk/src/shielded.rs
- Feature-gate shielded transition modules (shield, unshield, etc.)
- Fix clippy::too_many_arguments on branch query functions
- Use is_multiple_of() and div_ceil() instead of manual implementations
- Remove dead platform-wallet re-export (feature not on v3.1-dev)
- Change NullifierSyncConfig.pool_identifier to Option<[u8; 32]> to eliminate copy_from_slice panic risk
- Add branch_query_failures counter to NullifierSyncMetrics for callers to detect incomplete syncs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
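The `is_multiple_of()` / `div_ceil()` swap mentioned in this commit can be checked standalone (values made up; `div_ceil` needs Rust 1.73+ and `is_multiple_of` needs 1.87+):

```rust
fn main() {
    // Stable std helpers that replace the manual arithmetic.
    assert!(300u64.is_multiple_of(100)); // instead of `300 % 100 == 0`
    assert!(!257u64.is_multiple_of(100));
    assert_eq!(257u64.div_ceil(100), 3); // instead of `(257 + 100 - 1) / 100`
    assert_eq!(300u64.div_ceil(100), 3);
}
```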
Nullifier sync and address sync used identical trunk→branch iteration patterns (~70% duplicated code). Extract the shared algorithm into a new trunk_branch_sync module with a TrunkBranchSyncOps trait.
- Create trunk_branch_sync module with generic run_full_tree_scan(), privacy-adjusted leaf selection, and parallel branch execution
- Move KeyLeafTracker from address_sync to trunk_branch_sync
- Refactor nullifier_sync and address_sync to implement the trait
- Net reduction of ~445 lines; public API unchanged

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
packages/rs-sdk/src/platform/address_sync/mod.rs (1)
251-253: ⚠️ Potential issue | 🟠 Major: Surface branch-query failures for address sync.
Because the shared runner only warns when keys remain unresolved, this no-op means persistent branch failures can still return `Ok(...)` with some addresses missing from both `found` and `absent`. Please mirror the nullifier path by recording failures in `AddressSyncMetrics`, or fail the sync once iteration exhaustion leaves unresolved keys.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/address_sync/mod.rs` around lines 251 - 253, The on_branch_failure function currently no-ops, which lets branch-query failures go unnoticed; update on_branch_failure in mod.rs to mirror the nullifier path by recording failures into AddressSyncMetrics (increment a branch_failure or similar counter) and/or mark the sync as failed so unresolved keys are not silently ignored; locate the on_branch_failure handler for the address sync runner, update it to call the same failure-recording logic used by the nullifier path (or call AddressSyncMetrics::record_branch_failure / increment the metric and set a failure flag on the Context/Sync state) so the shared runner will surface branch-query failures or cause the sync to return Err when iteration exhaustion leaves unresolved keys.
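A minimal sketch of the requested pattern; the struct, field, and hook names here are hypothetical stand-ins for the PR's actual `AddressSyncMetrics` and failure handler:

```rust
#[derive(Default, Debug)]
struct AddressSyncMetrics {
    branch_queries: u64,
    branch_query_failures: u64, // the counter the review asks to add
}

// Hypothetical failure hook: record the failure instead of a no-op.
fn on_branch_failure(metrics: &mut AddressSyncMetrics) {
    metrics.branch_query_failures += 1;
}

fn main() {
    let mut metrics = AddressSyncMetrics::default();
    metrics.branch_queries += 1; // one branch query attempted...
    on_branch_failure(&mut metrics); // ...and it failed
    // Callers can now detect an incomplete sync instead of a silent Ok(...).
    assert_eq!(metrics.branch_query_failures, 1);
    assert_eq!(metrics.branch_queries, 1);
}
```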
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/rs-sdk/src/platform/nullifier_sync/mod.rs`:
- Around line 423-427: When iterating over entries and their entry.nullifiers
(the loops that check nullifier_set and currently do
result.found.insert(*nf_bytes)), also remove that nullifier from result.absent
so it isn't reported as both spent and unspent; i.e., after
result.found.insert(*nf_bytes) call add result.absent.remove(nf_bytes) (or
result.absent.remove(*nf_bytes) to match the insert key type). Apply this change
in both the compacted and recent-changes loops that use nullifier_set,
result.found and result.absent.
- Around line 322-359: The incremental-only branch (when !needs_full_scan)
leaves result empty so NullifierSyncResult becomes a delta; fix by producing a
full classification there: build the nullifier_set from nullifiers (or
provider.nullifiers_to_check()), then call incremental_catch_up(sdk,
&nullifier_set, 0, &mut result, config.request_settings).await? (i.e., use
catch_up_from=0/genesis) so unchanged spent/unspent nullifiers are correctly
placed into result.found/result.absent; update the branch around
needs_full_scan/last_sync_height to populate result before returning.
In `@packages/rs-sdk/src/shielded.rs`:
- Around line 255-261: The current logic rounds raw_next up to the next chunk
boundary using div_ceil, which can skip a partially-filled chunk and miss notes
appended after the scan; change next_start_index calculation so that when
chunk_size > 0 and raw_next is not already aligned, you round down to the start
of the containing chunk (i.e., use floor division:
raw_next.div_floor(chunk_size) * chunk_size or integer division raw_next /
chunk_size * chunk_size) so the next scan reprocesses the partial chunk instead
of advancing past it; keep the branch for chunk_size == 0 unchanged and preserve
behavior when raw_next is already on a boundary.
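The rounding behavior discussed for `next_start_index` in the shielded.rs comment can be checked in isolation (chunk size and index values are made up):

```rust
fn main() {
    let chunk_size: u64 = 100;
    let raw_next: u64 = 257; // sync stopped inside a partially filled chunk

    // Rounding up (div_ceil) resumes at 300 and would skip indices 257..300
    // if the partial chunk is appended to later.
    assert_eq!(raw_next.div_ceil(chunk_size) * chunk_size, 300);

    // Rounding down resumes at the partial chunk's start, reprocessing it.
    assert_eq!((raw_next / chunk_size) * chunk_size, 200);

    // Already-aligned indices are unchanged by the round-down.
    assert_eq!((300u64 / chunk_size) * chunk_size, 300);
}
```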
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 60ab1468-2606-417f-b587-c7e501589212
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
- packages/rs-sdk/src/lib.rs
- packages/rs-sdk/src/platform.rs
- packages/rs-sdk/src/platform/address_sync/mod.rs
- packages/rs-sdk/src/platform/nullifier_sync/mod.rs
- packages/rs-sdk/src/platform/nullifier_sync/types.rs
- packages/rs-sdk/src/platform/query.rs
- packages/rs-sdk/src/platform/transition.rs
- packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs
- packages/rs-sdk/src/platform/trunk_branch_sync/tracker.rs
- packages/rs-sdk/src/shielded.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/rs-sdk/src/lib.rs
- packages/rs-sdk/src/platform/transition.rs
…rtial chunk skip
- Nullifier incremental catch-up now removes entries from `absent` when they appear in compacted/recent changes, preventing the same nullifier from being reported as both spent and unspent.
- Shielded note sync no longer rounds up past a partial buffer chunk. The BulkAppendTree buffer is mutable, so resuming from the partial chunk's start ensures newly appended notes are not skipped.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
    sdk: &Sdk,
    settings: RequestSettings,
    context: &mut Self::Context<'_>,
) -> Result<(GroveTrunkQueryResult, u64, u64), Error> {
Use structs when a tuple has more than 2 elements, especially when it repeats the same type (u64).
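A sketch of this suggestion with illustrative names; the PR later added a `TrunkQueryResponse`, but its actual fields may differ, and `GroveTrunkQueryResult` is stubbed as a `String` here:

```rust
// Tuple version: `.1` and `.2` are both u64, easy to swap silently.
fn query_tuple() -> (String, u64, u64) {
    ("trunk".to_string(), 1024, 42)
}

// Struct version: named fields make each u64's meaning explicit.
struct TrunkQueryResponse {
    result: String,         // stands in for GroveTrunkQueryResult
    total_leaves: u64,      // illustrative name
    checkpoint_height: u64, // illustrative name
}

fn query_struct() -> TrunkQueryResponse {
    TrunkQueryResponse {
        result: "trunk".to_string(),
        total_leaves: 1024,
        checkpoint_height: 42,
    }
}

fn main() {
    let t = query_tuple();
    let s = query_struct();
    // Same data, but struct access is self-documenting at the call site.
    assert_eq!(t.1, s.total_leaves);
    assert_eq!(t.2, s.checkpoint_height);
    assert_eq!(s.result, "trunk");
}
```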
pub async fn sync_nullifiers<P: NullifierProvider>(
    sdk: &Sdk,
    provider: &P,
    config: Option<NullifierSyncConfig>,
Cleaner to always accept a config, so people pass the default here.
There was a problem hiding this comment.
I don't actually agree, but we can debate it. A lot of the code uses Option, so if we switch we should switch throughout the codebase.
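Both styles under debate, sketched side by side with a made-up one-field config (the real `NullifierSyncConfig` has more fields):

```rust
#[derive(Default)]
struct NullifierSyncConfig {
    parallel_limit: usize, // illustrative field
}

// Current style: callers may pass None and get defaults implicitly.
fn sync_optional(config: Option<NullifierSyncConfig>) -> usize {
    config.unwrap_or_default().parallel_limit
}

// Suggested style: callers always pass a config, spelling out Default.
fn sync_required(config: NullifierSyncConfig) -> usize {
    config.parallel_limit
}

fn main() {
    assert_eq!(sync_optional(None), 0);
    assert_eq!(sync_required(NullifierSyncConfig::default()), 0);
    assert_eq!(sync_optional(Some(NullifierSyncConfig { parallel_limit: 8 })), 8);
}
```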
    pool_type: u32,
    pool_identifier: Option<&[u8]>,
Better to have a struct for it, and use it in the trait definition everywhere you pass them. Instead of two things you will pass one.
    key: LeafBoundaryKey,
    depth: u32,
    expected_hash: [u8; 32],
    checkpoint_height: u64,
Let's have a struct for these four so you don't have a too-many-arguments problem.
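The four arguments above, grouped as one parameter struct. This is a sketch: the PR's actual `BranchQueryParams` may differ, `LeafBoundaryKey` is stubbed as a byte vector, and the check inside `verify_branch` is arbitrary:

```rust
type LeafBoundaryKey = Vec<u8>; // stand-in for the SDK type

struct BranchQueryParams {
    key: LeafBoundaryKey,
    depth: u32,
    expected_hash: [u8; 32],
    checkpoint_height: u64,
}

// One argument instead of four, sidestepping clippy::too_many_arguments.
fn verify_branch(params: &BranchQueryParams) -> bool {
    params.depth > 0 && params.key.len() == params.expected_hash.len()
}

fn main() {
    let params = BranchQueryParams {
        key: vec![0u8; 32],
        depth: 16,
        expected_hash: [0u8; 32],
        checkpoint_height: 1000,
    };
    assert!(verify_branch(&params));
    assert_eq!(params.checkpoint_height, 1000);
}
```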
    last_sync_height: Option<u64>,
    last_sync_timestamp: Option<u64>,
Combine these two into a struct, because you always pass them together; then you won't have two optionals in the arguments.
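The paired optionals collapsed into one optional struct, matching the later `NullifierSyncCheckpoint` commit in spirit (field names are assumed here):

```rust
struct NullifierSyncCheckpoint {
    height: u64,
    timestamp: u64,
}

// One Option instead of two that must agree: either both pieces of
// checkpoint state exist, or neither does.
fn needs_full_scan(checkpoint: Option<&NullifierSyncCheckpoint>) -> bool {
    checkpoint.is_none()
}

fn main() {
    assert!(needs_full_scan(None));
    let cp = NullifierSyncCheckpoint { height: 1000, timestamp: 1_700_000_000 };
    assert!(!needs_full_scan(Some(&cp)));
}
```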
packages/rs-sdk/src/shielded.rs (Outdated)
/// Returns `Some((note, address))` if the note decrypts successfully under the
/// given incoming viewing key, or `None` if it does not belong to the viewer
/// (including dummy/padding notes).
pub fn try_decrypt_note(
I would put this one near nullifier_sync, maybe src/platform/shielded/nullifier_sync and src/platform/shielded/notes_sync, and gate them both with the feature flag; transitions as well.
Address review feedback to use structs instead of tuples/many arguments:
- TrunkQueryResponse: replaces (GroveTrunkQueryResult, u64, u64) return
- BranchQueryParams: replaces key, depth, expected_hash, checkpoint_height args
- NullifierPoolConfig: replaces (u32, Option<[u8; 32]>) BranchQueryConfig
- NullifierSyncCheckpoint: replaces last_sync_height + last_sync_timestamp pair

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move logical groups out of the monolithic mod.rs:
- decrypt.rs: try_decrypt_note + MIN_ENCRYPTED_NOTE_LEN constant
- types.rs: ShieldedSyncConfig, DecryptedNote, ShieldedSyncResult
- sync.rs: sync_shielded_notes + fetch_chunk

mod.rs now just re-exports public items.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move logical groups out of the monolithic shielded module:
- decrypt.rs: try_decrypt_note + MIN_ENCRYPTED_NOTE_LEN constant
- types.rs: ShieldedSyncConfig, DecryptedNote, ShieldedSyncResult
- notes_sync.rs: sync_shielded_notes + fetch_chunk
- nullifier_sync stays nested under shielded

mod.rs now just re-exports public items.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Break notes_sync into focused files:
- decrypt.rs: try_decrypt_note
- fetch_chunk.rs: single chunk fetch helper
- sync_shielded_notes.rs: main sync orchestration
- types.rs: ShieldedSyncConfig, DecryptedNote, ShieldedSyncResult

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 6
🧹 Nitpick comments (5)
packages/rs-sdk/src/platform/address_sync/mod.rs (1)
177-260: Review the `unwrap_or(0)` fallback for index lookup.
At line 186, if a target key isn't found in `key_to_index`, the code defaults to index `0`. While `after_branch_iteration` (lines 230-232) ensures new keys are added to `key_to_index` before being tracked, this fallback could mask bugs if the invariant is violated. Consider using `unwrap_or_else(|| { ... })` with a debug assertion or log warning to surface unexpected cases during development:
- let index = context.key_to_index.get(&target_key).copied().unwrap_or(0);
+ let index = context.key_to_index.get(&target_key).copied().unwrap_or_else(|| {
+     debug_assert!(false, "target_key not in key_to_index: {:?}", target_key);
+     0
+ });
The remaining trait method implementations (`depth_limits`, `after_branch_iteration`, metric hooks) are correctly implemented.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/address_sync/mod.rs` around lines 177 - 260, In process_branch_result, avoid silently defaulting index to 0 when retrieving from context.key_to_index: replace the get(&target_key).copied().unwrap_or(0) usage in process_branch_result with an unwrap_or_else that either calls debug_assert!(false, ...) or logs a warning including the missing key (use context.key_to_index, target_key, and index variable names) so unexpected missing mappings surface during development; ensure behavior still returns a valid index in release builds (e.g., keep returning 0 as a fallback) but fail fast in debug via debug_assert or a clear log message.
packages/rs-sdk/src/platform/shielded/nullifier_sync/types.rs (1)
91-146: Consider deriving `Clone` on `NullifierSyncResult`.
`NullifierSyncResult` contains only `BTreeSet<[u8; 32]>` and primitive fields, all of which are `Clone`. Deriving `Clone` would make it easier for callers to store or pass around results.
♻️ Suggested change
  /// Result of nullifier synchronization.
- #[derive(Debug)]
+ #[derive(Debug, Clone)]
  pub struct NullifierSyncResult {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/shielded/nullifier_sync/types.rs` around lines 91 - 146, The struct NullifierSyncResult should derive Clone so callers can easily copy results; update the declaration of NullifierSyncResult to add Clone to the derived traits (alongside Debug) and ensure the Default impl and new() remain valid (they will continue to work for a derived Clone). Target the struct named NullifierSyncResult in types.rs and add Clone to its derives.
packages/rs-sdk/src/platform/shielded/notes_sync/types.rs (2)
11-16: Consider deriving `Debug` and `Clone` on `ShieldedSyncConfig`.
Adding common derives improves debuggability and usability of the configuration struct.
♻️ Suggested change
  /// Configuration for [`sync_shielded_notes`](super::sync_shielded_notes).
+ #[derive(Debug, Clone)]
  pub struct ShieldedSyncConfig {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/shielded/notes_sync/types.rs` around lines 11 - 16, The ShieldedSyncConfig struct lacks common derives; update the struct declaration for ShieldedSyncConfig to derive Debug and Clone (in addition to any existing derives) so the config can be easily cloned and logged for debugging; locate the pub struct ShieldedSyncConfig and add the derives attribute with Debug and Clone to the declaration.
41-54: Consider deriving `Debug` on `ShieldedSyncResult`.
This would enable logging and inspection of sync results.
♻️ Suggested change
  /// Result of [`sync_shielded_notes`](super::sync_shielded_notes).
+ #[derive(Debug)]
  pub struct ShieldedSyncResult {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/shielded/notes_sync/types.rs` around lines 41 - 54, The ShieldedSyncResult struct lacks a Debug implementation which makes logging/inspection harder; update the struct declaration for ShieldedSyncResult (the public struct in types.rs) to derive Debug (e.g., add #[derive(Debug)] above the pub struct ShieldedSyncResult) so instances can be formatted in logs and debug output.
packages/rs-sdk/src/platform/shielded/mod.rs (1)
8-13: Consider re-exporting `nullifier_sync` public items for API consistency.
The `nullifier_sync` module is declared as `pub mod` but none of its types are re-exported here, unlike `notes_sync` types. Users would need to access `nullifier_sync::*` directly while `notes_sync` items are available at the `shielded::` level. If this asymmetry is intentional (e.g., nullifier sync is lower-level), document it; otherwise, consider adding re-exports.
♻️ Optional: Add nullifier_sync re-exports for consistency
  pub use notes_sync::decrypt::try_decrypt_note;
  pub use notes_sync::sync_shielded_notes::sync_shielded_notes;
  pub use notes_sync::types::{DecryptedNote, ShieldedSyncConfig, ShieldedSyncResult};
+
+ pub use nullifier_sync::{
+     sync_nullifiers, NullifierKey, NullifierPoolConfig, NullifierProvider,
+     NullifierSyncCheckpoint, NullifierSyncConfig, NullifierSyncMetrics, NullifierSyncResult,
+ };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/rs-sdk/src/platform/shielded/mod.rs` around lines 8 - 13, The nullifier_sync module is declared pub but none of its public symbols are re-exported like notes_sync's items; update the shielded top-level exports to expose the relevant nullifier_sync public API (for example re-export the primary sync function(s) and types from nullifier_sync) so consumers can access them from shielded:: rather than shielded::nullifier_sync::; locate the nullifier_sync module and add pub use nullifier_sync::... lines analogous to the existing pub use notes_sync::... exports (or, if nullifier_sync is intentionally lower-level, add a short doc comment explaining that design choice).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/rs-sdk/src/platform/shielded/notes_sync/fetch_chunk.rs`:
- Around line 19-22: ShieldedEncryptedNotesQuery is being constructed with
count: chunk_size as u32 which can silently truncate if chunk_size > u32::MAX;
change the construction to convert chunk_size using try_into() (or checked cast)
and return or propagate an error if the conversion fails (e.g., map_err/ok_or
with a descriptive message) so that count is guaranteed to fit a u32 before
creating ShieldedEncryptedNotesQuery.
In `@packages/rs-sdk/src/platform/shielded/notes_sync/sync_shielded_notes.rs`:
- Around line 43-56: The code allows chunk_size (from
sdk.version().drive_abci.query.shielded_queries.max_encrypted_notes_per_query)
to be zero which causes an infinite re-enqueue loop; update the validation near
the start_index check to explicitly reject chunk_size == 0 and return an error
(e.g., Err(Error::Generic(...))) before scheduling work so subsequent logic that
advances next_chunk_index or classifies partial responses never runs with a zero
chunk size; ensure you reference and guard the chunk_size variable and preserve
the existing is_multiple_of alignment check for non-zero values.
- Line 68: The code currently resumes from the latest partial chunk seen
(risking skipping older partials); introduce a min_partial_start (initialize to
the current next_start_index) and, whenever a response marks a chunk as partial
(inspect chunk_start from that response), set min_partial_start =
min(min_partial_start, chunk_start); after processing all parallel responses set
next_start_index = min_partial_start (not the last-seen chunk_start) and keep
reached_end logic unchanged; apply the same min-tracking pattern wherever chunk
partials are handled (the other occurrences around the areas that set
next_start_index / check reached_end).
- Around line 113-115: The code currently zero-fills malformed note metadata by
using unwrap_or([0u8;32]) when converting note.nullifier and note.cmx, which can
produce bogus DecryptedNote entries; change the logic in the block that calls
try_decrypt_note (the branch where let Some((decrypted, address)) =
try_decrypt_note(ivk, note)) to validate the conversions instead of defaulting
to zeros: attempt note.nullifier.as_slice().try_into() and
note.cmx.as_slice().try_into(), and if either conversion fails return or
propagate an error (or skip this note) so you do not construct a DecryptedNote
with zeroed nullifier/cmx—ensure the error path uses the existing error
type/Result flow used by the surrounding function rather than silently
continuing.
In `@packages/rs-sdk/src/platform/shielded/nullifier_sync/mod.rs`:
- Around line 548-552: BranchQueryParams.depth is u32 but
Drive::verify_nullifiers_branch_query expects u8, so the current cast depth as
u8 can silently truncate; change the call site (the invocation of
Drive::verify_nullifiers_branch_query in mod.rs) to defensively convert depth
using try_into() (or an explicit range check) and return or propagate an error
if depth > 255, ensuring you reference BranchQueryParams.depth for the source
and Drive::verify_nullifiers_branch_query for the target to locate the change.
- Around line 93-96: Remove the redundant unsafe impl Send for
NullifierSyncContext<'_> — delete the unsafe impl Send block (and its SAFETY
comment if desired) because NullifierSyncContext<'_> already auto-implements
Send: its fields (NullifierSyncConfig, the slice of NullifierKey, and
NullifierSyncResult) are all Send so no unsafe manual impl is needed; simply
remove the unsafe impl to let the compiler verify Send automatically.
---
Nitpick comments:
In `@packages/rs-sdk/src/platform/address_sync/mod.rs`:
- Around line 177-260: In process_branch_result, avoid silently defaulting index
to 0 when retrieving from context.key_to_index: replace the
get(&target_key).copied().unwrap_or(0) usage in process_branch_result with an
unwrap_or_else that either calls debug_assert!(false, ...) or logs a warning
including the missing key (use context.key_to_index, target_key, and index
variable names) so unexpected missing mappings surface during development;
ensure behavior still returns a valid index in release builds (e.g., keep
returning 0 as a fallback) but fail fast in debug via debug_assert or a clear
log message.
In `@packages/rs-sdk/src/platform/shielded/mod.rs`:
- Around line 8-13: The nullifier_sync module is declared pub but none of its
public symbols are re-exported like notes_sync's items; update the shielded
top-level exports to expose the relevant nullifier_sync public API (for example
re-export the primary sync function(s) and types from nullifier_sync) so
consumers can access them from shielded:: rather than
shielded::nullifier_sync::; locate the nullifier_sync module and add pub use
nullifier_sync::... lines analogous to the existing pub use notes_sync::...
exports (or, if nullifier_sync is intentionally lower-level, add a short doc
comment explaining that design choice).
In `@packages/rs-sdk/src/platform/shielded/notes_sync/types.rs`:
- Around line 11-16: The ShieldedSyncConfig struct lacks common derives; update
the struct declaration for ShieldedSyncConfig to derive Debug and Clone (in
addition to any existing derives) so the config can be easily cloned and logged
for debugging; locate the pub struct ShieldedSyncConfig and add the derives
attribute with Debug and Clone to the declaration.
- Around line 41-54: The ShieldedSyncResult struct lacks a Debug implementation
which makes logging/inspection harder; update the struct declaration for
ShieldedSyncResult (the public struct in types.rs) to derive Debug (e.g., add
#[derive(Debug)] above the pub struct ShieldedSyncResult) so instances can be
formatted in logs and debug output.
In `@packages/rs-sdk/src/platform/shielded/nullifier_sync/types.rs`:
- Around line 91-146: The struct NullifierSyncResult should derive Clone so
callers can easily copy results; update the declaration of NullifierSyncResult
to add Clone to the derived traits (alongside Debug) and ensure the Default impl
and new() remain valid (they will continue to work for a derived Clone). Target
the struct named NullifierSyncResult in types.rs and add Clone to its derives.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 45ad67f1-cb41-4fa5-a7b2-a5f88879358f
📒 Files selected for processing (13)
- packages/rs-sdk/src/lib.rs
- packages/rs-sdk/src/platform.rs
- packages/rs-sdk/src/platform/address_sync/mod.rs
- packages/rs-sdk/src/platform/shielded/mod.rs
- packages/rs-sdk/src/platform/shielded/notes_sync/decrypt.rs
- packages/rs-sdk/src/platform/shielded/notes_sync/fetch_chunk.rs
- packages/rs-sdk/src/platform/shielded/notes_sync/mod.rs
- packages/rs-sdk/src/platform/shielded/notes_sync/sync_shielded_notes.rs
- packages/rs-sdk/src/platform/shielded/notes_sync/types.rs
- packages/rs-sdk/src/platform/shielded/nullifier_sync/mod.rs
- packages/rs-sdk/src/platform/shielded/nullifier_sync/provider.rs
- packages/rs-sdk/src/platform/shielded/nullifier_sync/types.rs
- packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/rs-sdk/src/lib.rs
let query = ShieldedEncryptedNotesQuery {
    start_index: chunk_start,
    count: chunk_size as u32,
};
Potential truncation when casting chunk_size to u32.
The chunk_size parameter is u64, but ShieldedEncryptedNotesQuery.count is u32. The cast chunk_size as u32 will silently truncate values exceeding u32::MAX. Consider using try_into() or documenting that chunk_size must fit in u32.
🛡️ Proposed fix with validation
pub async fn fetch_chunk(
sdk: &Sdk,
chunk_start: u64,
chunk_size: u64,
settings: RequestSettings,
) -> Result<(u64, Vec<ShieldedEncryptedNote>, u64), Error> {
+ let count = u32::try_from(chunk_size).map_err(|_| {
+ Error::InvalidProvedResponse(format!("chunk_size {} exceeds u32 max", chunk_size))
+ })?;
let query = ShieldedEncryptedNotesQuery {
start_index: chunk_start,
- count: chunk_size as u32,
+ count,
    };
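The hazard this comment describes can be reproduced in isolation. Below is a standalone sketch; `count_for_query` is a hypothetical stand-in for the SDK code, not the actual `fetch_chunk` implementation:

```rust
use std::convert::TryFrom;

// Hypothetical stand-in for building the query's `count` field: `as u32`
// wraps silently, while `u32::try_from` reports the overflow as an error.
fn count_for_query(chunk_size: u64) -> Result<u32, String> {
    u32::try_from(chunk_size)
        .map_err(|_| format!("chunk_size {} exceeds u32::MAX", chunk_size))
}

fn main() {
    // The unchecked cast truncates: u32::MAX + 1 wraps to 0.
    assert_eq!((u32::MAX as u64 + 1) as u32, 0);
    // The checked conversion surfaces an error instead.
    assert!(count_for_query(u32::MAX as u64 + 1).is_err());
    assert_eq!(count_for_query(500).unwrap(), 500);
}
```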
let chunk_size = sdk
    .version()
    .drive_abci
    .query
    .shielded_queries
    .max_encrypted_notes_per_query as u64;

// Validate alignment
if chunk_size > 0 && !start_index.is_multiple_of(chunk_size) {
    return Err(Error::Generic(format!(
        "start_index {} is not chunk-aligned; must be a multiple of {}",
        start_index, chunk_size
    )));
}
Reject max_encrypted_notes_per_query == 0 before scheduling work.
If chunk_size is zero, Line 73 and Line 97 never advance next_chunk_index, and Line 86 can never classify a response as partial, so this loop keeps re-enqueueing the same chunk forever. Fail fast here instead of trying to special-case zero later.
🔧 Proposed fix
let chunk_size = sdk
.version()
.drive_abci
.query
.shielded_queries
.max_encrypted_notes_per_query as u64;
+
+ if chunk_size == 0 {
+ return Err(Error::Generic(
+ "max_encrypted_notes_per_query must be greater than 0".to_string(),
+ ));
+ }
// Validate alignment
- if chunk_size > 0 && !start_index.is_multiple_of(chunk_size) {
+ if !start_index.is_multiple_of(chunk_size) {
return Err(Error::Generic(format!(
"start_index {} is not chunk-aligned; must be a multiple of {}",
        start_index, chunk_size

Also applies to: 71-73, 84-86, 96-97
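The guard the comment asks for can be sketched as a standalone helper (`validate_chunking` is a hypothetical name, and plain `%` stands in for `is_multiple_of`):

```rust
// Sketch of the fail-fast guard: a zero chunk size can never advance the
// cursor, so reject it before any work is scheduled.
fn validate_chunking(chunk_size: u64, start_index: u64) -> Result<(), String> {
    if chunk_size == 0 {
        return Err("max_encrypted_notes_per_query must be greater than 0".to_string());
    }
    // With chunk_size known non-zero, the alignment check needs no extra guard.
    if start_index % chunk_size != 0 {
        return Err(format!(
            "start_index {} is not chunk-aligned; must be a multiple of {}",
            start_index, chunk_size
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_chunking(0, 0).is_err()); // zero chunk size rejected
    assert!(validate_chunking(100, 250).is_err()); // misaligned start
    assert!(validate_chunking(100, 300).is_ok()); // aligned and non-zero
}
```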
// Each future fetches one chunk and returns (chunk_start_index, notes, block_height).
let mut futures: FuturesUnordered<ChunkFuture> = FuturesUnordered::new();
let mut next_chunk_index = start_index;
let mut reached_end = false;
Resume from the earliest partial chunk, not the latest one.
These requests run in parallel and can resolve at different heights. If an older response marks chunk A partial and a newer response returns data for chunk B > A, the scan at Lines 135-143 resumes from B and can skip notes later appended into A. Track the minimum partial chunk_start seen during the run and use that as next_start_index.
🔧 Proposed fix
let mut futures: FuturesUnordered<ChunkFuture> = FuturesUnordered::new();
let mut next_chunk_index = start_index;
let mut reached_end = false;
+ let mut earliest_partial_chunk_start: Option<u64> = None;
@@
if is_partial {
reached_end = true;
+ earliest_partial_chunk_start = Some(
+ earliest_partial_chunk_start
+ .map_or(chunk_idx, |earliest| earliest.min(chunk_idx)),
+ );
}
@@
- let next_start_index = if chunk_size > 0 {
- let last_partial = chunk_results
- .iter()
- .rev()
- .find(|(_, notes)| !notes.is_empty())
- .filter(|(_, notes)| (notes.len() as u64) < chunk_size);
-
- match last_partial {
- Some((&chunk_start, _)) => chunk_start,
- None => start_index + total_notes_scanned,
- }
- } else {
- start_index + total_notes_scanned
- };
+ let next_start_index =
+        earliest_partial_chunk_start.unwrap_or(start_index + total_notes_scanned);

Also applies to: 90-92, 134-147
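The resume rule can be modeled with a toy function (hypothetical name and data, independent of the SDK types):

```rust
// Parallel chunk fetches finish out of order, so the sync must resume from
// the EARLIEST partial chunk, not the last one observed.
fn resume_point(partial_chunk_starts: &[u64], fallback: u64) -> u64 {
    partial_chunk_starts.iter().copied().min().unwrap_or(fallback)
}

fn main() {
    // Chunks starting at 600 and 200 were both partial; resuming at 600
    // would skip notes later appended into chunk 200.
    assert_eq!(resume_point(&[600, 200], 800), 200);
    // No partial chunks: resume after everything scanned.
    assert_eq!(resume_point(&[], 800), 800);
}
```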
if let Some((decrypted, address)) = try_decrypt_note(ivk, note) {
    let nf: [u8; 32] = note.nullifier.as_slice().try_into().unwrap_or([0u8; 32]);
    let cmx: [u8; 32] = note.cmx.as_slice().try_into().unwrap_or([0u8; 32]);
Don’t synthesize zero hashes for malformed note metadata.
The fallback on Lines 114-115 turns an invalid nullifier or cmx into an apparently valid DecryptedNote, which can corrupt downstream matching. Return an error instead of silently zero-filling those fields.
🔧 Proposed fix
if let Some((decrypted, address)) = try_decrypt_note(ivk, note) {
- let nf: [u8; 32] = note.nullifier.as_slice().try_into().unwrap_or([0u8; 32]);
- let cmx: [u8; 32] = note.cmx.as_slice().try_into().unwrap_or([0u8; 32]);
+ let nf: [u8; 32] = note
+ .nullifier
+ .as_slice()
+ .try_into()
+ .map_err(|_| {
+ Error::Generic(format!(
+ "invalid nullifier length for shielded note at position {}",
+ position
+ ))
+ })?;
+ let cmx: [u8; 32] = note
+ .cmx
+ .as_slice()
+ .try_into()
+ .map_err(|_| {
+ Error::Generic(format!(
+ "invalid cmx length for shielded note at position {}",
+ position
+ ))
+                })?;
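The underlying conversion behaves like this in isolation (`to_hash` is a hypothetical helper illustrating the pattern, not SDK code):

```rust
use std::convert::TryInto;

// Converting a byte slice to [u8; 32] via TryInto fails on a wrong length
// instead of requiring a zero-filled default.
fn to_hash(bytes: &[u8]) -> Result<[u8; 32], String> {
    bytes
        .try_into()
        .map_err(|_| format!("expected 32 bytes, got {}", bytes.len()))
}

fn main() {
    assert!(to_hash(&[7u8; 32]).is_ok());
    assert!(to_hash(&[7u8; 31]).is_err()); // previously this was zero-filled
    assert_eq!(to_hash(&[7u8; 32]).unwrap()[0], 7);
}
```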
// SAFETY: All fields are Send when their referents are Send.
// NullifierSyncConfig, NullifierKey ([u8; 32]), and NullifierSyncResult are all Send.
unsafe impl Send for NullifierSyncContext<'_> {}
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that the struct components are Send
cat << 'EOF' > /tmp/send_check.rs
// This is a conceptual check - NullifierSyncContext should be Send automatically
// since all its fields are references to Send types
fn assert_send<T: Send>() {}
fn main() {
// If these compile, the struct would be Send
assert_send::<&u64>(); // config fields are primitives
assert_send::<&[u8; 32]>(); // NullifierKey
assert_send::<&mut std::collections::BTreeSet<[u8; 32]>>(); // result fields
}
EOF
echo "Conceptual check created - the struct should auto-derive Send"
🏁 Script executed:
# First, let's examine the NullifierSyncContext struct definition
cd packages/rs-sdk && head -100 src/platform/shielded/nullifier_sync/mod.rs
🏁 Script executed:
# Check the types used in the struct
cd packages/rs-sdk && grep -n "struct NullifierSyncContext\|struct NullifierSyncConfig\|type NullifierKey\|struct NullifierSyncResult" src/platform/shielded/nullifier_sync/mod.rs
🏁 Script executed:
# Search for NullifierSyncConfig, NullifierKey, and NullifierSyncResult definitions
cd packages/rs-sdk && rg "struct NullifierSyncConfig|type NullifierKey|struct NullifierSyncResult" -A 5
🏁 Script executed:
# Get the full definitions of NullifierSyncConfig and NullifierSyncResult
cd packages/rs-sdk && sed -n '1,200p' src/platform/shielded/nullifier_sync/types.rs
🏁 Script executed:
# Check RequestSettings to see if it's Send
cd packages/rs-sdk && rg "pub struct RequestSettings|impl Send for RequestSettings" -A 10
🏁 Script executed:
# Also check where RequestSettings comes from and its full definition
cd packages/rs-sdk && rg "RequestSettings" src/platform/shielded/nullifier_sync/types.rs -B 2
🏁 Script executed:
# Check Cargo.toml for rs_dapi_client version
cd packages/rs-sdk && grep -A 2 "rs_dapi_client" Cargo.toml
🏁 Script executed:
# Search for RequestSettings usage elsewhere to understand its properties
cd packages/rs-sdk && rg "RequestSettings" -B 2 -A 2 | head -50
🏁 Script executed:
# Check if there are any other Send/Sync trait implementations or comments about RequestSettings
cd packages/rs-sdk && rg "Send|Sync" src/platform/shielded/nullifier_sync/ -B 1 -A 1
🏁 Script executed:
# Let's check if there's a Cargo.lock or check the version of rs_dapi_client
cd packages/rs-sdk && grep -i "rs.dapi.client" Cargo.toml Cargo.lock 2>/dev/null | head -20
🏁 Script executed:
# Try to find any Send/Sync bounds or constraints on RequestSettings in the codebase
cd packages/rs-sdk && rg "RequestSettings" -B 3 -A 3 | grep -i "send\|sync" -B 2 -A 2
🏁 Script executed:
# Check if NullifierSyncConfig itself is used in Send contexts without issues
cd packages/rs-sdk && rg "NullifierSyncConfig" -B 1 -A 1 | grep -i "send\|spawn\|thread" -B 2 -A 2
🏁 Script executed:
# Check if RequestSettings is defined in rs-dapi-client and if it's Send
find packages/rs-dapi-client -name "*.rs" -type f | xargs grep -l "RequestSettings" | head -5
🏁 Script executed:
# Look for RequestSettings definition in rs-dapi-client
find packages/rs-dapi-client -name "*.rs" -type f -exec grep -l "struct RequestSettings\|pub RequestSettings" {} \;
🏁 Script executed:
# Search for Send/Sync bounds on RequestSettings in rs-dapi-client
find packages/rs-dapi-client -name "*.rs" -type f -exec grep -B 5 -A 10 "struct RequestSettings" {} \;
🏁 Script executed:
# Get the full RequestSettings definition
cat packages/rs-dapi-client/src/request_settings.rs
🏁 Script executed:
# Check if RequestSettings has any explicit Send/Sync trait bounds or derives
grep -n "impl.*Send\|impl.*Sync" packages/rs-dapi-client/src/request_settings.rs
Remove the unnecessary unsafe impl Send — the struct auto-derives Send.
NullifierSyncContext contains only references to types that are Send. Specifically:
- `NullifierSyncConfig` holds only `Send` types (primitives, `Option<[u8; 32]>`, and `RequestSettings` which is `Copy`)
- `NullifierKey` (slice of `[u8; 32]`) is `Send`
- `NullifierSyncResult` contains only `Send` types (`BTreeSet<[u8; 32]>`, primitives, and `NullifierSyncMetrics`)
Since all referenced types are Send, the struct automatically implements Send without an explicit unsafe impl. The unsafe declaration is redundant and bypasses the compiler's verification.
♻️ Suggested removal
struct NullifierSyncContext<'a> {
config: &'a NullifierSyncConfig,
nullifiers: &'a [NullifierKey],
result: &'a mut NullifierSyncResult,
}
-
-// SAFETY: All fields are Send when their referents are Send.
-// NullifierSyncConfig, NullifierKey ([u8; 32]), and NullifierSyncResult are all Send.
-unsafe impl Send for NullifierSyncContext<'_> {}
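The auto-derivation the review relies on can be checked at compile time. The sketch below uses stand-in field types (`u64`, `Vec`) rather than the real SDK structs:

```rust
// A struct whose fields are all Send auto-implements Send,
// so no `unsafe impl` is required.
#[allow(dead_code)]
struct Context<'a> {
    config: &'a u64,
    keys: &'a [[u8; 32]],
    result: &'a mut Vec<[u8; 32]>,
}

fn assert_send<T: Send>() {}

fn main() {
    // This line compiles only because the compiler derives Send automatically.
    assert_send::<Context>();
}
```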
    pool_id_owned.as_deref(),
    key,
    depth as u8,
    expected_hash,
    platform_version,
Potential truncation when casting depth to u8.
BranchQueryParams.depth is u32, but Drive::verify_nullifiers_branch_query expects u8. The cast depth as u8 will silently truncate if depth > 255. While platform version constraints likely keep depths small, an explicit check or using try_into() would be safer.
🛡️ Proposed defensive cast
let branch_result = Drive::verify_nullifiers_branch_query(
&proof_bytes,
pool_type,
pool_id_owned.as_deref(),
key,
- depth as u8,
+ u8::try_from(depth).map_err(|_| ExecutionError {
+ inner: Error::InvalidProvedResponse(format!("depth {} exceeds u8 max", depth)),
+ address: Some(address.clone()),
+ retries,
+ })?,
expected_hash,
platform_version,
)
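The same checked-conversion pattern applies here; a minimal sketch (`depth_for_drive` is a hypothetical helper, not the Drive API):

```rust
use std::convert::TryFrom;

// `u8::try_from` rejects depths over 255 instead of wrapping them.
fn depth_for_drive(depth: u32) -> Result<u8, String> {
    u8::try_from(depth).map_err(|_| format!("depth {} exceeds u8 max", depth))
}

fn main() {
    assert_eq!(depth_for_drive(32).unwrap(), 32);
    assert!(depth_for_drive(256).is_err());
    // The unchecked cast would have wrapped 256 to 0:
    assert_eq!(256u32 as u8, 0);
}
```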
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs`:
- Around line 205-211: The code currently writes the checkpoint
(Ops::set_checkpoint_height(context, checkpoint_height)) immediately after
receiving a TrunkQueryResponse from Ops::execute_trunk_query, which can persist
partial/failed scans; change the flow so you only call
Ops::set_checkpoint_height once the scan is complete and all branch queries
succeeded and the tracker has no pending keys (i.e., after resolution logic that
inspects branch results and tracker state), and in the branches where a branch
query failed or keys remain pending (similar call sites around the code that
handle branch failures at the blocks covering lines referenced in this review)
return an Err instead of logging a warning and returning Ok, or otherwise
surface an error to avoid committing the checkpoint prematurely; update all
analogous spots (including the other Trunk/branch handling blocks noted) to
follow the same pattern and ensure checkpoint writes occur only after full
resolution.
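The commit-late rule the comment describes can be modeled with a toy state machine (all names hypothetical, independent of the trunk/branch types):

```rust
// Persist the new checkpoint height only after every branch query succeeded
// and no keys remain pending; otherwise surface an error so the caller
// retries from the old checkpoint.
struct SyncState {
    checkpoint_height: u64,
    pending_keys: usize,
    branch_failures: usize,
}

fn finalize_scan(state: &mut SyncState, scanned_height: u64) -> Result<(), String> {
    if state.branch_failures > 0 || state.pending_keys > 0 {
        return Err(format!(
            "scan incomplete: {} failures, {} pending keys; checkpoint stays at {}",
            state.branch_failures, state.pending_keys, state.checkpoint_height
        ));
    }
    state.checkpoint_height = scanned_height; // commit only on full resolution
    Ok(())
}

fn main() {
    let mut partial = SyncState { checkpoint_height: 10, pending_keys: 2, branch_failures: 0 };
    assert!(finalize_scan(&mut partial, 20).is_err());
    assert_eq!(partial.checkpoint_height, 10); // unchanged on failure

    let mut done = SyncState { checkpoint_height: 10, pending_keys: 0, branch_failures: 0 };
    assert!(finalize_scan(&mut done, 20).is_ok());
    assert_eq!(done.checkpoint_height, 20);
}
```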
- Around line 295-317: Change seen_ancestors from a BTreeSet<LeafBoundaryKey> to
a map (e.g., BTreeMap<LeafBoundaryKey, usize>) that records the maximum clamped
depth per ancestor instead of first-seen; when iterating active_leaves, compute
the clamped depth for the leaf (tree_depth.clamp(...)) or for the ancestor
(tree_depth.saturating_sub(levels_up).clamp(...)) and insert_or_update the map
entry using max(existing_depth, new_depth); do not push into result inside the
loop—after the loop iterate the map entries and build the final result tuples
(for leaves use the original LeafInfo from active_leaves, for ancestors recreate
LeafInfo with ancestor_hash/ancestor_count as you do now) so every ancestor uses
the deepest required query depth (update references to seen_ancestors,
LeafBoundaryKey, LeafInfo, active_leaves, trunk_result.get_ancestor, and result
accordingly).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9701d6d9-1e66-4cc1-a464-a39e37f46126
📒 Files selected for processing (1)
packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs
When multiple leaves are privacy-adjusted to the same ancestor, the previous code used first-seen depth. If a later leaf required a deeper query from that ancestor, its depth was silently dropped, potentially leaving keys unresolved. Now track max(depth) per deduplicated ancestor key using a BTreeMap, so every ancestor is queried at the deepest depth any of its leaves requires.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
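The max-per-ancestor bookkeeping described above can be modeled in isolation (hypothetical key and depth values; the real code uses `LeafBoundaryKey` and `LeafInfo`):

```rust
use std::collections::BTreeMap;

// When several leaves collapse to the same ancestor, keep the MAXIMUM
// required depth rather than the first-seen one.
fn dedup_max_depth(pairs: &[([u8; 4], usize)]) -> BTreeMap<[u8; 4], usize> {
    let mut by_ancestor: BTreeMap<[u8; 4], usize> = BTreeMap::new();
    for &(key, depth) in pairs {
        by_ancestor
            .entry(key)
            .and_modify(|d| *d = (*d).max(depth))
            .or_insert(depth);
    }
    by_ancestor
}

fn main() {
    let pairs = [([1, 2, 3, 4], 3), ([1, 2, 3, 4], 7), ([9, 9, 9, 9], 2)];
    let deduped = dedup_max_depth(&pairs);
    // The shared ancestor keeps the deeper requirement.
    assert_eq!(deduped[&[1, 2, 3, 4]], 7);
    assert_eq!(deduped[&[9, 9, 9, 9]], 2);
}
```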
Issue being fixed or feature implemented
Adds rs-sdk support for querying and interacting with the shielded pool, extracted from
feat/zk. This builds on the query layer added in #3228.
What was done?
Query support (Fetch + Query impls)
- ShieldedPoolState — shielded pool total balance
- ShieldedAnchors — valid anchors for spend proofs
- MostRecentShieldedAnchor — latest shielded anchor
- ShieldedEncryptedNotes — paginated encrypted note retrieval
- ShieldedNullifierStatuses — nullifier spend status check
- NullifiersTrunkState — top-level nullifier tree for privacy-preserving sync
- RecentNullifierChanges — per-block nullifier insertions
- RecentCompactedNullifierChanges — compacted nullifier change ranges

Transition helpers
- shield — shield credits into the shielded pool
- shield_from_asset_lock — shield from an asset lock
- unshield — unshield credits back to transparent balance
- shielded_transfer — transfer within the shielded pool
- shielded_withdrawal — withdraw from shielded pool to L1

Nullifier sync
Other
How Has This Been Tested?
cargo check -p dash-sdk passes cleanly

Breaking Changes
None
Checklist:
🤖 Generated with Claude Code
Summary by CodeRabbit