feat: add mempool support #558
xdustinface wants to merge 1 commit into `feat/transaction-status-events` from
Conversation
📝 Walkthrough

This PR introduces comprehensive mempool transaction synchronization functionality to the Dash SPV client, adding transaction pool monitoring, per-peer relay coordination, bloom filter-based filtering, InstantSend integration, wallet event emission, and FFI bindings for external language interoperability.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Peer
    participant MempoolManager
    participant WalletInterface
    participant SyncCoordinator
    participant WalletEventChannel
    activate MempoolManager
    MempoolManager->>Peer: request_mempool()
    Peer-->>MempoolManager: send Inv(txids)
    MempoolManager->>MempoolManager: filter_inv(bloom_filter)
    MempoolManager->>Peer: send getdata(eligible_txids)
    Peer-->>MempoolManager: send Tx(transaction)
    MempoolManager->>WalletInterface: process_mempool_transaction(tx, is_instant_send)
    activate WalletInterface
    WalletInterface->>WalletInterface: check_relevance(outpoints, addresses)
    WalletInterface-->>MempoolManager: MempoolTransactionResult{is_relevant, net_amount, new_addresses}
    deactivate WalletInterface
    MempoolManager->>MempoolManager: update_progress(counters)
    alt is_relevant && new_addresses
        MempoolManager->>MempoolManager: rebuild_filter(wallet_data)
        MempoolManager->>Peer: send FilterLoad(new_filter)
    end
    MempoolManager->>WalletEventChannel: emit TransactionReceived(Mempool)
    MempoolManager->>SyncCoordinator: emit SyncEvent::TransactionReceived
    deactivate MempoolManager
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Force-pushed from 36f64ee to 1106a91
Codecov Report

❌ Patch coverage. Additional details and impacted files:

```text
@@ Coverage Diff @@
## feat/transaction-status-events #558 +/- ##
==================================================================
+ Coverage 66.43% 67.36% +0.92%
==================================================================
Files 312 316 +4
Lines 65131 66887 +1756
==================================================================
+ Hits 43270 45058 +1788
+ Misses 21861 21829 -32
```
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
Force-pushed from 1106a91 to c022a39
@CodeRabbit review

✅ Actions performed: Review triggered.
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv-ffi/src/callbacks.rs (1)
157-170: ⚠️ Potential issue | 🟡 Minor

Document the `confirmed_txids` borrowed buffer lifetime in the callback doc comment.

The `OnBlockProcessedCallback` documentation describes the `hash` pointer lifetime but omits `confirmed_txids`. This parameter is also a borrowed pointer: it points to a temporary buffer that is destroyed immediately when the callback returns (see dispatch code at line 362). The caller must copy the array if it needs to retain the data beyond the callback. Update the doc comment to document this alongside the `hash` contract, specifying that `confirmed_txids` references `confirmed_txid_count` hashes and is only valid for the callback duration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/callbacks.rs` around lines 157 - 170, Update the OnBlockProcessedCallback doc comment to document that confirmed_txids is a borrowed pointer (like hash) that references confirmed_txid_count consecutive 32-byte txid entries and is only valid for the lifetime of the callback; callers must memcpy/duplicate the confirmed_txids buffer if they need to retain it after the callback returns. Mention both confirmed_txids and confirmed_txid_count along with the existing hash lifetime contract so the caller knows to copy both the single hash and the array of confirmed txids before returning.
🧹 Nitpick comments (6)
dash-spv/src/test_utils/node.rs (1)
377-382: Consider using exact address matching for peer verification.

The current IP-prefix check (`starts_with(&addr.ip().to_string())`) would match any peer on the same IP regardless of port. In test scenarios with multiple nodes on localhost (different ports), this could incorrectly report success when connecting to a different node than intended.

♻️ Proposed fix for exact address matching

```diff
 for _ in 0..30 {
     let peers = client.get_peer_info().expect("failed to get peer info");
-    if peers.iter().any(|p| p.addr.to_string().starts_with(&addr.ip().to_string())) {
+    if peers.iter().any(|p| p.addr == addr.to_string()) {
         tracing::info!("Connected to node {}", addr);
         return;
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/test_utils/node.rs` around lines 377 - 382, The peer check is using an IP-prefix match which can falsely match different ports; update the loop that calls client.get_peer_info() and inspects peers (the variable peers and each p.addr) to perform an exact address comparison including port — e.g., compare p.addr.to_string() == addr.to_string() or compare the socket-address types directly rather than using starts_with(&addr.ip().to_string()); keep the existing logging/tracing and return behavior unchanged.

dash-spv/src/sync/mempool/mod.rs (1)
9-11: Consider making bloom false-positive rate configurable in a follow-up.

The TODO is on point; exposing this via config (or privacy profiles) would make mempool behavior easier to tune per deployment.
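As a sketch of what that could look like, assuming a hypothetical `MempoolConfig` type (the field and method names here are illustrative, not the current dash-spv API):

```rust
/// Illustrative sketch: field and method names are hypothetical, not the
/// actual dash-spv API.
#[derive(Debug, Clone)]
pub struct MempoolConfig {
    /// Bloom filter false-positive rate; must lie in the open interval (0, 1).
    pub bloom_false_positive_rate: f64,
}

impl Default for MempoolConfig {
    fn default() -> Self {
        // Keeps the current hard-coded BLOOM_FALSE_POSITIVE_RATE default.
        Self { bloom_false_positive_rate: 0.0005 }
    }
}

impl MempoolConfig {
    /// Returns the configured rate, falling back to the default when the
    /// configured value is outside (0, 1).
    pub fn validated_rate(&self) -> f64 {
        let r = self.bloom_false_positive_rate;
        if r > 0.0 && r < 1.0 {
            r
        } else {
            Self::default().bloom_false_positive_rate
        }
    }
}
```

The filter constructors would then take `config.validated_rate()` instead of the constant, so an absent or invalid setting still yields the current behavior.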
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/mod.rs` around lines 9 - 11, The constant BLOOM_FALSE_POSITIVE_RATE is hard-coded; make it configurable by wiring a config or privacy profile into the mempool module, e.g., replace the const BLOOM_FALSE_POSITIVE_RATE with a value loaded from a config struct or settings object passed into the mempool (or a MempoolConfig/PrivacyLevel enum) and use that value wherever BLOOM_FALSE_POSITIVE_RATE is referenced (update constructors/functions in mod.rs that create the bloom filter to accept the configured rate); ensure sensible defaults (0.0005) remain if config is absent and validate the rate is within (0,1).

dash-spv/src/sync/progress.rs (1)
37-39: Clarify aggregate semantics for mempool progress.
`mempool` is now stored, displayed, and queryable, but aggregate methods (`state()`, `is_synced()`, `percentage()`) still ignore it. If this is intentional (non-blocking monitor), add explicit docs; otherwise wire it into aggregation for consistency.

Also applies to: 161-165, 221-227, 254-256
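If the "wire it into aggregation" option is chosen, the None-as-neutral handling could look like this simplified sketch (the struct fields here are stand-ins for the real `SyncProgress`/`MempoolProgress` types, not the actual dash-spv definitions):

```rust
/// Simplified stand-ins for the real dash-spv progress types.
pub struct MempoolProgress {
    pub synced: bool,
    pub percentage: f64,
}

pub struct SyncProgress {
    pub headers_percentage: f64,
    pub headers_synced: bool,
    /// `None` means mempool monitoring is inactive and treated as neutral.
    pub mempool: Option<MempoolProgress>,
}

impl SyncProgress {
    /// When mempool progress is present it must also be synced; when absent
    /// it does not block the aggregate.
    pub fn is_synced(&self) -> bool {
        let mempool_ok = self.mempool.as_ref().map_or(true, |m| m.synced);
        self.headers_synced && mempool_ok
    }

    /// Averages in the mempool percentage only when it exists.
    pub fn percentage(&self) -> f64 {
        match &self.mempool {
            Some(m) => (self.headers_percentage + m.percentage) / 2.0,
            None => self.headers_percentage,
        }
    }
}
```

The key design point is that `Option::map_or(true, …)` makes a missing mempool phase neutral rather than a permanent blocker on `is_synced()`.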
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/progress.rs` around lines 37 - 39, The mempool Option<MempoolProgress> is currently collected but not considered by the aggregate methods; if this is intentional, add explicit docs stating that mempool is a non-blocking/auxiliary monitor and is deliberately excluded from aggregation. Update the doc comment on the mempool field (MempoolProgress) and the docstrings for state(), is_synced(), and percentage() to say they ignore mempool, and mirror the same clarification wherever those aggregates are re-exported or called (the other referenced locations that read or display mempool). If instead you want mempool included, modify state() to combine mempool.state() with the existing aggregate, change is_synced() to require mempool.is_synced() when Some(...), and incorporate mempool.percentage() into percentage() (handle None as neutral), but choose one approach and make the docs reflect that change.

dash-spv/src/sync/mempool/progress.rs (1)
144-154: Consider strengthening the timing assertion.

The test relies on a 1ms sleep to ensure time difference, but the assertion `p.last_activity() >= before` would pass even if `last_activity` wasn't updated (since the new `Instant::now()` would always be >= before). A stricter check would be `p.last_activity() > before`, though this assumes the sleep is long enough to guarantee distinct `Instant` values.

🔧 Optional: Use stricter assertion

```diff
-assert!(p.last_activity() >= before);
+assert!(p.last_activity() > before, "last_activity should be updated after mutation");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/progress.rs` around lines 144 - 154, The test test_last_activity_updated_on_mutation currently asserts p.last_activity() >= before which can pass even if last_activity wasn't updated; change the assertion to p.last_activity() > before and increase the sleep to a slightly larger duration (e.g., from 1ms to 2–5ms) to reliably ensure distinct Instant values when calling MempoolProgress::default(), then p.add_received(1) so last_activity() is actually newer than before.

dash-spv/tests/dashd_sync/tests_mempool.rs (1)
21-49: Consider adding `#[ignore]` attribute for network-dependent tests.

Per coding guidelines: "Mark network-dependent or long-running tests with `#[ignore]` attribute; run with `-- --ignored`". These integration tests require a running dashd node and have a 30-second timeout, making them candidates for the `#[ignore]` attribute.

However, the early-return pattern when `TestContext::new()` returns `None` or `supports_mining` is false provides a reasonable fallback, so this is a soft recommendation.

🔧 Optional: Add #[ignore] for CI-friendly behavior

```diff
+#[ignore]
 #[tokio::test]
 async fn test_mempool_detects_incoming_tx() {
```

Apply similarly to other tests in this file.
As per coding guidelines: "Mark network-dependent or long-running tests with `#[ignore]` attribute; run with `-- --ignored`".
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/dashd_sync/tests_mempool.rs` around lines 21 - 49, The test test_mempool_detects_incoming_tx is network-dependent and should be marked with #[ignore] so it only runs when explicitly requested; add the #[ignore] attribute above the async fn test_mempool_detects_incoming_tx() declaration and apply the same attribute to other integration tests in this module (e.g., other top-level #[tokio::test] functions) that require a running dashd or are long-running, keeping the existing early-return guards intact.

dash-spv/src/sync/mempool/manager.rs (1)
532-1663: Add two regression tests for uncovered edge paths.

Coverage is strong, but adding tests for (1) empty-wallet bloom activation followed by later address/outpoint appearance, and (2) exceeding `max_transactions` from already in-flight responses would lock down the two high-risk paths above.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/manager.rs` around lines 532 - 1663, Add two regression tests inside the existing tests module: (1) a Bloom-empty-then-later-change test that uses create_bloom_manager() to activate a peer with an empty wallet, then mutates manager.wallet by adding addresses and/or outpoints and calls check_filter_staleness(&requests) to assert manager.filter_loaded becomes true, filter_address_count/filter_outpoint_count updated, and that a FilterLoad/ MemPool message was sent; (2) an in-flight-excess-response test that fills manager.pending_requests to MAX_IN_FLIGHT, enqueues additional txids on an activated peer via manager.peers (Some(VecDeque::...)), then simulates many transaction responses (add transactions into manager.mempool_state or remove pending entries) and calls send_queued(&requests) to assert pending_requests never exceeds MAX_IN_FLIGHT and peer queues are drained/unchanged as expected; reference MempoolManager methods activate_peer, check_filter_staleness, send_queued and fields pending_requests, peers, filter_loaded, filter_address_count, filter_outpoint_count, and MAX_IN_FLIGHT when locating code to extend.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/ARCHITECTURE.md`:
- Around line 1124-1131: The fenced code block showing the sync/mempool
directory listing (the block containing "sync/mempool/" and its subitems) lacks
a language tag; update that fence to include an explicit language (e.g., text)
and do the same for the other similar fenced block referenced around lines
1144-1169 so markdownlint stops flagging them. Locate the fences in
ARCHITECTURE.md that wrap the directory listings (the blocks containing
"sync/mempool/" and the second listing) and prepend the opening ``` with an
appropriate language identifier such as ```text.
- Around line 1194-1200: The current staleness check only compares
address/outpoint counts and misses content changes; update the manager to track
a deterministic fingerprint/revision (e.g., a stable hash) of the actual watched
sets whenever the set changes and include that fingerprint in the filter
snapshot used by check_filter_staleness(); change handle_tx(), the
BlockProcessed path that passes new_addresses, and any code that mutates the
watched sets to update/set needs_filter_rebuild and the new fingerprint, and
make rebuild_filter() refresh the snapshot fingerprint after installing the new
filter so subsequent check_filter_staleness() compares fingerprints (not just
counts) to decide rebuilds.
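The count-vs-content distinction above can be sketched with a deterministic fingerprint of the watched sets. This is a simplified illustration (addresses and outpoints reduced to plain strings, `DefaultHasher` as the hash), not the actual dash-spv code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Deterministic fingerprint over the watched sets. Sorting first makes the
/// result independent of set iteration order, so equal contents always hash
/// equal, and any content change alters the fingerprint even when the counts
/// stay the same. (A production version should use an explicitly stable hash
/// rather than DefaultHasher, whose output may change across Rust releases.)
pub fn watched_set_fingerprint(addresses: &[String], outpoints: &[String]) -> u64 {
    let mut addrs: Vec<&String> = addresses.iter().collect();
    let mut outs: Vec<&String> = outpoints.iter().collect();
    addrs.sort();
    outs.sort();
    let mut h = DefaultHasher::new();
    for a in &addrs {
        a.hash(&mut h);
    }
    // Domain separator so moving an item between the two sets changes the hash.
    0u8.hash(&mut h);
    for o in &outs {
        o.hash(&mut h);
    }
    h.finish()
}
```

`check_filter_staleness()` would then compare the stored fingerprint against a freshly computed one instead of comparing element counts.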
In `@dash-spv/src/sync/mempool/bloom.rs`:
- Around line 14-23: The function address_payload_bytes currently swallows
unsupported Address payloads by returning None, which silently produces partial
BloomFilter coverage; change it to fail fast by returning a Result and propagate
errors so callers can choose FetchAll or abort: update address_payload_bytes to
return Result<Vec<u8>, AddressPayloadError> (or similar), replace the current
'_' arm to return an Err containing the address and a clear variant (e.g.,
UnsupportedAddressType), and adjust callers of address_payload_bytes (and any
code constructing BloomFilter) to handle the Err by falling back to FetchAll or
returning an error upstream; reference symbols: address_payload_bytes, Address,
Payload, and the BloomFilter construction call-sites that consume this function.
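A sketch of the fail-fast shape suggested above, using a simplified stand-in for dashcore's `Payload` enum (the error type and variant names are illustrative assumptions):

```rust
/// Simplified stand-in for dashcore's Payload; the real function operates on
/// Address/Payload from the dashcore crate.
pub enum Payload {
    PubkeyHash([u8; 20]),
    ScriptHash([u8; 20]),
    Other(String),
}

/// Hypothetical error type for unsupported address payloads.
#[derive(Debug, PartialEq)]
pub enum AddressPayloadError {
    UnsupportedAddressType(String),
}

/// Fail fast instead of silently returning None: the caller can then fall
/// back to fetch-all behavior or abort, rather than run with a partial
/// bloom filter that quietly misses some watched addresses.
pub fn address_payload_bytes(p: &Payload) -> Result<Vec<u8>, AddressPayloadError> {
    match p {
        Payload::PubkeyHash(h) | Payload::ScriptHash(h) => Ok(h.to_vec()),
        Payload::Other(kind) => {
            Err(AddressPayloadError::UnsupportedAddressType(kind.clone()))
        }
    }
}
```

Callers building the `BloomFilter` would propagate the `Err` (or switch to FetchAll) instead of skipping the address.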
In `@dash-spv/src/sync/mempool/manager.rs`:
- Around line 146-149: When load_bloom_filter sees addresses.is_empty() &&
outpoints.is_empty() it currently returns early without updating filter state,
which lets check_filter_staleness ignore rebuilds because filter_loaded stays
false; change load_bloom_filter so that the empty-case still updates the bloom
filter lifecycle (set filter_loaded = true, update last_loaded/last_rebuilt
timestamps, and store/assign an empty/default Bloom filter state) so future
transitions out of the empty state trigger normal rebuilds. Apply the same
behavior to the other early-returns referenced (the similar checks in
load_bloom_filter and any early-exits around the rebuild logic) and ensure
check_filter_staleness relies on those lifecycle fields (filter_loaded,
last_loaded/last_rebuilt) to decide staleness for functions load_bloom_filter
and check_filter_staleness.
- Around line 329-343: The code currently always inserts relevant transactions
into the mempool (via UnconfirmedTransaction::new and state.add_transaction
inside the block that locks self.mempool_state), which allows tracked entries to
exceed the configured max_transactions when concurrent responses finish; modify
the insertion path to check the configured capacity before adding: read the cap
(e.g., self.max_transactions or equivalent), acquire the write lock on
self.mempool_state, and only call state.add_transaction(unconfirmed_tx) and
update self.progress.set_tracked(...) if state.transactions.len() is below the
cap (or evict oldest/lowest-priority entries to make room if your policy
requires), otherwise drop/skip the insertion and ensure progress/tracking
reflects the capped size. Ensure the same guard is applied where
UnconfirmedTransaction::new is constructed and before state.add_transaction so
racey completion cannot exceed the cap.
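The guarded insertion could look like this sketch (types and names simplified; in the real code the check and insert happen under the same write lock on `mempool_state` so concurrent completions cannot exceed the cap):

```rust
use std::collections::HashMap;

/// Simplified stand-in for the manager's mempool state.
pub struct MempoolState {
    pub transactions: HashMap<String, u64>, // txid -> received-at (simplified)
}

/// Insert only while under the configured cap; the capacity check and the
/// insertion must share one critical section so racy completions cannot
/// push the tracked set past max_transactions.
pub fn add_if_capacity(
    state: &mut MempoolState,
    max_transactions: usize,
    txid: String,
    received_at: u64,
) -> bool {
    if state.transactions.len() >= max_transactions {
        return false; // drop/skip; progress reporting keeps the capped size
    }
    state.transactions.insert(txid, received_at);
    true
}
```

An eviction policy (oldest or lowest priority) could replace the plain skip if the manager prefers making room over dropping.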
In `@dash-spv/src/sync/sync_coordinator.rs`:
- Around line 81-82: SyncCoordinator::new currently seeds initial_progress from
the older manager set but ignores Managers.mempool, causing inconsistent first
reads; update SyncCoordinator::new to include the mempool's current progress
when Managers.mempool is Some by extracting the mempool manager's
snapshot/progress and merging it into initial_progress (i.e., call the mempool
manager's current snapshot/progress accessor and include those fields in the
ProgressSnapshot used to seed initial_progress).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 628b1a04-8991-4882-91ad-09805746e821
📒 Files selected for processing (34)
dash-spv-ffi/FFI_API.md, dash-spv-ffi/include/dash_spv_ffi.h, dash-spv-ffi/src/bin/ffi_cli.rs, dash-spv-ffi/src/callbacks.rs, dash-spv-ffi/src/types.rs, dash-spv-ffi/tests/dashd_sync/callbacks.rs, dash-spv/ARCHITECTURE.md, dash-spv/src/client/lifecycle.rs, dash-spv/src/network/manager.rs, dash-spv/src/network/mod.rs, dash-spv/src/sync/blocks/manager.rs, dash-spv/src/sync/events.rs, dash-spv/src/sync/identifier.rs, dash-spv/src/sync/mempool/bloom.rs, dash-spv/src/sync/mempool/manager.rs, dash-spv/src/sync/mempool/mod.rs, dash-spv/src/sync/mempool/progress.rs, dash-spv/src/sync/mempool/sync_manager.rs, dash-spv/src/sync/mod.rs, dash-spv/src/sync/progress.rs, dash-spv/src/sync/sync_coordinator.rs, dash-spv/src/sync/sync_manager.rs, dash-spv/src/test_utils/node.rs, dash-spv/src/types.rs, dash-spv/tests/dashd_sync/helpers.rs, dash-spv/tests/dashd_sync/main.rs, dash-spv/tests/dashd_sync/setup.rs, dash-spv/tests/dashd_sync/tests_mempool.rs, key-wallet-manager/src/lib.rs, key-wallet-manager/src/test_utils/wallet.rs, key-wallet-manager/src/wallet_interface.rs, key-wallet-manager/src/wallet_manager/event_tests.rs, key-wallet-manager/src/wallet_manager/mod.rs, key-wallet-manager/src/wallet_manager/process_block.rs
See the changes in `ARCHITECTURE.md` for a detailed description.

- Add `MempoolManager` that activates after initial sync to monitor unconfirmed transactions via BIP37 bloom filters or local address matching. Includes peer relay management, dedup tracking, IS lock handling, and auto-rebuilding filters on address pool changes.
- Extend `WalletInterface` with `MempoolTransactionResult` return type and `watched_outpoints()` for bloom filter construction. Wire mempool manager into `SyncCoordinator` and propagate `confirmed_txids` through `BlockProcessed` events for mempool eviction.
- Add FFI bindings, dashd integration tests, and wallet unit tests.
Force-pushed from c022a39 to 1af96dd
This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
Force-pushed from 92cf7db to 39d06fb
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (4)
dash-spv/src/client/lifecycle.rs (1)
164-164: Address TODO: Populate monitored addresses for bloom filter construction.

The empty `HashSet` passed here means the mempool filter won't match any wallet addresses until it's rebuilt. This should be populated from the wallet's monitored addresses to enable bloom filter matching on initial startup.

Would you like me to help implement address population from the wallet, or should this be tracked as a follow-up issue?
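A minimal sketch of the suggested fallback behavior, with the wallet accessor reduced to an `Option` parameter (the real `WalletInterface` accessor name and return type may differ):

```rust
use std::collections::HashSet;

/// Builds the startup filter address set from the wallet's monitored
/// addresses, defaulting to empty when none are available so startup stays
/// robust. The Option parameter stands in for whatever accessor the real
/// wallet interface exposes.
pub fn startup_filter_addresses(monitored: Option<Vec<String>>) -> HashSet<String> {
    monitored.map(|v| v.into_iter().collect()).unwrap_or_default()
}
```

This replaces the bare `HashSet::new()` so the initial bloom filter can match wallet addresses before the first rebuild.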
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/lifecycle.rs` at line 164, The empty HashSet passed to the mempool bloom filter (currently HashSet::new()) must be replaced with the wallet's monitored addresses: locate the bloom/mempool filter construction in lifecycle.rs where HashSet::new() is used, call the wallet API that returns monitored addresses (e.g., wallet.monitored_addresses() or wallet.get_monitored_addresses()), convert that collection into a HashSet of the expected address type (e.g., monitored_addresses.iter().cloned().collect::<HashSet<_>>()), and use that HashSet in place of HashSet::new(); also handle the wallet method returning an Option or Result by defaulting to an empty set on error so startup remains robust.

dash-spv-ffi/src/callbacks.rs (1)
157-171: Consider updating doc comment to document new parameters.

The `OnBlockProcessedCallback` doc comment only mentions the `hash` pointer lifetime. The new `confirmed_txids` pointer has the same borrowed lifetime semantics.

📝 Suggested doc comment update

```diff
 /// Callback for SyncEvent::BlockProcessed
 ///
-/// The `hash` pointer is borrowed and only valid for the duration of the
-/// callback. Callers must memcpy/duplicate it to retain the value after
-/// the callback returns.
+/// The `hash` and `confirmed_txids` pointers are borrowed and only valid for
+/// the duration of the callback. Callers must memcpy/duplicate them to retain
+/// the values after the callback returns.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/callbacks.rs` around lines 157 - 171, Update the doc comment for OnBlockProcessedCallback to document that the confirmed_txids pointer (and confirmed_txid_count) is borrowed and only valid for the duration of the callback, just like the hash pointer; callers must memcpy/duplicate each 32-byte txid if they need to retain them after the callback returns. Reference the OnBlockProcessedCallback type, the confirmed_txids parameter, and confirmed_txid_count so readers know which symbols the lifetime rule applies to. Ensure the wording mirrors the existing hash lifetime note and mentions that confirmed_txids elements are 32-byte arrays.

dash-spv/src/sync/mempool/sync_manager.rs (2)
63-81: Minor: Simplify control flow in `SyncComplete` handler.

The explicit `return Ok(vec![])` at line 73 and the final `Ok(vec![])` at line 80 create slightly redundant code paths. Both branches return the same value.

♻️ Optional simplification

```diff
 SyncEvent::SyncComplete { .. } => {
     if self.state() != SyncState::Synced {
         self.activate_all_peers(requests).await?;
         let has_activated = self.peers.values().any(|v| v.is_some());
         if has_activated {
             self.set_state(SyncState::Synced);
             tracing::info!("Mempool manager activated on all peers");
-            return Ok(vec![]);
-        } else {
+        } else if !has_activated {
             tracing::warn!(
                 "Sync complete but no peers available for mempool activation"
             );
         }
     }
     Ok(vec![])
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/sync_manager.rs` around lines 63 - 81, The SyncComplete match arm in the SyncEvent handler redundantly returns Ok(vec![]) twice; simplify by removing the inner explicit return and let the function return the single Ok(vec![]) at the end of the arm. Locate the SyncEvent::SyncComplete branch in sync_manager.rs (references: self.state(), activate_all_peers(requests).await, self.peers, has_activated, self.set_state(SyncState::Synced)) and eliminate the early "return Ok(vec![])" so both branches share the final "Ok(vec![])" response.
25-36: Consider adding state validation in `start_sync`.

The default `SyncManager::start_sync` implementation calls `ensure_not_started(self.state(), self.identifier())?` before proceeding. This implementation skips that validation, which could allow `start_sync` to be called in unexpected states. While this may be intentional for mempool's recovery semantics (re-activation after disconnect), consider documenting why the validation is omitted or adding a more permissive state check.

💡 Optional: Add state validation or documentation

```diff
 async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+    // Note: Unlike other managers, mempool allows start_sync from any state
+    // to support re-activation after disconnect recovery.
     // After a full disconnect, re-activate mempool on all connected peers
     self.activate_all_peers(requests).await?;
```

Or add explicit state validation if needed:

```diff
 async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+    // Only proceed if we're waiting for connections (after disconnect)
+    if self.state() == SyncState::Synced {
+        return Ok(vec![]);
+    }
     // After a full disconnect, re-activate mempool on all connected peers
     self.activate_all_peers(requests).await?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/sync/mempool/sync_manager.rs` around lines 25 - 36, The start_sync implementation omits the usual validation ensure_not_started(self.state(), self.identifier())? from SyncManager::start_sync; either call that helper at the top of start_sync to prevent starting from invalid states (e.g., insert ensure_not_started(self.state(), self.identifier())? before activate_all_peers), or add a short doc comment on start_sync explaining why the validation is intentionally skipped for mempool recovery semantics; keep references to start_sync, ensure_not_started, self.state(), self.identifier(), and activate_all_peers in the change so reviewers can easily locate the adjustment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/sync/mempool/filter.rs`:
- Around line 39-48: The code silently truncates element_count by casting usize
to u32; compute the sum using addresses.len().checked_add(outpoints.len()) and
if that returns None map to SyncError::Validation, then convert to u32 with
u32::try_from(...) and map any Err to SyncError::Validation; use that safe u32
value when calling BloomFilter::new (and ensure the early-zero branch still uses
the same validated path for consistency with BloomFilter::new and
FilterLoad::from_bloom_filter).
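The checked arithmetic suggested above can be sketched as follows, with a stand-in `SyncError::Validation` variant (the real error type lives in dash-spv):

```rust
/// Stand-in for the real dash-spv error type.
#[derive(Debug, PartialEq)]
pub enum SyncError {
    Validation(String),
}

/// Overflow-safe element count for BloomFilter::new: checked_add guards the
/// usize sum and u32::try_from guards the narrowing conversion; both map to
/// a validation error instead of silently truncating.
pub fn element_count(addresses: usize, outpoints: usize) -> Result<u32, SyncError> {
    let total = addresses
        .checked_add(outpoints)
        .ok_or_else(|| SyncError::Validation("element count overflow".into()))?;
    u32::try_from(total)
        .map_err(|_| SyncError::Validation("element count exceeds u32".into()))
}
```

The early-zero branch can call the same helper so both paths share one validated conversion.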
---
Nitpick comments:
In `@dash-spv-ffi/src/callbacks.rs`:
- Around line 157-171: Update the doc comment for OnBlockProcessedCallback to
document that the confirmed_txids pointer (and confirmed_txid_count) is borrowed
and only valid for the duration of the callback, just like the hash pointer;
callers must memcpy/duplicate each 32-byte txid if they need to retain them
after the callback returns. Reference the OnBlockProcessedCallback type, the
confirmed_txids parameter, and confirmed_txid_count so readers know which
symbols the lifetime rule applies to. Ensure the wording mirrors the existing
hash lifetime note and mentions that confirmed_txids elements are 32-byte
arrays.
In `@dash-spv/src/client/lifecycle.rs`:
- Line 164: The empty HashSet passed to the mempool bloom filter (currently
HashSet::new()) must be replaced with the wallet's monitored addresses: locate
the bloom/mempool filter construction in lifecycle.rs where HashSet::new() is
used, call the wallet API that returns monitored addresses (e.g.,
wallet.monitored_addresses() or wallet.get_monitored_addresses()), convert that
collection into a HashSet of the expected address type (e.g.,
monitored_addresses.iter().cloned().collect::<HashSet<_>>()), and use that
HashSet in place of HashSet::new(); also handle the wallet method returning an
Option or Result by defaulting to an empty set on error so startup remains
robust.
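A minimal sketch of the suggested wiring, assuming a wallet method that returns a `Result`; the `Wallet` type and `monitored_addresses` name are assumptions for illustration, not the real key-wallet-manager API:

```rust
use std::collections::HashSet;

// Hypothetical wallet handle; the real type lives in key-wallet-manager.
pub struct Wallet {
    pub addrs: Vec<String>,
}

impl Wallet {
    // Assumed accessor; may be fallible in the real API.
    pub fn monitored_addresses(&self) -> Result<Vec<String>, String> {
        Ok(self.addrs.clone())
    }
}

/// Build the address set for the mempool bloom filter, defaulting to an
/// empty set on error so client startup remains robust.
pub fn filter_addresses(wallet: &Wallet) -> HashSet<String> {
    wallet
        .monitored_addresses()
        .map(|addrs| addrs.into_iter().collect())
        .unwrap_or_default()
}
```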
In `@dash-spv/src/sync/mempool/sync_manager.rs`:
- Around line 63-81: The SyncComplete match arm in the SyncEvent handler
redundantly returns Ok(vec![]) twice; simplify by removing the inner explicit
return and let the function return the single Ok(vec![]) at the end of the arm.
Locate the SyncEvent::SyncComplete branch in sync_manager.rs (references:
self.state(), activate_all_peers(requests).await, self.peers, has_activated,
self.set_state(SyncState::Synced)) and eliminate the early "return Ok(vec![])"
so both branches share the final "Ok(vec![])" response.
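The simplified arm reads roughly like this; the manager and its state handling are stubbed, and only the control flow matters: both branches fall through to one trailing `Ok(vec![])` instead of one branch returning it early.

```rust
#[derive(Debug, PartialEq)]
enum SyncState {
    Syncing,
    Synced,
}

// Stub standing in for the real mempool sync manager.
struct Manager {
    state: SyncState,
    has_activated: bool,
}

impl Manager {
    fn set_state(&mut self, state: SyncState) {
        self.state = state;
    }

    /// SyncComplete handling with the redundant early return removed.
    fn handle_sync_complete(&mut self) -> Result<Vec<u8>, String> {
        if self.has_activated {
            self.set_state(SyncState::Synced);
        }
        Ok(vec![])
    }
}
```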
- Around line 25-36: The start_sync implementation omits the usual validation
ensure_not_started(self.state(), self.identifier())? from
SyncManager::start_sync; either call that helper at the top of start_sync to
prevent starting from invalid states (e.g., insert
ensure_not_started(self.state(), self.identifier())? before activate_all_peers),
or add a short doc comment on start_sync explaining why the validation is
intentionally skipped for mempool recovery semantics; keep references to
start_sync, ensure_not_started, self.state(), self.identifier(), and
activate_all_peers in the change so reviewers can easily locate the adjustment.
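The guard option can be sketched like this; `SyncState`, `ensure_not_started`, and the error type are simplified stand-ins for the crate's real definitions:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum SyncState {
    Idle,
    Syncing,
    Synced,
}

/// Reject starting a sync manager that is already running or finished.
pub fn ensure_not_started(state: SyncState, identifier: &str) -> Result<(), String> {
    match state {
        SyncState::Idle => Ok(()),
        other => Err(format!("{identifier}: cannot start from state {other:?}")),
    }
}

/// start_sync with the validation inserted before peer activation.
pub fn start_sync(state: SyncState) -> Result<(), String> {
    ensure_not_started(state, "mempool")?;
    // activate_all_peers(...) would run here in the real manager.
    Ok(())
}
```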
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f3faf2ac-033e-4df0-8f07-800784a6af89
📒 Files selected for processing (34)
- dash-spv-ffi/FFI_API.md
- dash-spv-ffi/include/dash_spv_ffi.h
- dash-spv-ffi/src/bin/ffi_cli.rs
- dash-spv-ffi/src/callbacks.rs
- dash-spv-ffi/src/types.rs
- dash-spv-ffi/tests/dashd_sync/callbacks.rs
- dash-spv/ARCHITECTURE.md
- dash-spv/src/client/lifecycle.rs
- dash-spv/src/network/manager.rs
- dash-spv/src/network/mod.rs
- dash-spv/src/sync/blocks/manager.rs
- dash-spv/src/sync/events.rs
- dash-spv/src/sync/identifier.rs
- dash-spv/src/sync/mempool/filter.rs
- dash-spv/src/sync/mempool/manager.rs
- dash-spv/src/sync/mempool/mod.rs
- dash-spv/src/sync/mempool/progress.rs
- dash-spv/src/sync/mempool/sync_manager.rs
- dash-spv/src/sync/mod.rs
- dash-spv/src/sync/progress.rs
- dash-spv/src/sync/sync_coordinator.rs
- dash-spv/src/sync/sync_manager.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/src/types.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/tests/dashd_sync/main.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_mempool.rs
- key-wallet-manager/src/lib.rs
- key-wallet-manager/src/test_utils/wallet.rs
- key-wallet-manager/src/wallet_interface.rs
- key-wallet-manager/src/wallet_manager/event_tests.rs
- key-wallet-manager/src/wallet_manager/mod.rs
- key-wallet-manager/src/wallet_manager/process_block.rs
✅ Files skipped from review due to trivial changes (5)
- dash-spv/tests/dashd_sync/main.rs
- dash-spv/src/sync/mod.rs
- dash-spv/src/sync/mempool/mod.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/dashd_sync/tests_mempool.rs
🚧 Files skipped from review as they are similar to previous changes (15)
- dash-spv/src/sync/identifier.rs
- dash-spv/src/types.rs
- key-wallet-manager/src/lib.rs
- dash-spv/src/sync/blocks/manager.rs
- dash-spv/src/network/manager.rs
- dash-spv-ffi/FFI_API.md
- dash-spv/src/sync/sync_coordinator.rs
- dash-spv/src/sync/progress.rs
- dash-spv/ARCHITECTURE.md
- key-wallet-manager/src/wallet_manager/mod.rs
- key-wallet-manager/src/wallet_interface.rs
- dash-spv/src/sync/mempool/progress.rs
- dash-spv/src/test_utils/node.rs
- dash-spv/tests/dashd_sync/helpers.rs
- dash-spv/src/sync/mempool/manager.rs
See the changes in `ARCHITECTURE.md` for a detailed description.

- `MempoolManager` that activates after initial sync to monitor unconfirmed transactions via BIP37 bloom filters or local address matching. Includes peer relay management, dedup tracking, IS lock handling, and auto-rebuilding filters on address pool changes.
- `WalletInterface` with `MempoolTransactionResult` return type and `watched_outpoints()` for bloom filter construction. Wire mempool manager into `SyncCoordinator` and propagate `confirmed_txids` through `BlockProcessed` events for mempool eviction.

Based on:
- `current_sync_peer` from network manager #511
- `getdata` requests to the peer that sent the `inv` #527
- `TransactionStatusChanged` event and emit wallet events #552

Summary by CodeRabbit
New Features
Tests