
feat: add mempool support #558

Draft
xdustinface wants to merge 1 commit into feat/transaction-status-events from feat/mempool-support

Conversation


@xdustinface xdustinface commented Mar 16, 2026

See the changes in ARCHITECTURE.md for a detailed description.

  • Add MempoolManager that activates after initial sync to monitor unconfirmed transactions via BIP37 bloom filters or local address matching. Includes peer relay management, dedup tracking, IS lock handling, and auto-rebuilding filters on address pool changes.
  • Extend WalletInterface with MempoolTransactionResult return type and watched_outpoints() for bloom filter construction. Wire mempool manager into SyncCoordinator and propagate confirmed_txids through BlockProcessed events for mempool eviction.
  • Add FFI bindings, dashd integration tests, and wallet unit tests.
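The BIP37 path described above sizes its bloom filter from the number of watched items (addresses plus outpoints) and a target false-positive rate. A minimal, self-contained sketch of the standard BIP37 sizing math; the function name and the 0.05% rate are illustrative, not this PR's API:

```rust
// Sketch of BIP37 bloom filter sizing (see BIP37 for the limits used here).
const LN2_SQUARED: f64 = std::f64::consts::LN_2 * std::f64::consts::LN_2; // ln(2)^2
const MAX_FILTER_SIZE: usize = 36_000; // bytes, per BIP37
const MAX_HASH_FUNCS: u32 = 50; // per BIP37

/// Compute (filter_size_bytes, n_hash_funcs) for `n` elements at
/// false-positive rate `p`, clamped to the BIP37 maxima.
fn bip37_filter_params(n: usize, p: f64) -> (usize, u32) {
    let n = n.max(1) as f64;
    // S = -1/ln(2)^2 * N * ln(P) / 8, rounded up to whole bytes.
    let size = (-1.0 / LN2_SQUARED * n * p.ln() / 8.0).ceil() as usize;
    let size = size.clamp(1, MAX_FILTER_SIZE);
    // k = S * 8 / N * ln(2)
    let hash_funcs = ((size * 8) as f64 / n * std::f64::consts::LN_2).round() as u32;
    (size, hash_funcs.clamp(1, MAX_HASH_FUNCS))
}

fn main() {
    // e.g. 100 watched addresses/outpoints at a 0.05% false-positive rate
    let (size, k) = bip37_filter_params(100, 0.0005);
    println!("filter: {size} bytes, {k} hash functions");
}
```

The clamps matter for the auto-rebuild behaviour: a very large address pool saturates at 36,000 bytes, which silently raises the effective false-positive rate.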

Based on:

Summary by CodeRabbit

  • New Features

    • Added mempool synchronization to monitor unconfirmed wallet transactions across the network.
    • Introduced configurable mempool relay strategies: BloomFilter and FetchAll modes.
    • Enhanced transaction callbacks to include confirmed transaction IDs for better tracking.
    • Added mempool progress monitoring with transaction counters and sync status.
  • Tests

    • Added comprehensive mempool synchronization and lifecycle integration tests.


coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Walkthrough

This PR introduces comprehensive mempool transaction synchronization functionality to the Dash SPV client, adding transaction pool monitoring, per-peer relay coordination, bloom filter-based filtering, InstantSend integration, wallet event emission, and FFI bindings for external language interoperability.

Changes

  • FFI Bindings & Documentation (dash-spv-ffi/FFI_API.md, dash-spv-ffi/include/dash_spv_ffi.h, dash-spv-ffi/src/callbacks.rs, dash-spv-ffi/src/types.rs, dash-spv-ffi/src/bin/ffi_cli.rs, dash-spv-ffi/tests/dashd_sync/callbacks.rs): Added FFIMempoolProgress struct, mempool destructor function, Mempool manager enum variant, extended FFISyncProgress with a mempool pointer, updated the OnBlockProcessedCallback signature to include confirmed transaction IDs, and updated the FFI API documentation.
  • Mempool Core Module (dash-spv/src/sync/mempool/mod.rs, dash-spv/src/sync/mempool/manager.rs, dash-spv/src/sync/mempool/sync_manager.rs, dash-spv/src/sync/mempool/progress.rs, dash-spv/src/sync/mempool/filter.rs): Introduced a complete mempool synchronization subsystem with transaction filtering, per-peer queue management, deduplication, bloom filter construction, InstantSend lock handling, progress tracking, and a SyncManager trait implementation with a comprehensive test suite.
  • Sync Infrastructure & Events (dash-spv/src/sync/mod.rs, dash-spv/src/sync/identifier.rs, dash-spv/src/sync/events.rs, dash-spv/src/sync/progress.rs, dash-spv/src/sync/sync_coordinator.rs, dash-spv/src/sync/sync_manager.rs): Extended the sync system with a Mempool manager identifier variant, added a confirmed_txids field to the BlockProcessed event, integrated mempool progress into the aggregate SyncProgress, registered the mempool manager in SyncCoordinator, and added a SyncManagerProgress::Mempool variant.
  • Network & Client Configuration (dash-spv/src/network/mod.rs, dash-spv/src/network/manager.rs, dash-spv/src/client/lifecycle.rs): Added peer-directed network methods for filter and mempool operations (send_filter_load, send_filter_clear, request_mempool), enhanced peer selection to require Bloom service flags, and wired conditional mempool manager instantiation in the client lifecycle based on the enable_mempool_tracking configuration.
  • Wallet Interface & Implementation (key-wallet-manager/src/wallet_interface.rs, key-wallet-manager/src/lib.rs, key-wallet-manager/src/wallet_manager/mod.rs, key-wallet-manager/src/wallet_manager/process_block.rs, key-wallet-manager/src/test_utils/wallet.rs): Introduced a MempoolTransactionResult struct with relevance and address tracking, added a watched_outpoints() method to the wallet interface, updated the process_mempool_transaction signature to accept an InstantSend flag and return detailed results, implemented outpoint-based spend detection, and added comprehensive balance/event emission tests.
  • Test Infrastructure & Utilities (dash-spv/src/test_utils/node.rs, dash-spv/src/types.rs, dash-spv/tests/dashd_sync/setup.rs, dash-spv/tests/dashd_sync/helpers.rs, dash-spv/tests/dashd_sync/main.rs, dash-spv/tests/dashd_sync/tests_mempool.rs, key-wallet-manager/src/wallet_manager/event_tests.rs): Extended test utilities with wallet-scoped RPC helpers, P2P network control methods (connect_to_node, disconnect_peer), dual-client concurrency helpers, mempool event waiters and assertions, and comprehensive mempool lifecycle integration tests covering transaction detection, confirmation, disconnection recovery, and multi-peer scenarios.
  • Documentation & Architecture (dash-spv/ARCHITECTURE.md): Updated the sync architecture documentation to reflect 8 parallel managers (adding MempoolManager), inter-manager event flow for BlockProcessed/InstantLockReceived notification, sync pipeline diagrams, the event type consumer/emitter table, and a new sync/mempool/ module overview with operational details.

Sequence Diagram

```mermaid
sequenceDiagram
    participant Peer
    participant MempoolManager
    participant WalletInterface
    participant SyncCoordinator
    participant WalletEventChannel

    activate MempoolManager
    MempoolManager->>Peer: request_mempool()
    Peer-->>MempoolManager: send Inv(txids)
    MempoolManager->>MempoolManager: filter_inv(bloom_filter)

    MempoolManager->>Peer: send getdata(eligible_txids)
    Peer-->>MempoolManager: send Tx(transaction)

    MempoolManager->>WalletInterface: process_mempool_transaction(tx, is_instant_send)
    activate WalletInterface
    WalletInterface->>WalletInterface: check_relevance(outpoints, addresses)
    WalletInterface-->>MempoolManager: MempoolTransactionResult{is_relevant, net_amount, new_addresses}
    deactivate WalletInterface

    MempoolManager->>MempoolManager: update_progress(counters)

    alt is_relevant && new_addresses
        MempoolManager->>MempoolManager: rebuild_filter(wallet_data)
        MempoolManager->>Peer: send FilterLoad(new_filter)
    end

    MempoolManager->>WalletEventChannel: emit TransactionReceived(Mempool)
    MempoolManager->>SyncCoordinator: emit SyncEvent::TransactionReceived
    deactivate MempoolManager
```
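The alt branch in the diagram (rebuild the filter when a relevant transaction introduces new wallet addresses) can be sketched as follows; all types here are simplified stand-ins for illustration, not the PR's actual structs:

```rust
// Simplified stand-in for the wallet's per-transaction verdict.
struct MempoolTransactionResult {
    is_relevant: bool,
    net_amount: i64,
    new_addresses: Vec<String>,
}

// Simplified stand-in for the manager's bookkeeping.
struct MempoolManager {
    tracked: usize,
    filter_needs_rebuild: bool,
}

impl MempoolManager {
    /// Returns true when the caller should emit TransactionReceived(Mempool).
    fn handle_tx_result(&mut self, result: &MempoolTransactionResult) -> bool {
        if !result.is_relevant {
            return false; // not ours: no tracking, no events
        }
        self.tracked += 1;
        // New wallet addresses invalidate the loaded bloom filter; schedule a
        // rebuild so the next FilterLoad covers them (the alt branch above).
        if !result.new_addresses.is_empty() {
            self.filter_needs_rebuild = true;
        }
        true
    }
}

fn main() {
    let mut manager = MempoolManager { tracked: 0, filter_needs_rebuild: false };
    let result = MempoolTransactionResult {
        is_relevant: true,
        net_amount: 54_000,
        new_addresses: vec!["yHypotheticalAddr".to_string()],
    };
    if manager.handle_tx_result(&result) {
        println!(
            "tracked={} rebuild={} net={}",
            manager.tracked, manager.filter_needs_rebuild, result.net_amount
        );
    }
}
```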

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Mempool waters swirl and flow,
Through bloom-filtered streams, transactions grow,
Peers relay, wallets awake with cheer,
Unconfirmed dreams find their place here! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The pull request title 'feat: add mempool support' is clear and directly reflects the main objective: introducing comprehensive mempool monitoring and management functionality to the SPV client.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



@xdustinface xdustinface force-pushed the feat/mempool-support branch from 36f64ee to 1106a91 on March 17, 2026 at 03:45

codecov bot commented Mar 17, 2026

Codecov Report

❌ Patch coverage is 97.33106% with 47 lines in your changes missing coverage. Please review.
✅ Project coverage is 67.36%. Comparing base (39d06fb) to head (1af96dd).

Files with missing lines Patch % Lines
dash-spv/src/sync/mempool/manager.rs 97.55% 25 Missing ⚠️
key-wallet-manager/src/wallet_manager/mod.rs 53.84% 6 Missing ⚠️
dash-spv-ffi/src/bin/ffi_cli.rs 0.00% 5 Missing ⚠️
...wallet-manager/src/wallet_manager/process_block.rs 96.00% 4 Missing ⚠️
dash-spv/src/sync/mempool/filter.rs 97.89% 2 Missing ⚠️
dash-spv-ffi/src/callbacks.rs 83.33% 1 Missing ⚠️
dash-spv/src/network/manager.rs 0.00% 1 Missing ⚠️
dash-spv/src/sync/progress.rs 92.85% 1 Missing ⚠️
dash-spv/src/sync/sync_manager.rs 0.00% 1 Missing ⚠️
dash-spv/src/types.rs 66.66% 1 Missing ⚠️
Additional details and impacted files
@@                        Coverage Diff                         @@
##           feat/transaction-status-events     #558      +/-   ##
==================================================================
+ Coverage                           66.43%   67.36%   +0.92%     
==================================================================
  Files                                 312      316       +4     
  Lines                               65131    66887    +1756     
==================================================================
+ Hits                                43270    45058    +1788     
+ Misses                              21861    21829      -32     
Flag Coverage Δ
core 75.13% <ø> (ø)
ffi 37.64% <82.85%> (+0.14%) ⬆️
rpc 28.28% <ø> (ø)
spv 83.63% <98.07%> (+2.52%) ⬆️
wallet 66.46% <91.15%> (+0.16%) ⬆️
Files with missing lines Coverage Δ
dash-spv-ffi/src/types.rs 92.85% <100.00%> (+0.70%) ⬆️
dash-spv/src/client/lifecycle.rs 64.81% <100.00%> (+1.35%) ⬆️
dash-spv/src/network/mod.rs 72.72% <100.00%> (+23.36%) ⬆️
dash-spv/src/sync/blocks/manager.rs 94.82% <100.00%> (+0.09%) ⬆️
dash-spv/src/sync/events.rs 62.50% <ø> (+5.00%) ⬆️
dash-spv/src/sync/identifier.rs 100.00% <100.00%> (ø)
dash-spv/src/sync/mempool/progress.rs 100.00% <100.00%> (ø)
dash-spv/src/sync/mempool/sync_manager.rs 100.00% <100.00%> (ø)
dash-spv/src/sync/sync_coordinator.rs 85.15% <100.00%> (+0.33%) ⬆️
key-wallet-manager/src/wallet_interface.rs 0.00% <ø> (ø)
... and 10 more

... and 5 files with indirect coverage changes

@github-actions github-actions bot added the merge-conflict label (The PR conflicts with the target branch.) on Mar 17, 2026
@github-actions

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

@xdustinface xdustinface force-pushed the feat/mempool-support branch from 1106a91 to c022a39 on March 19, 2026 at 03:38
@github-actions github-actions bot removed the merge-conflict label (The PR conflicts with the target branch.) on Mar 19, 2026
@xdustinface xdustinface changed the base branch from v0.42-dev to feat/transaction-status-events on March 19, 2026 at 03:38
@xdustinface
Collaborator Author

@CodeRabbit review


coderabbitai bot commented Mar 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


xdustinface commented Mar 19, 2026

@CodeRabbit review


coderabbitai bot commented Mar 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv-ffi/src/callbacks.rs (1)

157-170: ⚠️ Potential issue | 🟡 Minor

Document the confirmed_txids borrowed buffer lifetime in the callback doc comment.

The OnBlockProcessedCallback documentation describes the hash pointer lifetime but omits confirmed_txids. This parameter is also a borrowed pointer—it points to a temporary buffer that is destroyed immediately when the callback returns (see dispatch code at line 362). The caller must copy the array if it needs to retain the data beyond the callback. Update the doc comment to document this alongside the hash contract, specifying that confirmed_txids references confirmed_txid_count hashes and is only valid for the callback duration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv-ffi/src/callbacks.rs` around lines 157 - 170, Update the
OnBlockProcessedCallback doc comment to document that confirmed_txids is a
borrowed pointer (like hash) that references confirmed_txid_count consecutive
32-byte txid entries and is only valid for the lifetime of the callback; callers
must memcpy/duplicate the confirmed_txids buffer if they need to retain it after
the callback returns. Mention both confirmed_txids and confirmed_txid_count
along with the existing hash lifetime contract so the caller knows to copy both
the single hash and the array of confirmed txids before returning.
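The contract requested above could be illustrated with a hypothetical Rust-side callback type; the name, signature, and helper below are assumptions for illustration, not the crate's real OnBlockProcessedCallback:

```rust
/// Hypothetical stand-in for the FFI block-processed callback.
///
/// # Safety
/// `hash` points at one 32-byte block hash; `confirmed_txids` points at
/// `confirmed_txid_count` consecutive 32-byte txids. Both are borrowed buffers
/// valid only for the duration of the callback: copy them before returning.
pub type OnBlockProcessed =
    unsafe extern "C" fn(hash: *const u8, confirmed_txids: *const u8, confirmed_txid_count: usize);

/// Copy the borrowed txid array into owned storage before the buffer is freed.
pub unsafe fn copy_confirmed_txids(txids: *const u8, count: usize) -> Vec<[u8; 32]> {
    std::slice::from_raw_parts(txids, count * 32)
        .chunks_exact(32)
        .map(|chunk| {
            let mut id = [0u8; 32];
            id.copy_from_slice(chunk);
            id
        })
        .collect()
}

unsafe extern "C" fn on_block(_hash: *const u8, txids: *const u8, count: usize) {
    // Copy immediately; the dispatch code destroys the buffer after this returns.
    let owned = copy_confirmed_txids(txids, count);
    println!("{} confirmed txids copied", owned.len());
}

fn main() {
    let hash = [0u8; 32];
    let txids = [7u8; 64]; // two 32-byte txids back to back
    let cb: OnBlockProcessed = on_block;
    unsafe { cb(hash.as_ptr(), txids.as_ptr(), 2) };
}
```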
🧹 Nitpick comments (6)
dash-spv/src/test_utils/node.rs (1)

377-382: Consider using exact address matching for peer verification.

The current IP-prefix check (starts_with(&addr.ip().to_string())) would match any peer on the same IP regardless of port. In test scenarios with multiple nodes on localhost (different ports), this could incorrectly report success when connecting to a different node than intended.

♻️ Proposed fix for exact address matching
         for _ in 0..30 {
             let peers = client.get_peer_info().expect("failed to get peer info");
-            if peers.iter().any(|p| p.addr.to_string().starts_with(&addr.ip().to_string())) {
+            if peers.iter().any(|p| p.addr == addr.to_string()) {
                 tracing::info!("Connected to node {}", addr);
                 return;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/test_utils/node.rs` around lines 377 - 382, The peer check is
using an IP-prefix match which can falsely match different ports; update the
loop that calls client.get_peer_info() and inspects peers (the variable peers
and each p.addr) to perform an exact address comparison including port — e.g.,
compare p.addr.to_string() == addr.to_string() or compare the socket-address
types directly rather than using starts_with(&addr.ip().to_string()); keep the
existing logging/tracing and return behavior unchanged.
dash-spv/src/sync/mempool/mod.rs (1)

9-11: Consider making bloom false-positive rate configurable in a follow-up.

The TODO is on point; exposing this via config (or privacy profiles) would make mempool behavior easier to tune per deployment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/mempool/mod.rs` around lines 9 - 11, The constant
BLOOM_FALSE_POSITIVE_RATE is hard-coded; make it configurable by wiring a config
or privacy profile into the mempool module, e.g., replace the const
BLOOM_FALSE_POSITIVE_RATE with a value loaded from a config struct or settings
object passed into the mempool (or a MempoolConfig/PrivacyLevel enum) and use
that value wherever BLOOM_FALSE_POSITIVE_RATE is referenced (update
constructors/functions in mod.rs that create the bloom filter to accept the
configured rate); ensure sensible defaults (0.0005) remain if config is absent
and validate the rate is within (0,1).
dash-spv/src/sync/progress.rs (1)

37-39: Clarify aggregate semantics for mempool progress.

mempool is now stored, displayed, and queryable, but aggregate methods (state(), is_synced(), percentage()) still ignore it. If this is intentional (non-blocking monitor), add explicit docs; otherwise wire it into aggregation for consistency.

Also applies to: 161-165, 221-227, 254-256

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/progress.rs` around lines 37 - 39, The mempool
Option<MempoolProgress> is currently collected but not considered by the
aggregate methods; if this is intentional, add explicit docs stating that
mempool is a non-blocking/auxiliary monitor and is deliberately excluded from
aggregation. Update the doc comment on the mempool field (MempoolProgress) and
the docstrings for state(), is_synced(), and percentage() to say they ignore
mempool, and mirror the same clarification wherever those aggregates are
re-exported or called (the other referenced locations that read or display
mempool). If instead you want mempool included, modify state() to combine
mempool.state() with the existing aggregate, change is_synced() to require
mempool.is_synced() when Some(...), and incorporate mempool.percentage() into
percentage() (handle None as neutral), but choose one approach and make the docs
reflect that change.
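The "include mempool when present, treat None as neutral" option could look like the following sketch, with simplified stand-in types rather than the crate's actual SyncProgress:

```rust
// Simplified stand-ins for illustration only.
struct MempoolProgress {
    synced: bool,
}

impl MempoolProgress {
    fn is_synced(&self) -> bool {
        self.synced
    }
}

struct SyncProgress {
    headers_synced: bool,
    filters_synced: bool,
    /// None when mempool tracking is disabled.
    mempool: Option<MempoolProgress>,
}

impl SyncProgress {
    /// Mempool is a non-blocking monitor: None (disabled) is neutral, but when
    /// present it must also report synced for the aggregate to be synced.
    fn is_synced(&self) -> bool {
        self.headers_synced
            && self.filters_synced
            && self.mempool.as_ref().map_or(true, |m| m.is_synced())
    }
}

fn main() {
    let progress = SyncProgress {
        headers_synced: true,
        filters_synced: true,
        mempool: Some(MempoolProgress { synced: true }),
    };
    println!("synced: {}", progress.is_synced());
}
```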
dash-spv/src/sync/mempool/progress.rs (1)

144-154: Consider strengthening the timing assertion.

The test relies on a 1ms sleep to ensure time difference, but the assertion p.last_activity() >= before would pass even if last_activity wasn't updated (since the new Instant::now() would always be >= before). A stricter check would be p.last_activity() > before, though this assumes the sleep is long enough to guarantee distinct Instant values.

🔧 Optional: Use stricter assertion
-        assert!(p.last_activity() >= before);
+        assert!(p.last_activity() > before, "last_activity should be updated after mutation");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/mempool/progress.rs` around lines 144 - 154, The test
test_last_activity_updated_on_mutation currently asserts p.last_activity() >=
before which can pass even if last_activity wasn't updated; change the assertion
to p.last_activity() > before and increase the sleep to a slightly larger
duration (e.g., from 1ms to 2–5ms) to reliably ensure distinct Instant values
when calling MempoolProgress::default(), then p.add_received(1) so
last_activity() is actually newer than before.
dash-spv/tests/dashd_sync/tests_mempool.rs (1)

21-49: Consider adding #[ignore] attribute for network-dependent tests.

Per coding guidelines: "Mark network-dependent or long-running tests with #[ignore] attribute; run with -- --ignored". These integration tests require a running dashd node and have a 30-second timeout, making them candidates for the #[ignore] attribute.

However, the early-return pattern when TestContext::new() returns None or supports_mining is false provides a reasonable fallback, so this is a soft recommendation.

🔧 Optional: Add #[ignore] for CI-friendly behavior
+#[ignore]
 #[tokio::test]
 async fn test_mempool_detects_incoming_tx() {

Apply similarly to other tests in this file.

As per coding guidelines: "Mark network-dependent or long-running tests with #[ignore] attribute; run with -- --ignored"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/tests/dashd_sync/tests_mempool.rs` around lines 21 - 49, The test
test_mempool_detects_incoming_tx is network-dependent and should be marked with
#[ignore] so it only runs when explicitly requested; add the #[ignore] attribute
above the async fn test_mempool_detects_incoming_tx() declaration and apply the
same attribute to other integration tests in this module (e.g., other top-level
#[tokio::test] functions) that require a running dashd or are long-running,
keeping the existing early-return guards intact.
dash-spv/src/sync/mempool/manager.rs (1)

532-1663: Add two regression tests for uncovered edge paths.

Coverage is strong, but adding tests for (1) empty-wallet bloom activation followed by later address/outpoint appearance, and (2) exceeding max_transactions from already in-flight responses would lock down the two high-risk paths above.

As per coding guidelines "Write unit tests for new functionality in Rust code".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/mempool/manager.rs` around lines 532 - 1663, Add two
regression tests inside the existing tests module: (1) a
Bloom-empty-then-later-change test that uses create_bloom_manager() to activate
a peer with an empty wallet, then mutates manager.wallet by adding addresses
and/or outpoints and calls check_filter_staleness(&requests) to assert
manager.filter_loaded becomes true, filter_address_count/filter_outpoint_count
updated, and that a FilterLoad/ MemPool message was sent; (2) an
in-flight-excess-response test that fills manager.pending_requests to
MAX_IN_FLIGHT, enqueues additional txids on an activated peer via manager.peers
(Some(VecDeque::...)), then simulates many transaction responses (add
transactions into manager.mempool_state or remove pending entries) and calls
send_queued(&requests) to assert pending_requests never exceeds MAX_IN_FLIGHT
and peer queues are drained/unchanged as expected; reference MempoolManager
methods activate_peer, check_filter_staleness, send_queued and fields
pending_requests, peers, filter_loaded, filter_address_count,
filter_outpoint_count, and MAX_IN_FLIGHT when locating code to extend.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/ARCHITECTURE.md`:
- Around line 1124-1131: The fenced code block showing the sync/mempool
directory listing (the block containing "sync/mempool/" and its subitems) lacks
a language tag; update that fence to include an explicit language (e.g., text)
and do the same for the other similar fenced block referenced around lines
1144-1169 so markdownlint stops flagging them. Locate the fences in
ARCHITECTURE.md that wrap the directory listings (the blocks containing
"sync/mempool/" and the second listing) and prepend the opening ``` with an
appropriate language identifier such as ```text.
- Around line 1194-1200: The current staleness check only compares
address/outpoint counts and misses content changes; update the manager to track
a deterministic fingerprint/revision (e.g., a stable hash) of the actual watched
sets whenever the set changes and include that fingerprint in the filter
snapshot used by check_filter_staleness(); change handle_tx(), the
BlockProcessed path that passes new_addresses, and any code that mutates the
watched sets to update/set needs_filter_rebuild and the new fingerprint, and
make rebuild_filter() refresh the snapshot fingerprint after installing the new
filter so subsequent check_filter_staleness() compares fingerprints (not just
counts) to decide rebuilds.

In `@dash-spv/src/sync/mempool/bloom.rs`:
- Around line 14-23: The function address_payload_bytes currently swallows
unsupported Address payloads by returning None, which silently produces partial
BloomFilter coverage; change it to fail fast by returning a Result and propagate
errors so callers can choose FetchAll or abort: update address_payload_bytes to
return Result<Vec<u8>, AddressPayloadError> (or similar), replace the current
'_' arm to return an Err containing the address and a clear variant (e.g.,
UnsupportedAddressType), and adjust callers of address_payload_bytes (and any
code constructing BloomFilter) to handle the Err by falling back to FetchAll or
returning an error upstream; reference symbols: address_payload_bytes, Address,
Payload, and the BloomFilter construction call-sites that consume this function.
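The fail-fast variant suggested for bloom.rs can be sketched with simplified stand-ins for the address types; the Payload and error enums below are hypothetical, not the dashcore API:

```rust
// Simplified stand-in for an address payload (the real type lives in dashcore).
#[derive(Debug)]
enum Payload {
    PubkeyHash([u8; 20]),
    ScriptHash([u8; 20]),
    Unsupported(String),
}

#[derive(Debug)]
enum AddressPayloadError {
    UnsupportedAddressType(String),
}

/// Previously the unsupported arm returned None, silently shrinking bloom
/// coverage; returning Err lets callers fall back to FetchAll or abort.
fn address_payload_bytes(payload: &Payload) -> Result<Vec<u8>, AddressPayloadError> {
    match payload {
        Payload::PubkeyHash(hash) | Payload::ScriptHash(hash) => Ok(hash.to_vec()),
        Payload::Unsupported(kind) => {
            Err(AddressPayloadError::UnsupportedAddressType(kind.clone()))
        }
    }
}

fn main() {
    match address_payload_bytes(&Payload::Unsupported("witness_v1".into())) {
        Ok(bytes) => println!("insert {} bytes into filter", bytes.len()),
        Err(e) => println!("fall back to FetchAll: {e:?}"),
    }
}
```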

In `@dash-spv/src/sync/mempool/manager.rs`:
- Around line 146-149: When load_bloom_filter sees addresses.is_empty() &&
outpoints.is_empty() it currently returns early without updating filter state,
which lets check_filter_staleness ignore rebuilds because filter_loaded stays
false; change load_bloom_filter so that the empty-case still updates the bloom
filter lifecycle (set filter_loaded = true, update last_loaded/last_rebuilt
timestamps, and store/assign an empty/default Bloom filter state) so future
transitions out of the empty state trigger normal rebuilds. Apply the same
behavior to the other early-returns referenced (the similar checks in
load_bloom_filter and any early-exits around the rebuild logic) and ensure
check_filter_staleness relies on those lifecycle fields (filter_loaded,
last_loaded/last_rebuilt) to decide staleness for functions load_bloom_filter
and check_filter_staleness.
- Around line 329-343: The code currently always inserts relevant transactions
into the mempool (via UnconfirmedTransaction::new and state.add_transaction
inside the block that locks self.mempool_state), which allows tracked entries to
exceed the configured max_transactions when concurrent responses finish; modify
the insertion path to check the configured capacity before adding: read the cap
(e.g., self.max_transactions or equivalent), acquire the write lock on
self.mempool_state, and only call state.add_transaction(unconfirmed_tx) and
update self.progress.set_tracked(...) if state.transactions.len() is below the
cap (or evict oldest/lowest-priority entries to make room if your policy
requires), otherwise drop/skip the insertion and ensure progress/tracking
reflects the capped size. Ensure the same guard is applied where
UnconfirmedTransaction::new is constructed and before state.add_transaction so
racey completion cannot exceed the cap.
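The capacity guard suggested for the manager's insertion path might look like the following sketch; MempoolState and its fields are simplified stand-ins, not the PR's actual types:

```rust
use std::collections::HashMap;

// Simplified stand-in for the tracked-transaction state.
struct MempoolState {
    transactions: HashMap<[u8; 32], Vec<u8>>, // txid -> raw tx bytes
    max_transactions: usize,
}

impl MempoolState {
    /// Insert only while under the configured cap; returns false when dropped.
    /// Checking inside the same critical section as the insert means racey
    /// in-flight responses cannot push the tracked set past the cap.
    fn add_transaction(&mut self, txid: [u8; 32], raw: Vec<u8>) -> bool {
        if self.transactions.len() >= self.max_transactions {
            return false;
        }
        self.transactions.insert(txid, raw);
        true
    }
}

fn main() {
    let mut state = MempoolState { transactions: HashMap::new(), max_transactions: 2 };
    state.add_transaction([1u8; 32], vec![]);
    state.add_transaction([2u8; 32], vec![]);
    let accepted = state.add_transaction([3u8; 32], vec![]);
    println!("third accepted: {accepted}, tracked: {}", state.transactions.len());
}
```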

In `@dash-spv/src/sync/sync_coordinator.rs`:
- Around line 81-82: SyncCoordinator::new currently seeds initial_progress from
the older manager set but ignores Managers.mempool, causing inconsistent first
reads; update SyncCoordinator::new to include the mempool's current progress
when Managers.mempool is Some by extracting the mempool manager's
snapshot/progress and merging it into initial_progress (i.e., call the mempool
manager's current snapshot/progress accessor and include those fields in the
ProgressSnapshot used to seed initial_progress).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 628b1a04-8991-4882-91ad-09805746e821

📥 Commits

Reviewing files that changed from the base of the PR and between 92cf7db and c022a39.

📒 Files selected for processing (34)
  • dash-spv-ffi/FFI_API.md
  • dash-spv-ffi/include/dash_spv_ffi.h
  • dash-spv-ffi/src/bin/ffi_cli.rs
  • dash-spv-ffi/src/callbacks.rs
  • dash-spv-ffi/src/types.rs
  • dash-spv-ffi/tests/dashd_sync/callbacks.rs
  • dash-spv/ARCHITECTURE.md
  • dash-spv/src/client/lifecycle.rs
  • dash-spv/src/network/manager.rs
  • dash-spv/src/network/mod.rs
  • dash-spv/src/sync/blocks/manager.rs
  • dash-spv/src/sync/events.rs
  • dash-spv/src/sync/identifier.rs
  • dash-spv/src/sync/mempool/bloom.rs
  • dash-spv/src/sync/mempool/manager.rs
  • dash-spv/src/sync/mempool/mod.rs
  • dash-spv/src/sync/mempool/progress.rs
  • dash-spv/src/sync/mempool/sync_manager.rs
  • dash-spv/src/sync/mod.rs
  • dash-spv/src/sync/progress.rs
  • dash-spv/src/sync/sync_coordinator.rs
  • dash-spv/src/sync/sync_manager.rs
  • dash-spv/src/test_utils/node.rs
  • dash-spv/src/types.rs
  • dash-spv/tests/dashd_sync/helpers.rs
  • dash-spv/tests/dashd_sync/main.rs
  • dash-spv/tests/dashd_sync/setup.rs
  • dash-spv/tests/dashd_sync/tests_mempool.rs
  • key-wallet-manager/src/lib.rs
  • key-wallet-manager/src/test_utils/wallet.rs
  • key-wallet-manager/src/wallet_interface.rs
  • key-wallet-manager/src/wallet_manager/event_tests.rs
  • key-wallet-manager/src/wallet_manager/mod.rs
  • key-wallet-manager/src/wallet_manager/process_block.rs

See the changes in `ARCHITECTURE.md` for a detailed description.

- Add `MempoolManager` that activates after initial sync to monitor unconfirmed transactions via BIP37 bloom filters or local address matching. Includes peer relay management, dedup tracking, IS lock handling, and auto-rebuilding filters on address pool changes.
- Extend `WalletInterface` with `MempoolTransactionResult` return type and `watched_outpoints()` for bloom filter construction. Wire mempool manager into `SyncCoordinator` and propagate `confirmed_txids` through `BlockProcessed` events for mempool eviction.
- Add FFI bindings, dashd integration tests, and wallet unit tests.
@xdustinface xdustinface force-pushed the feat/mempool-support branch from c022a39 to 1af96dd Compare March 19, 2026 07:57
@github-actions github-actions bot added the merge-conflict The PR conflicts with the target branch. label Mar 19, 2026
@github-actions

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

@xdustinface xdustinface force-pushed the feat/transaction-status-events branch from 92cf7db to 39d06fb Compare March 19, 2026 07:58
@github-actions github-actions bot removed the merge-conflict The PR conflicts with the target branch. label Mar 19, 2026
@xdustinface xdustinface reopened this Mar 19, 2026
@github-actions github-actions bot added the ready-for-review CodeRabbit has approved this PR label Mar 19, 2026
@xdustinface
Collaborator Author

@CodeRabbit review

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
dash-spv/src/client/lifecycle.rs (1)

164-164: Address TODO: Populate monitored addresses for bloom filter construction.

The empty HashSet passed here means the mempool filter won't match any wallet addresses until it's rebuilt. This should be populated from the wallet's monitored addresses to enable bloom filter matching on initial startup.

Would you like me to help implement address population from the wallet, or should this be tracked as a follow-up issue?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/client/lifecycle.rs` at line 164, The empty HashSet passed to
the mempool bloom filter (currently HashSet::new()) must be replaced with the
wallet's monitored addresses: locate the bloom/mempool filter construction in
lifecycle.rs where HashSet::new() is used, call the wallet API that returns
monitored addresses (e.g., wallet.monitored_addresses() or
wallet.get_monitored_addresses()), convert that collection into a HashSet of the
expected address type (e.g.,
monitored_addresses.iter().cloned().collect::<HashSet<_>>()), and use that
HashSet in place of HashSet::new(); also handle the wallet method returning an
Option or Result by defaulting to an empty set on error so startup remains
robust.
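A sketch of the suggested wiring, under the stated assumptions: the `Wallet` type and its `monitored_addresses()` accessor here are hypothetical placeholders for whatever the real wallet interface exposes.

```rust
use std::collections::HashSet;

/// Hypothetical wallet handle; `monitored_addresses()` stands in for
/// whichever accessor the real wallet exposes.
struct Wallet {
    addrs: Vec<String>,
}

impl Wallet {
    fn monitored_addresses(&self) -> Result<Vec<String>, String> {
        Ok(self.addrs.clone())
    }
}

/// Build the initial filter set from the wallet, defaulting to an empty
/// set on error so startup stays robust.
fn initial_filter_addresses(wallet: &Wallet) -> HashSet<String> {
    wallet
        .monitored_addresses()
        .unwrap_or_default()
        .into_iter()
        .collect()
}
```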
dash-spv-ffi/src/callbacks.rs (1)

157-171: Consider updating doc comment to document new parameters.

The OnBlockProcessedCallback doc comment only mentions the hash pointer lifetime. The new confirmed_txids pointer has the same borrowed lifetime semantics.

📝 Suggested doc comment update
 /// Callback for SyncEvent::BlockProcessed
 ///
-/// The `hash` pointer is borrowed and only valid for the duration of the
-/// callback. Callers must memcpy/duplicate it to retain the value after
-/// the callback returns.
+/// The `hash` and `confirmed_txids` pointers are borrowed and only valid for
+/// the duration of the callback. Callers must memcpy/duplicate them to retain
+/// the values after the callback returns.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv-ffi/src/callbacks.rs` around lines 157 - 171, Update the doc comment
for OnBlockProcessedCallback to document that the confirmed_txids pointer (and
confirmed_txid_count) is borrowed and only valid for the duration of the
callback, just like the hash pointer; callers must memcpy/duplicate each 32-byte
txid if they need to retain them after the callback returns. Reference the
OnBlockProcessedCallback type, the confirmed_txids parameter, and
confirmed_txid_count so readers know which symbols the lifetime rule applies to.
Ensure the wording mirrors the existing hash lifetime note and mentions that
confirmed_txids elements are 32-byte arrays.
dash-spv/src/sync/mempool/sync_manager.rs (2)

63-81: Minor: Simplify control flow in SyncComplete handler.

The explicit return Ok(vec![]) at line 73 and the final Ok(vec![]) at line 80 create slightly redundant code paths. Both branches return the same value.

♻️ Optional simplification
             SyncEvent::SyncComplete {
                 ..
             } => {
                 if self.state() != SyncState::Synced {
                     self.activate_all_peers(requests).await?;
                     let has_activated = self.peers.values().any(|v| v.is_some());
                     if has_activated {
                         self.set_state(SyncState::Synced);
                         tracing::info!("Mempool manager activated on all peers");
-                        return Ok(vec![]);
                     } else {
                         tracing::warn!(
                             "Sync complete but no peers available for mempool activation"
                         );
                     }
                 }
                 Ok(vec![])
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/mempool/sync_manager.rs` around lines 63 - 81, The
SyncComplete match arm in the SyncEvent handler redundantly returns Ok(vec![])
twice; simplify by removing the inner explicit return and let the function
return the single Ok(vec![]) at the end of the arm. Locate the
SyncEvent::SyncComplete branch in sync_manager.rs (references: self.state(),
activate_all_peers(requests).await, self.peers, has_activated,
self.set_state(SyncState::Synced)) and eliminate the early "return Ok(vec![])"
so both branches share the final "Ok(vec![])" response.

25-36: Consider adding state validation in start_sync.

The default SyncManager::start_sync implementation calls ensure_not_started(self.state(), self.identifier())? before proceeding. This implementation skips that validation, which could allow start_sync to be called in unexpected states. While this may be intentional for mempool's recovery semantics (re-activation after disconnect), consider documenting why the validation is omitted or adding a more permissive state check.

💡 Optional: Add state validation or documentation
 async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+    // Note: Unlike other managers, mempool allows start_sync from any state
+    // to support re-activation after disconnect recovery.
     // After a full disconnect, re-activate mempool on all connected peers
     self.activate_all_peers(requests).await?;

Or add explicit state validation if needed:

 async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+    // Only proceed if we're waiting for connections (after disconnect)
+    if self.state() == SyncState::Synced {
+        return Ok(vec![]);
+    }
     // After a full disconnect, re-activate mempool on all connected peers
     self.activate_all_peers(requests).await?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/sync/mempool/sync_manager.rs` around lines 25 - 36, The
start_sync implementation omits the usual validation
ensure_not_started(self.state(), self.identifier())? from
SyncManager::start_sync; either call that helper at the top of start_sync to
prevent starting from invalid states (e.g., insert
ensure_not_started(self.state(), self.identifier())? before activate_all_peers),
or add a short doc comment on start_sync explaining why the validation is
intentionally skipped for mempool recovery semantics; keep references to
start_sync, ensure_not_started, self.state(), self.identifier(), and
activate_all_peers in the change so reviewers can easily locate the adjustment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/sync/mempool/filter.rs`:
- Around line 39-48: The code silently truncates element_count by casting usize
to u32; compute the sum using addresses.len().checked_add(outpoints.len()) and
if that returns None map to SyncError::Validation, then convert to u32 with
u32::try_from(...) and map any Err to SyncError::Validation; use that safe u32
value when calling BloomFilter::new (and ensure the early-zero branch still uses
the same validated path for consistency with BloomFilter::new and
FilterLoad::from_bloom_filter).
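The overflow-safe count described above can be sketched as follows; `SyncError` here is a stand-in for the crate's real error type:

```rust
use std::collections::HashSet;

/// Stand-in for the crate's `SyncError::Validation` variant.
#[derive(Debug, PartialEq)]
enum SyncError {
    Validation,
}

/// Element count for `BloomFilter::new` without silent truncation: an
/// overflowing sum, or a total above `u32::MAX`, becomes a validation
/// error instead of a wrapped or truncated `u32`.
fn element_count(
    addresses: &HashSet<Vec<u8>>,
    outpoints: &HashSet<Vec<u8>>,
) -> Result<u32, SyncError> {
    let total = addresses
        .len()
        .checked_add(outpoints.len())
        .ok_or(SyncError::Validation)?;
    u32::try_from(total).map_err(|_| SyncError::Validation)
}
```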

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f3faf2ac-033e-4df0-8f07-800784a6af89

📥 Commits

Reviewing files that changed from the base of the PR and between c022a39 and 1af96dd.

📒 Files selected for processing (34)
  • dash-spv-ffi/FFI_API.md
  • dash-spv-ffi/include/dash_spv_ffi.h
  • dash-spv-ffi/src/bin/ffi_cli.rs
  • dash-spv-ffi/src/callbacks.rs
  • dash-spv-ffi/src/types.rs
  • dash-spv-ffi/tests/dashd_sync/callbacks.rs
  • dash-spv/ARCHITECTURE.md
  • dash-spv/src/client/lifecycle.rs
  • dash-spv/src/network/manager.rs
  • dash-spv/src/network/mod.rs
  • dash-spv/src/sync/blocks/manager.rs
  • dash-spv/src/sync/events.rs
  • dash-spv/src/sync/identifier.rs
  • dash-spv/src/sync/mempool/filter.rs
  • dash-spv/src/sync/mempool/manager.rs
  • dash-spv/src/sync/mempool/mod.rs
  • dash-spv/src/sync/mempool/progress.rs
  • dash-spv/src/sync/mempool/sync_manager.rs
  • dash-spv/src/sync/mod.rs
  • dash-spv/src/sync/progress.rs
  • dash-spv/src/sync/sync_coordinator.rs
  • dash-spv/src/sync/sync_manager.rs
  • dash-spv/src/test_utils/node.rs
  • dash-spv/src/types.rs
  • dash-spv/tests/dashd_sync/helpers.rs
  • dash-spv/tests/dashd_sync/main.rs
  • dash-spv/tests/dashd_sync/setup.rs
  • dash-spv/tests/dashd_sync/tests_mempool.rs
  • key-wallet-manager/src/lib.rs
  • key-wallet-manager/src/test_utils/wallet.rs
  • key-wallet-manager/src/wallet_interface.rs
  • key-wallet-manager/src/wallet_manager/event_tests.rs
  • key-wallet-manager/src/wallet_manager/mod.rs
  • key-wallet-manager/src/wallet_manager/process_block.rs
✅ Files skipped from review due to trivial changes (5)
  • dash-spv/tests/dashd_sync/main.rs
  • dash-spv/src/sync/mod.rs
  • dash-spv/src/sync/mempool/mod.rs
  • dash-spv/tests/dashd_sync/setup.rs
  • dash-spv/tests/dashd_sync/tests_mempool.rs
🚧 Files skipped from review as they are similar to previous changes (15)
  • dash-spv/src/sync/identifier.rs
  • dash-spv/src/types.rs
  • key-wallet-manager/src/lib.rs
  • dash-spv/src/sync/blocks/manager.rs
  • dash-spv/src/network/manager.rs
  • dash-spv-ffi/FFI_API.md
  • dash-spv/src/sync/sync_coordinator.rs
  • dash-spv/src/sync/progress.rs
  • dash-spv/ARCHITECTURE.md
  • key-wallet-manager/src/wallet_manager/mod.rs
  • key-wallet-manager/src/wallet_interface.rs
  • dash-spv/src/sync/mempool/progress.rs
  • dash-spv/src/test_utils/node.rs
  • dash-spv/tests/dashd_sync/helpers.rs
  • dash-spv/src/sync/mempool/manager.rs

@github-actions github-actions bot removed the ready-for-review CodeRabbit has approved this PR label Mar 19, 2026
@github-actions github-actions bot added the ready-for-review CodeRabbit has approved this PR label Mar 19, 2026
@xdustinface xdustinface removed the ready-for-review CodeRabbit has approved this PR label Mar 19, 2026