Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough

Adds a full blob submission path: new submit package (signer, tx builders, DirectSubmitter, gRPC app client), JSON-RPC + Service wiring, config/validation, protobufs, CLI/main integration, unit + e2e tests, CI job, and docs updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as JSON-RPC Client
    participant Handler as JSON-RPC Handler
    participant Service as API Service
    participant Submitter as DirectSubmitter
    participant AppClient as GRPC App Client
    participant Signer as Signer
    participant CelestiaApp as Celestia App

    Client->>Handler: Submit(ctx, blobs, options)
    Handler->>Service: BlobSubmit(ctx, blobs, options)
    Service->>Service: DecodeRequest(blobs, options)
    Service->>Submitter: Submit(ctx, Request)
    Submitter->>AppClient: AccountInfo(ctx, address)
    AppClient->>CelestiaApp: AccountInfo RPC
    CelestiaApp-->>AppClient: Account Info
    AppClient-->>Submitter: AccountInfo
    Submitter->>Submitter: buildBlobTx(req, account)
    Submitter->>Signer: Sign(signDoc)
    Signer-->>Submitter: signature
    Submitter->>AppClient: BroadcastTx(ctx, txBytes)
    AppClient->>CelestiaApp: BroadcastTx RPC
    CelestiaApp-->>AppClient: Broadcast response
    AppClient-->>Submitter: TxStatus
    loop poll for confirmation
        Submitter->>AppClient: GetTx(ctx, txhash)
        AppClient->>CelestiaApp: GetTx RPC
        CelestiaApp-->>AppClient: TxResponse
        AppClient-->>Submitter: TxStatus
    end
    Submitter-->>Service: Result{Height}
    Service->>Handler: MarshalResult(Result)
    Handler-->>Client: JSON-RPC Response
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 3
🧹 Nitpick comments (15)
proto/cosmos/auth/v1beta1/query.proto (1)
9-11: Buf lint warning: `Service` suffix convention vs. Cosmos SDK convention.

Buf flags that the service name `Query` should be `QueryService` per the `SERVICE_SUFFIX` rule. However, this follows the standard Cosmos SDK proto convention, where query services are named `Query` without a suffix. Since this PR integrates with the Cosmos SDK and client code expects these standard service names, keeping `Query` is the correct choice for interoperability.

Consider suppressing this specific lint rule for Cosmos SDK proto files in your `buf.yaml` if it becomes noisy:

```yaml
lint:
  except:
    - SERVICE_SUFFIX
```

Alternatively, scope the exception to `cosmos/**` paths if buf supports path-based rule configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@proto/cosmos/auth/v1beta1/query.proto` around lines 9-11: the proto service is named Query (`service Query { ... }`), which intentionally follows Cosmos SDK conventions but triggers Buf's SERVICE_SUFFIX lint; suppress this specific lint rule for Cosmos SDK files by adding an exception for SERVICE_SUFFIX in your buf.yaml (or scope the exception to cosmos/** paths) so the Query service name remains unchanged for interoperability with Cosmos SDK clients.

proto/cosmos/tx/v1beta1/service.proto (1)
5-5: Align `GetTxResponse` with the canonical Cosmos shape.

`GetTxResponse` currently omits the decoded `tx` at field 1 and only returns `tx_response` (field 2). Adding field 1 improves interoperability and gives callers the full tx payload when needed.

Suggested patch:

```diff
 import "cosmos/base/abci/v1beta1/abci.proto";
+import "cosmos/tx/v1beta1/tx.proto";
@@
 message GetTxResponse {
+  Tx tx = 1;
   cosmos.base.abci.v1beta1.TxResponse tx_response = 2;
 }
```

Also applies to: 34-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@proto/cosmos/tx/v1beta1/service.proto` at line 5: GetTxResponse is missing the decoded tx field; update the GetTxResponse message to include a field named "tx" (the decoded Tx type) as field number 1 and ensure the existing "tx_response" remains as field 2 to match the canonical Cosmos shape; locate the GetTxResponse message definition, add the appropriate import/reference for the Tx message type if missing, then re-generate the protobuf build artifacts so callers receive both tx (field 1) and tx_response (field 2).

config/load_test.go (3)
113-137: Missing `t.Parallel()` call.

`TestSubmissionRequiresAppGRPCHost` is missing `t.Parallel()`, unlike other tests in this file.

♻️ Proposed fix

```diff
 func TestSubmissionRequiresAppGRPCHost(t *testing.T) {
+	t.Parallel()
+
 	dir := t.TempDir()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/load_test.go` around lines 113 - 137, Add t.Parallel() at the start of the TestSubmissionRequiresAppGRPCHost test to run it in parallel with other tests; locate the TestSubmissionRequiresAppGRPCHost function in config/load_test.go and insert a call to t.Parallel() immediately after the function entry (before creating temp dir or writing the config) so it matches the other tests that run concurrently and avoids serial execution issues when calling Load.
139-163: Missing `t.Parallel()` call.

`TestSubmissionRequiresChainID` is also missing `t.Parallel()`.

♻️ Proposed fix

```diff
 func TestSubmissionRequiresChainID(t *testing.T) {
+	t.Parallel()
+
 	dir := t.TempDir()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/load_test.go` around lines 139 - 163, The test function TestSubmissionRequiresChainID is missing a call to t.Parallel(); add t.Parallel() at the start of the test body (e.g., immediately inside TestSubmissionRequiresChainID) to enable parallel execution, leaving the rest of the test (temp dir creation, writing config, calling Load and asserting the error contains "submission.chain_id is required") unchanged.
113-192: Consider consolidating validation tests into a table-driven test.

The three validation tests (`TestSubmissionRequiresAppGRPCHost`, `TestSubmissionRequiresChainID`, `TestSubmissionRequiresSignerKey`) follow an identical pattern: write config, load, expect a specific error substring. This is a good candidate for table-driven tests per the coding guidelines.

💡 Table-driven refactor example

```go
func TestSubmissionValidation(t *testing.T) {
	t.Parallel()

	tests := []struct {
		name    string
		config  string
		wantErr string
	}{
		{
			name: "requires app_grpc_addr",
			config: `
submission:
  enabled: true
  app_grpc_addr: ""
  chain_id: "mychain"
log:
  level: "info"
  format: "json"
`,
			wantErr: "submission.app_grpc_addr is required",
		},
		// ... additional cases
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel()

			dir := t.TempDir()
			path := filepath.Join(dir, "config.yaml")
			if err := os.WriteFile(path, []byte(tt.config), 0o644); err != nil {
				t.Fatalf("WriteFile: %v", err)
			}
			_, err := Load(path)
			if err == nil {
				t.Fatal("expected validation error, got nil")
			}
			if !strings.Contains(err.Error(), tt.wantErr) {
				t.Fatalf("error = %v, want containing %q", err, tt.wantErr)
			}
		})
	}
}
```

As per coding guidelines: "Use table-driven tests pattern for test cases."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/load_test.go` around lines 113-192: consolidate the three nearly identical tests (TestSubmissionRequiresAppGRPCHost, TestSubmissionRequiresChainID, TestSubmissionRequiresSignerKey) into a single table-driven test (e.g., TestSubmissionValidation) that runs subtests with t.Run and t.Parallel; each table entry should include a name, the YAML config string, and the expected error substring (wantErr), and the body should create a temp dir, write the config, call Load(path), and assert the error is non-nil and contains tt.wantErr. Remove or replace the original three tests and keep the same assertions/messages, driven from the table entries.

pkg/fetch/blobtx_submit_test.go (1)
60-64: Minor: duplicate helper function across test files.

`testNamespaceType` duplicates `testNamespace` defined in `pkg/submit/submit_test.go`. Consider consolidating into a shared test helper if the pattern continues to spread.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/fetch/blobtx_submit_test.go` around lines 60-64: the test helper testNamespaceType duplicates testNamespace from pkg/submit/submit_test.go; remove the duplicate and consolidate by creating or reusing a shared test helper (e.g., a new common_test.go or a pkg/testhelpers package) that exposes a single function, and update callers in pkg/fetch/blobtx_submit_test.go to call that shared function instead of testNamespaceType; ensure the helper signature and returned type (types.Namespace) match existing uses, and update imports if you move it into a separate package.

pkg/submit/blobtx_test.go (1)
9-23: Consider documenting the expected gas calculation.

The hardcoded expected value `83192` is a magic number. If the gas estimation formula changes, this test will fail without a clear explanation of the expected calculation.

💡 Suggested improvement

Add a comment explaining the gas calculation basis:

```go
// Expected gas = base_gas + per_byte_gas * blob_size + ...
// For a 600-byte blob at sequence 0: 83192
```

Or compute the expected value from known constants if they're available.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/blobtx_test.go` around lines 9-23: the test TestEstimateGas uses an undocumented magic number 83192 for the expected gas; update the test to either compute the expected value from the same constants/formula used by EstimateGas, or add a clear comment explaining the breakdown (e.g., base_gas + per_blob_gas + per_byte_gas * 600 for sequence 0) so future changes to the estimation logic are traceable; locate TestEstimateGas and derive the expected value from the constants, or add the explanatory comment referencing EstimateGas and the blob parameters (Namespace, Data length 600, Commitment) used in the test.

pkg/submit/tx_builder_test.go (2)
13-32: Missing `t.Parallel()` for test isolation.

Tests in this file don't call `t.Parallel()`, which would allow them to run concurrently with other tests (especially useful with the `-race` flag, per guidelines). Consider adding `t.Parallel()` at the start of each test function for consistency with `cmd/apex/main_test.go`.

♻️ Suggested fix

```diff
 func TestBuildMsgPayForBlobs(t *testing.T) {
+	t.Parallel()
+
 	ns1 := testNamespace(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/tx_builder_test.go` around lines 13 - 32, Add t.Parallel() as the first statement in the TestBuildMsgPayForBlobs test to allow it to run concurrently and improve isolation; locate the TestBuildMsgPayForBlobs function (which exercises BuildMsgPayForBlobs) and insert t.Parallel() at the start of the function body so it runs in parallel with other tests.
44-96: Consider splitting this large test into focused subtests.
`TestBuildSignerInfoAndTxEncoding` tests multiple distinct operations (BuildSignerInfo, BuildAuthInfo, MarshalTxBody, BuildSignDoc, MarshalTxRaw). While functional, using `t.Run()` subtests would improve test isolation, readability, and failure diagnostics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/tx_builder_test.go` around lines 44-96: split the large TestBuildSignerInfoAndTxEncoding into focused t.Run subtests to improve isolation and diagnostics: create subtests for "BuildSignerInfo" (call BuildSignerInfo, assert Sequence and pubkey bytes), "BuildAuthInfo" (call BuildAuthInfo then MarshalAuthInfo and assert no error), "MarshalTxBody/BuildSignDoc" (call MarshalTxBody, BuildSignDoc, unmarshal signDoc and assert ChainId/AccountNumber), and "MarshalTxRaw" (call MarshalTxRaw, unmarshal TxRaw and assert signatures length). Keep the existing assertions and error handling but move them into the corresponding subtest closures to preserve behavior and failure messages.

config/load.go (1)
361-379: Consider validating signer key file permissions.

The signer key file contains sensitive cryptographic material. Consider warning or erroring if the file is world-readable (permissions like `0644` or `0666`), as this could expose the private key to other users on the system.

🛡️ Optional permission check

```diff
 func resolveSubmissionSignerKey(s *SubmissionConfig, baseDir string) error {
 	if !s.Enabled {
 		return nil
 	}
 	keyPath := strings.TrimSpace(s.SignerKey)
 	if keyPath == "" {
 		return nil
 	}
 	if !filepath.IsAbs(keyPath) {
 		keyPath = filepath.Join(baseDir, keyPath)
 	}
+	info, err := os.Stat(keyPath)
+	if err != nil {
+		return fmt.Errorf("stat submission signer key %q: %w", keyPath, err)
+	}
+	// Warn if the file is group/world readable (permissions & 0077 != 0).
+	if info.Mode().Perm()&0o077 != 0 {
+		// Log a warning or return an error depending on security posture:
+		// return fmt.Errorf("submission signer key %q has insecure permissions %o", keyPath, info.Mode().Perm())
+	}
+
 	keyBytes, err := os.ReadFile(keyPath)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/load.go` around lines 361-379: in resolveSubmissionSignerKey, add a file permission check on keyPath before reading the file: use os.Stat to get the FileMode and test whether mode&0o077 != 0 (or otherwise detect group/world-readable or writable bits), returning or logging an error/warning if the key file is world-readable (e.g., 0644/0666) to avoid exposing s.SignerPrivateKey; keep the existing behavior of joining baseDir when relative, only perform the permission check when s.Enabled and keyPath is non-empty, then proceed to read the file and set s.SignerPrivateKey as before.

pkg/submit/signer.go (1)
18-18: Hardcoded address prefix limits chain compatibility.

The `defaultAddressPrefix = "celestia"` is hardcoded. Per the signer-mismatch check in `DirectSubmitter.resolveFees` (context snippet at `pkg/submit/direct.go:212`), this means the submitter will reject any `SignerAddress` using a different Bech32 prefix even if the underlying key is valid.

This is acceptable for Celestia mainnet but may need consideration for testnets or future chains with different prefixes. Consider making this configurable if multi-chain support is planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/signer.go` at line 18: defaultAddressPrefix is hardcoded to "celestia", causing SignerAddress checks in DirectSubmitter.resolveFees to reject valid keys with other Bech32 prefixes; make the address prefix configurable (e.g., add a configurable field on DirectSubmitter or a constructor parameter, with a "celestia" fallback) and replace uses of the const defaultAddressPrefix with that configurable value so tests and alternative chains can provide a different Bech32 prefix without changing logic; ensure SignerAddress validation and any related helpers read the new field and keep the current behavior when the value is unset.

pkg/submit/direct_test.go (1)
45-55: Redundant error handling in `GetTx`.

Lines 51-53 check `f.getTxErrs` again after `queueErr` already returned `nil`. The `queueErr` helper returns `nil` when `idx >= len(errs)`, so this secondary check will only trigger if `getTxStatuses` is empty while `getTxErrs` is not, returning the last error. This logic is confusing and potentially masks test setup errors.

♻️ Simplified implementation

```diff
 func (f *fakeAppClient) GetTx(_ context.Context, _ string) (*TxStatus, error) {
 	call := f.getTxCalls
 	f.getTxCalls++
 	if err := queueErr(f.getTxErrs, call); err != nil {
 		return nil, err
 	}
-	if len(f.getTxStatuses) == 0 && len(f.getTxErrs) > 0 {
-		return nil, f.getTxErrs[len(f.getTxErrs)-1]
-	}
 	return queueTxStatus(f.getTxStatuses, call), nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/direct_test.go` around lines 45-55: the GetTx method on fakeAppClient contains a redundant secondary check that inspects f.getTxErrs after queueErr has already been called; remove the lines that check `if len(f.getTxStatuses) == 0 && len(f.getTxErrs) > 0 { return nil, f.getTxErrs[len(f.getTxErrs)-1] }` and simply rely on the result of queueErr; update fakeAppClient.GetTx to call queueErr(f.getTxErrs, call) and return any error it yields, otherwise return the queued status from queueTxStatus so behavior is clear and not duplicated.

pkg/submit/submit.go (1)
104-127: Consider validating required blob fields early.

`decodeBlobs` accepts blobs with empty `Data` or `Commitment` fields. While `BuildMsgPayForBlobs` catches missing commitments and `marshalBlobProto` catches missing data downstream, validating here would provide clearer errors at the API boundary.

This is a recommended improvement but not blocking, as the downstream validation does catch these cases.

💡 Optional: Add early validation for required fields

```diff
 func decodeBlobs(raw json.RawMessage) ([]Blob, error) {
 	var blobsJSON []jsonBlob
 	if err := json.Unmarshal(raw, &blobsJSON); err != nil {
 		return nil, fmt.Errorf("decode submit blobs: %w", err)
 	}
 	blobs := make([]Blob, len(blobsJSON))
 	for i := range blobsJSON {
+		if len(blobsJSON[i].Data) == 0 {
+			return nil, fmt.Errorf("decode submit blob %d: data is required", i)
+		}
+		if len(blobsJSON[i].Commitment) == 0 {
+			return nil, fmt.Errorf("decode submit blob %d: commitment is required", i)
+		}
 		ns, err := namespaceFromBytes(blobsJSON[i].Namespace)
 		if err != nil {
 			return nil, fmt.Errorf("decode submit blob %d namespace: %w", i, err)
 		}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/submit.go` around lines 104-127: in decodeBlobs, add early validation after unmarshalling and before constructing each Blob: for each jsonBlob in blobsJSON, ensure required fields like Data (blobsJSON[i].Data) and Commitment (blobsJSON[i].Commitment) are non-empty and return a descriptive error that includes the blob index (e.g., "decode submit blob %d missing data" / "missing commitment") when validation fails; this keeps the API boundary clear and complements the downstream checks in BuildMsgPayForBlobs and marshalBlobProto.

pkg/submit/tx_builder.go (1)
186-197: Consider documenting precision limits for `FeeAmountFromGasPrice`.

Using `float64` for `gasPrice` works for typical Celestia gas prices but may lose precision for extremely large gas limits (>2^53). For current Celestia parameters this is unlikely to be an issue, but a comment noting the practical limits would help future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/tx_builder.go` around lines 186-197: FeeAmountFromGasPrice uses float64 for gasPrice and multiplies by gasLimit, which can lose integer precision for very large values (beyond 2^53); add a short comment above FeeAmountFromGasPrice documenting this precision limit, noting that results are safe for typical Celestia parameters but may be inaccurate for gasLimit > 2^53 or extremely large gasPrice, and suggesting a switch to math/big (big.Int/big.Rat) if callers need to support those extremes.

pkg/submit/direct.go (1)
297-300: Consider making sequence mismatch detection more robust.

The string matching relies on specific error text ("account sequence mismatch" or "incorrect account sequence"). If Celestia changes the error message wording, this detection would fail silently.
This is acceptable for v1 given the retry bound, but consider adding a comment documenting the expected error format or extracting it to a more visible constant for maintainability.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/direct.go` around lines 297 - 300, The isSequenceMismatchText function relies on two hard-coded substrings which can break if error wording changes; extract those substrings into named constants (e.g., sequenceMismatchMsg1, sequenceMismatchMsg2) and add a clear comment above isSequenceMismatchText documenting the expected error format and why these checks exist (or include a TODO to expand patterns/regex later); keep the current logic in isSequenceMismatchText but reference the new constants so the match strings are more visible and maintainable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Around line 22-27: The Markdown fenced code blocks for the architecture
diagram and the cmd/apex file list (the triple-backtick blocks starting the flow
diagram and the file-list block) are missing language identifiers which triggers
MD040; update those opening fences to include a language token (e.g., use
```text) for the diagram block and the file-list block (and any other fenced
blocks between lines ~33-80) so all fenced blocks include a language identifier.
In `@pkg/submit/app_client.go`:
- Around line 50-52: NewGRPCAppClient currently hardcodes
insecure.NewCredentials(), weakening security; change it to use TLS by default
and allow explicit opt-in for insecure. Update NewGRPCAppClient to accept a
configuration parameter (e.g., a boolean allowInsecure or a
grpc.DialOption/credentials.TransportCredentials argument) and, when not opting
into insecure, create transport credentials via credentials.NewTLS(tlsConfig)
(or default tls.Config) and call grpc.Dial/ grpc.NewClient with those
credentials; when allowInsecure==true keep insecure.NewCredentials() for
backward compatibility. Apply the same change pattern to the similar code in
pkg/fetch/celestia_app.go to ensure both constructors support TLS by default and
optional insecure mode.
In `@proto/cosmos/crypto/secp256k1/keys.proto`:
- Line 3: The proto package declaration lacks a version suffix and violates the
PACKAGE_VERSION_SUFFIX lint rule; update the package statement in
proto/cosmos/crypto/secp256k1/keys.proto from "package cosmos.crypto.secp256k1;"
to include a version suffix (e.g., "package cosmos.crypto.secp256k1.v1;") so it
matches the project's convention used by other packages and buf linting; ensure
any generated/importing code or references to this package (package declaration
in keys.proto) are updated accordingly to the new package name.
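On the `FeeAmountFromGasPrice` precision note above: a `math/big`-based alternative avoids the float64 ceiling entirely. `feeFromGasPriceExact` and its string-typed gas price are hypothetical, shown only to illustrate the suggested `big.Int`/`big.Rat` approach, not the repo's implementation.

```go
package main

import (
	"fmt"
	"math/big"
)

// feeFromGasPriceExact computes ceil(gasPrice * gasLimit) with exact
// rational arithmetic, so no precision is lost for gas limits beyond
// 2^53. gasPrice is a decimal string such as "0.002".
func feeFromGasPriceExact(gasPrice string, gasLimit uint64) (*big.Int, error) {
	price, ok := new(big.Rat).SetString(gasPrice)
	if !ok {
		return nil, fmt.Errorf("invalid gas price %q", gasPrice)
	}
	total := new(big.Rat).Mul(price, new(big.Rat).SetUint64(gasLimit))
	num, den := total.Num(), total.Denom()
	fee := new(big.Int).Quo(num, den)
	if new(big.Int).Rem(num, den).Sign() != 0 {
		fee.Add(fee, big.NewInt(1)) // round up so the fee fully covers gasPrice * gasLimit
	}
	return fee, nil
}

func main() {
	fee, err := feeFromGasPriceExact("0.002", 83192)
	if err != nil {
		panic(err)
	}
	fmt.Println(fee) // 0.002 * 83192 = 166.384, rounded up to 167
}
```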
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 67b239e5-849b-4096-8a5b-6e752f540eb1
⛔ Files ignored due to path filters (12)
- `go.sum` is excluded by `!**/*.sum`
- `pkg/api/grpc/gen/celestia/blob/v1/tx.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/auth/v1beta1/auth.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/auth/v1beta1/query.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/auth/v1beta1/query_grpc.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/base/abci/v1beta1/abci.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/base/v1beta1/coin.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/crypto/secp256k1/keys.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/tx/signing/v1beta1/signing.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/tx/v1beta1/service.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/tx/v1beta1/service_grpc.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `pkg/api/grpc/gen/cosmos/tx/v1beta1/tx.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
📒 Files selected for processing (33)
`AGENTS.md`, `cmd/apex/main.go`, `cmd/apex/main_test.go`, `config/config.go`, `config/load.go`, `config/load_test.go`, `go.mod`, `pkg/api/jsonrpc/blob.go`, `pkg/api/jsonrpc/server_test.go`, `pkg/api/service.go`, `pkg/api/service_test.go`, `pkg/fetch/blobtx_submit_test.go`, `pkg/submit/app_client.go`, `pkg/submit/app_client_test.go`, `pkg/submit/blobtx.go`, `pkg/submit/blobtx_test.go`, `pkg/submit/direct.go`, `pkg/submit/direct_test.go`, `pkg/submit/signer.go`, `pkg/submit/signer_test.go`, `pkg/submit/submit.go`, `pkg/submit/submit_test.go`, `pkg/submit/tx_builder.go`, `pkg/submit/tx_builder_test.go`, `proto/celestia/blob/v1/tx.proto`, `proto/cosmos/auth/v1beta1/auth.proto`, `proto/cosmos/auth/v1beta1/query.proto`, `proto/cosmos/base/abci/v1beta1/abci.proto`, `proto/cosmos/base/v1beta1/coin.proto`, `proto/cosmos/crypto/secp256k1/keys.proto`, `proto/cosmos/tx/signing/v1beta1/signing.proto`, `proto/cosmos/tx/v1beta1/service.proto`, `proto/cosmos/tx/v1beta1/tx.proto`
```diff
@@ -0,0 +1,9 @@
+syntax = "proto3";
+
+package cosmos.crypto.secp256k1;
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify current Buf lint status and whether PACKAGE_VERSION_SUFFIX is already excepted.
fd '^buf(\.gen)?\.ya?ml$' -H
rg -n 'PACKAGE_VERSION_SUFFIX|except|ignore' buf.yaml buf.gen.yaml . 2>/dev/null || true
buf lint proto/cosmos/crypto/secp256k1/keys.proto
```

Repository: evstack/apex
Length of output: 1222
🏁 Script executed:

```shell
cat -n buf.yaml
```

Repository: evstack/apex
Length of output: 210
🏁 Script executed:

```shell
cat -n buf.gen.yaml
```

Repository: evstack/apex
Length of output: 360
🏁 Script executed:

```shell
# Check if there are any proto files with ignore directives
rg -n 'buf:ignore|buf:lint:ignore' proto/cosmos/crypto/secp256k1/keys.proto
```

Repository: evstack/apex
Length of output: 38
🏁 Script executed:

```shell
# Check what other proto files in the same directory look like
fd '.proto$' proto/cosmos/crypto/ | head -20
```

Repository: evstack/apex
Length of output: 96
🏁 Script executed:

```shell
# Check what package patterns are used elsewhere in the proto directory
rg -n '^package ' proto/ | head -30
```

Repository: evstack/apex
Length of output: 912
🏁 Script executed:

```shell
# Check if this is a deliberate Cosmos SDK compatibility choice
# by looking for any config, docs, or comments about proto package versioning
rg -i 'cosmos.*upstream|package.*version|secp256k1' --type md --type txt proto/ README* CLAUDE* 2>/dev/null || true
```

Repository: evstack/apex
Length of output: 38
Add version suffix to package declaration for consistency with buf lint rules.
Line 3 (`package cosmos.crypto.secp256k1;`) fails the `PACKAGE_VERSION_SUFFIX` lint rule. Every other proto package in this codebase includes a version suffix (e.g., `cosmos.tx.v1beta1`, `cosmos.auth.v1beta1`, `apex.v1`). Update to `package cosmos.crypto.secp256k1.v1;` or another appropriate version.
🧰 Tools
🪛 Buf (1.66.1)
[error] 3-3: Package name "cosmos.crypto.secp256k1" should be suffixed with a correctly formed version, such as "cosmos.crypto.secp256k1.v1".
(PACKAGE_VERSION_SUFFIX)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@proto/cosmos/crypto/secp256k1/keys.proto` at line 3, The proto package
declaration lacks a version suffix and violates the PACKAGE_VERSION_SUFFIX lint
rule; update the package statement in proto/cosmos/crypto/secp256k1/keys.proto
from "package cosmos.crypto.secp256k1;" to include a version suffix (e.g.,
"package cosmos.crypto.secp256k1.v1;") so it matches the project's convention
used by other packages and buf linting; ensure any generated/importing code or
references to this package (package declaration in keys.proto) are updated
accordingly to the new package name.
cmd/apex/main_test.go
Outdated
those tests feel a bit useless.
yea will remove this, adding e2e tests
Any reason we don't simply import celestia-app and use the txBuilder to build the tx?
mainly dont want to deal with cosmos sdk deps.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/fetch/blobtx.go (1)
386-418: ⚠️ Potential issue | 🟠 Major — Reject BlobTxs whose commitment count doesn't match blob count.
These commitments are matched purely by index. If `ShareCommitments` is shorter than `parsed.Blobs`, later blobs get `Commitment: nil`, and earlier ones can be mispaired as well. Since `pkg/api/service.go` uses commitment as the lookup key for `blob.GetByCommitment`, it's safer to skip the whole tx once the counts diverge.
Suggested change
```diff
+	if len(parsed.PFB.ShareCommitments) != len(parsed.Blobs) {
+		blobIndex += len(parsed.Blobs)
+		continue
+	}
 	for i, rb := range parsed.Blobs {
 		ns, ok := namespaceFromRawBlob(rb)
 		if !ok {
 			blobIndex++
 			continue
 		}
@@
-		// Commitment comes from MsgPayForBlobs, matched by index.
-		var commitment []byte
-		if i < len(parsed.PFB.ShareCommitments) {
-			commitment = parsed.PFB.ShareCommitments[i]
-		}
+		// Commitment comes from MsgPayForBlobs, matched by index.
+		commitment := parsed.PFB.ShareCommitments[i]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/fetch/blobtx.go` around lines 386 - 418, Check for mismatched counts between parsed.Blobs and parsed.PFB.ShareCommitments before iterating and reject/skip the entire BlobTx when they differ: if len(parsed.PFB.ShareCommitments) != len(parsed.Blobs) then treat the tx as invalid (skip it or return an error) instead of proceeding and assigning nil/mispaired commitments. Update the code around the loop that constructs types.Blob (the section referencing parsed.Blobs, parsed.PFB.ShareCommitments, signer fallback to rb.Signer, and blobIndex) to perform this length check early and avoid appending any blobs from that tx when counts diverge.
🧹 Nitpick comments (2)
pkg/submit/blobtx_test.go (2)
9-23: Consider table-driven tests for broader coverage. The test validates a single scenario. Per coding guidelines, prefer the table-driven pattern to cover additional cases (e.g., multiple blobs, varying data sizes, empty blobs array, different sequence values).
♻️ Example table-driven refactor
```diff
 func TestEstimateGas(t *testing.T) {
 	t.Parallel()
-	gas, err := EstimateGas([]Blob{{
-		Namespace:  testNamespace(1),
-		Data:       make([]byte, 600),
-		Commitment: []byte("c1"),
-	}}, 0)
-	if err != nil {
-		t.Fatalf("EstimateGas: %v", err)
-	}
-	if gas != 83192 {
-		t.Fatalf("gas = %d, want 83192", gas)
+	tests := []struct {
+		name     string
+		blobs    []Blob
+		sequence uint64
+		wantGas  uint64
+	}{
+		{
+			name: "single blob 600 bytes",
+			blobs: []Blob{{
+				Namespace:  testNamespace(1),
+				Data:       make([]byte, 600),
+				Commitment: []byte("c1"),
+			}},
+			sequence: 0,
+			wantGas:  83192,
+		},
+		// Add more cases: multiple blobs, larger data, etc.
+	}
+
+	for _, tc := range tests {
+		t.Run(tc.name, func(t *testing.T) {
+			t.Parallel()
+			gas, err := EstimateGas(tc.blobs, tc.sequence)
+			if err != nil {
+				t.Fatalf("EstimateGas: %v", err)
+			}
+			if gas != tc.wantGas {
+				t.Fatalf("gas = %d, want %d", gas, tc.wantGas)
+			}
+		})
 	}
 }
```

As per coding guidelines: "Use table-driven tests pattern for test cases".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/blobtx_test.go` around lines 9 - 23, TestEstimateGas currently asserts a single hard-coded scenario; refactor it into a table-driven test that iterates multiple cases (e.g., single small blob, multiple blobs, empty blobs slice, varying Data sizes, different sequence values) to increase coverage. Create a slice of test cases with name, input []Blob (use Blob, testNamespace helper), sequence int, expected gas int, and expected error boolean, then loop over them calling EstimateGas and use t.Run for subtests to assert results and errors. Ensure TestEstimateGas (or a new TestEstimateGas_TableDriven) still calls EstimateGas and preserves the original assertion as one of the cases.
25-50: Same suggestion: consider table-driven tests for MarshalBlobTx. This test covers a single scenario. Table-driven tests would allow verifying multiple blobs, edge cases (empty Data, missing Commitment), and different ShareVersion values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/blobtx_test.go` around lines 25 - 50, TestMarshalBlobTx currently exercises only one scenario; convert it into a table-driven test that iterates over cases exercising multiple blobs, edge cases (empty Data, missing Commitment), and different ShareVersion values. Update the test to define a slice of test cases (with name, inner, blobs input, and expected validations), then loop over them calling MarshalBlobTx, using decodeBlobTxEnvelope and assertions for inner, blob count/contents, and typeID (protoBlobTxTypeID) per case; ensure each case runs t.Run(name, func(t *testing.T){ t.Parallel() ... }) and preserves existing failure checks for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 57-58: The CI workflow's Go test invocation for
TestSubmissionViaJSONRPC is missing the race detector flag; update the command
that currently runs "go test -count=1 -timeout 20m -run TestSubmissionViaJSONRPC
./..." to include "-race" so it runs "go test -race -count=1 -timeout 20m -run
TestSubmissionViaJSONRPC ./..." ensuring the submission e2e tests execute under
the race detector.
In `@e2e/submission_test.go`:
- Around line 279-307: The apexProcess currently uses a buffered proc.done chan
error that is sent to once in startApexProcess, but multiple helpers
(waitForApexHTTP, waitForIndexedBlob, and Stop) all receive from it which can
race and lose the exit error; change apexProcess to store the wait error in a
field (e.g., waitErr error) and add a done chan struct{} that gets closed when
the goroutine finishes so any number of waiters can observe completion; in
startApexProcess set proc.waitErr from cmd.Wait() inside the goroutine and
close(proc.done) (or a renamed doneDone chan struct{}), and update
waitForApexHTTP, waitForIndexedBlob, and Stop to select/receive on the done chan
and then read the stored waitErr (safely, add a mutex or atomic if needed)
instead of receiving an error directly from proc.done.
- Around line 447-470: The doRPC helper uses http.DefaultClient which has no
deadline and can hang tests; change it to use a client with a timeout or a
context with timeout: create a local http.Client with a reasonable Timeout
(e.g., 5–15s) or use context.WithTimeout when building the request in doRPC,
then call that client.Do (or req.WithContext(ctx)) instead of
http.DefaultClient.Do so RPC calls cannot block the e2e job indefinitely; update
the error messages unchanged and ensure you import time/context as needed.
In `@justfile`:
- Around line 47-49: The e2e-submission recipe is running `cd e2e` in a separate
shell so the subsequent `go test` runs from the repo root; update the
e2e-submission recipe to run the change directory and tests in the same shell by
combining the commands with `&&` and add the `-race` flag to the `go test`
invocation (target: e2e-submission).
In `@pkg/fetch/blobtx_test.go`:
- Around line 186-204: The test TestParseBlobTxLegacyFormat currently checks
signer and data but not that legacy share commitments are recovered; add an
assertion after parsing that verifies the commitment produced from
buildLegacyBlobTx (the [][]byte{[]byte("legacy-commit")} input) is present in
the parsed result — e.g. check parsed.Blobs[0].ShareCommitments (or the
appropriate field on parsed) contains the byte slice/string "legacy-commit" and
fail the test if it does not; this uses parseBlobTx and parsed.PFB as context to
locate where to add the assertion.
In `@pkg/submit/blobtx.go`:
- Around line 70-78: The function marshalBlobProto should validate
blob.Namespace length before indexing/slicing to avoid panics; in
marshalBlobProto check that len(blob.Namespace) == types.NamespaceSize and
return a clear error when not, then set namespaceVersion = blob.Namespace[0] and
namespaceID = blob.Namespace[1:] only after that validation; also ensure the
error message reports the actual len(blob.Namespace) and expected
types.NamespaceSize (and keep the existing blob.Data empty check).
In `@pkg/submit/direct_test.go`:
- Around line 175-184: The test currently only checks for any non-nil error from
submitter.Submit; update it to assert the specific signer-mismatch failure and
that no broadcast happened: after calling submitter.Submit with
TxConfig{SignerAddress: "celestia1different"}, assert that the returned error
matches the expected signer-mismatch error (use errors.Is(err,
ErrSignerMismatch) or assert the error message contains a clear "signer
mismatch" string) and additionally assert broadcastCalls == 0 to ensure nothing
was broadcast; reference the Submit method on submitter, the Request/ TxConfig
used in the test, the expected ErrSignerMismatch symbol (or the exact message)
and the broadcastCalls counter to locate and change the assertions.
---
Outside diff comments:
In `@pkg/fetch/blobtx.go`:
- Around line 386-418: Check for mismatched counts between parsed.Blobs and
parsed.PFB.ShareCommitments before iterating and reject/skip the entire BlobTx
when they differ: if len(parsed.PFB.ShareCommitments) != len(parsed.Blobs) then
treat the tx as invalid (skip it or return an error) instead of proceeding and
assigning nil/mispaired commitments. Update the code around the loop that
constructs types.Blob (the section referencing parsed.Blobs,
parsed.PFB.ShareCommitments, signer fallback to rb.Signer, and blobIndex) to
perform this length check early and avoid appending any blobs from that tx when
counts diverge.
---
Nitpick comments:
In `@pkg/submit/blobtx_test.go`:
- Around line 9-23: TestEstimateGas currently asserts a single hard-coded
scenario; refactor it into a table-driven test that iterates multiple cases
(e.g., single small blob, multiple blobs, empty blobs slice, varying Data sizes,
different sequence values) to increase coverage. Create a slice of test cases
with name, input []Blob (use Blob, testNamespace helper), sequence int, expected
gas int, and expected error boolean, then loop over them calling EstimateGas and
use t.Run for subtests to assert results and errors. Ensure TestEstimateGas (or
a new TestEstimateGas_TableDriven) still calls EstimateGas and preserves the
original assertion as one of the cases.
- Around line 25-50: TestMarshalBlobTx currently exercises only one scenario;
convert it into a table-driven test that iterates over cases exercising multiple
blobs, edge cases (empty Data, missing Commitment), and different ShareVersion
values. Update the test to define a slice of test cases (with name, inner, blobs
input, and expected validations), then loop over them calling MarshalBlobTx,
using decodeBlobTxEnvelope and assertions for inner, blob count/contents, and
typeID (protoBlobTxTypeID) per case; ensure each case runs t.Run(name, func(t
*testing.T){ t.Parallel() ... }) and preserves existing failure checks for
consistency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3158521f-6cce-40c4-9de0-0371b4f2ce4d
⛔ Files ignored due to path filters (1)
`e2e/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (9)
`.github/workflows/ci.yml`, `e2e/go.mod`, `e2e/submission_test.go`, `justfile`, `pkg/fetch/blobtx.go`, `pkg/fetch/blobtx_test.go`, `pkg/submit/blobtx.go`, `pkg/submit/blobtx_test.go`, `pkg/submit/direct_test.go`
🧹 Nitpick comments (4)
pkg/submit/submit_test.go (2)
10-10: Use `testSignerAddress` constant consistently. Line 31 hardcodes `"celestia1test"` instead of using the `testSignerAddress` constant defined at line 10. If the constant is updated, the test input won't change, causing a false failure.
Suggested fix
```diff
 optsRaw, err := json.Marshal(map[string]any{
 	"gas_price":        0.25,
 	"is_gas_price_set": true,
 	"max_gas_price":    1.5,
 	"gas":              1234,
 	"tx_priority":      int(PriorityHigh),
-	"signer_address":   "celestia1test",
+	"signer_address":   testSignerAddress,
 })
```

Also applies to: 31-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/submit_test.go` at line 10, The test hardcodes the signer address string instead of using the existing constant testSignerAddress; update the test to replace the literal "celestia1test" with the testSignerAddress constant (used where the test constructs inputs/assertions) so the test consistently relies on the single source of truth (look for occurrences in submit_test.go around the test that builds the signer/input and change the hardcoded value to testSignerAddress).
12-90: Consider consolidating into table-driven tests. The coding guidelines recommend using table-driven tests. The `DecodeRequest` tests (lines 12-74) and `MarshalResult` tests (lines 76-90) could each be consolidated into table-driven form for easier extension and maintenance.
Example structure for DecodeRequest tests
```go
func TestDecodeRequest(t *testing.T) {
	tests := []struct {
		name        string
		blobsRaw    json.RawMessage
		optionsRaw  json.RawMessage
		wantErr     bool
		checkResult func(t *testing.T, req *Request)
	}{
		{
			name:       "full request with options",
			blobsRaw:   /* ... */,
			optionsRaw: /* ... */,
			checkResult: func(t *testing.T, req *Request) {
				assertDecodedBlob(t, req, testNamespace(7))
				assertDecodedOptions(t, req.Options)
			},
		},
		{
			name:       "nil options",
			blobsRaw:   /* ... */,
			optionsRaw: json.RawMessage("null"),
			checkResult: func(t *testing.T, req *Request) {
				if req.Options != nil {
					t.Fatalf("options = %#v, want nil", req.Options)
				}
			},
		},
		{
			name:       "invalid namespace",
			blobsRaw:   json.RawMessage(`[{"namespace":"AQI=",...}]`),
			optionsRaw: nil,
			wantErr:    true,
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			req, err := DecodeRequest(tc.blobsRaw, tc.optionsRaw)
			if tc.wantErr {
				if err == nil {
					t.Fatal("expected error, got nil")
				}
				return
			}
			if err != nil {
				t.Fatalf("DecodeRequest: %v", err)
			}
			if tc.checkResult != nil {
				tc.checkResult(t, req)
			}
		})
	}
}
```

As per coding guidelines: "Use table-driven tests pattern for test cases".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/submit_test.go` around lines 12 - 90, The tests for DecodeRequest and MarshalResult are repetitive and should be converted to table-driven tests: consolidate TestDecodeRequest, TestDecodeRequestNilOptions, and TestDecodeRequestRejectsInvalidNamespace into a single TestDecodeRequest table that enumerates cases (full request, nil options, invalid namespace) with blobsRaw, optionsRaw, wantErr and a checkResult callback that calls assertDecodedBlob and assertDecodedOptions or validates nil options; similarly combine TestMarshalResult and TestMarshalResultRejectsNil into a small table-driven TestMarshalResult with cases for a valid Result and a nil Result checking expected output or expected error; update test runner to iterate tests with t.Run and call DecodeRequest/MarshalResult accordingly while preserving existing helper functions (assertDecodedBlob, assertDecodedOptions) and testNamespace references.
pkg/submit/tx_builder_test.go (1)
93-101: Consider table-driven tests for better edge case coverage. The test validates the basic rounding-up behavior, but a table-driven approach would better cover edge cases such as zero gas, zero price, exact integer results, and boundary values.
♻️ Suggested table-driven refactor
```diff
 func TestFeeAmountFromGasPrice(t *testing.T) {
-	fee, err := FeeAmountFromGasPrice(1001, 0.001)
-	if err != nil {
-		t.Fatalf("FeeAmountFromGasPrice: %v", err)
-	}
-	if fee != "2" {
-		t.Fatalf("fee = %q, want 2", fee)
+	tests := []struct {
+		name     string
+		gasLimit uint64
+		gasPrice float64
+		want     string
+		wantErr  bool
+	}{
+		{name: "rounds up", gasLimit: 1001, gasPrice: 0.001, want: "2"},
+		{name: "exact integer", gasLimit: 1000, gasPrice: 0.001, want: "1"},
+		{name: "zero gas", gasLimit: 0, gasPrice: 0.001, want: "0"},
+		{name: "zero price", gasLimit: 1000, gasPrice: 0, want: "0"},
+		{name: "large values", gasLimit: 1_000_000, gasPrice: 0.002, want: "2000"},
+	}
+	for _, tc := range tests {
+		t.Run(tc.name, func(t *testing.T) {
+			got, err := FeeAmountFromGasPrice(tc.gasLimit, tc.gasPrice)
+			if (err != nil) != tc.wantErr {
+				t.Fatalf("error = %v, wantErr = %v", err, tc.wantErr)
+			}
+			if got != tc.want {
+				t.Fatalf("fee = %q, want %q", got, tc.want)
+			}
+		})
 	}
 }
```

As per coding guidelines: "Use table-driven tests pattern for test cases."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/tx_builder_test.go` around lines 93 - 101, Replace the single-case TestFeeAmountFromGasPrice with a table-driven test that iterates over multiple cases exercising FeeAmountFromGasPrice (include cases: zero gas, zero price, exact-integer result, rounding-up boundary like gas=1001 price=0.001, and large values), each case specifying input gas and price, expected fee string and whether an error is expected; in the loop call FeeAmountFromGasPrice and assert both the returned fee and error behavior for each table row, and name subtests using t.Run to make failures clear (use the existing TestFeeAmountFromGasPrice test function and the FeeAmountFromGasPrice symbol to locate where to change).docs/running.md (1)
67-71: Documentation is missing submission configuration fields. The docs show only `enabled`, `app_grpc_addr`, `chain_id`, and `signer_key`, but the full `SubmissionConfig` struct (per `config/config.go:96-105`) also includes `gas_price`, `max_gas_price`, and `confirmation_timeout`. The `defaultConfigYAML` in `config/load.go` includes these fields. Consider adding the missing fields for completeness:
```yaml
submission:
  enabled: false
  app_grpc_addr: "localhost:9090"
  chain_id: "mocha-4"
  signer_key: "/path/to/apex-submission.key"
  gas_price: 0              # default gas price per unit
  max_gas_price: 0          # maximum gas price cap
  confirmation_timeout: 30  # seconds to wait for tx confirmation
```
Verify each finding against the current code and only fix it if needed. In `@docs/running.md` around lines 67 - 71, The documentation's submission YAML is missing fields present on SubmissionConfig and in defaultConfigYAML; update docs/running.md to include gas_price, max_gas_price, and confirmation_timeout under the submission block, matching the defaults/types used in config/config.go (SubmissionConfig) and config/load.go (defaultConfigYAML) so the example config is complete and consistent with the actual struct.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@docs/running.md`:
- Around line 67-71: The documentation's submission YAML is missing fields
present on SubmissionConfig and in defaultConfigYAML; update docs/running.md to
include gas_price, max_gas_price, and confirmation_timeout under the submission
block, matching the defaults/types used in config/config.go (SubmissionConfig)
and config/load.go (defaultConfigYAML) so the example config is complete and
consistent with the actual struct.
In `@pkg/submit/submit_test.go`:
- Line 10: The test hardcodes the signer address string instead of using the
existing constant testSignerAddress; update the test to replace the literal
"celestia1test" with the testSignerAddress constant (used where the test
constructs inputs/assertions) so the test consistently relies on the single
source of truth (look for occurrences in submit_test.go around the test that
builds the signer/input and change the hardcoded value to testSignerAddress).
- Around line 12-90: The tests for DecodeRequest and MarshalResult are
repetitive and should be converted to table-driven tests: consolidate
TestDecodeRequest, TestDecodeRequestNilOptions, and
TestDecodeRequestRejectsInvalidNamespace into a single TestDecodeRequest table
that enumerates cases (full request, nil options, invalid namespace) with
blobsRaw, optionsRaw, wantErr and a checkResult callback that calls
assertDecodedBlob and assertDecodedOptions or validates nil options; similarly
combine TestMarshalResult and TestMarshalResultRejectsNil into a small
table-driven TestMarshalResult with cases for a valid Result and a nil Result
checking expected output or expected error; update test runner to iterate tests
with t.Run and call DecodeRequest/MarshalResult accordingly while preserving
existing helper functions (assertDecodedBlob, assertDecodedOptions) and
testNamespace references.
In `@pkg/submit/tx_builder_test.go`:
- Around line 93-101: Replace the single-case TestFeeAmountFromGasPrice with a
table-driven test that iterates over multiple cases exercising
FeeAmountFromGasPrice (include cases: zero gas, zero price, exact-integer
result, rounding-up boundary like gas=1001 price=0.001, and large values), each
case specifying input gas and price, expected fee string and whether an error is
expected; in the loop call FeeAmountFromGasPrice and assert both the returned
fee and error behavior for each table row, and name subtests using t.Run to make
failures clear (use the existing TestFeeAmountFromGasPrice test function and the
FeeAmountFromGasPrice symbol to locate where to change).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d32a1789-d853-427d-8928-d60aa7288b8c
📒 Files selected for processing (16)
`cmd/apex/main.go`, `cmd/apex/main_test.go`, `config/config.go`, `config/load.go`, `config/load_test.go`, `docs/running.md`, `pkg/api/jsonrpc/server_test.go`, `pkg/api/service_test.go`, `pkg/fetch/blobtx_submit_test.go`, `pkg/submit/app_client_test.go`, `pkg/submit/blobtx_test.go`, `pkg/submit/direct_test.go`, `pkg/submit/signer.go`, `pkg/submit/signer_test.go`, `pkg/submit/submit_test.go`, `pkg/submit/tx_builder_test.go`
🚧 Files skipped from review as they are similar to previous changes (7)
- pkg/submit/direct_test.go
- config/config.go
- cmd/apex/main_test.go
- pkg/submit/blobtx_test.go
- pkg/submit/app_client_test.go
- pkg/submit/signer_test.go
- config/load_test.go
Actionable comments posted: 2
♻️ Duplicate comments (1)
pkg/submit/blobtx.go (1)
70-78: ⚠️ Potential issue | 🔴 Critical — Validate namespace length before indexing/slicing to prevent panic.
At Line 74 and Line 75, `blob.Namespace` is indexed/sliced before size validation. An empty namespace can panic instead of returning an error.
🐛 Suggested fix
```diff
 func marshalBlobProto(blob Blob) ([]byte, error) {
 	if len(blob.Data) == 0 {
 		return nil, errors.New("blob data is required")
 	}
+	if len(blob.Namespace) != types.NamespaceSize {
+		return nil, fmt.Errorf("invalid namespace size: got %d, want %d", len(blob.Namespace), types.NamespaceSize)
+	}
 	namespaceVersion := blob.Namespace[0]
 	namespaceID := blob.Namespace[1:]
-	if len(namespaceID) != types.NamespaceSize-1 {
-		return nil, fmt.Errorf("invalid namespace size: got %d, want %d", len(blob.Namespace), types.NamespaceSize)
-	}
 	out := protowire.AppendTag(nil, 1, protowire.BytesType)
```
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/blobtx.go` around lines 70 - 78, In marshalBlobProto, validate the length of blob.Namespace before indexing/slicing it to avoid a panic: check that len(blob.Namespace) == types.NamespaceSize (or at least >=1 and == types.NamespaceSize) and return an error if not, then only after that extract namespaceVersion := blob.Namespace[0] and namespaceID := blob.Namespace[1:]; also adjust the error message to report len(blob.Namespace) rather than using the invalid slice length.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/load.go`:
- Around line 333-359: The emptiness checks in validateSubmission allow
whitespace-only strings (e.g., s.SignerKey) to pass; call strings.TrimSpace on
the relevant string fields (at minimum s.SignerKey, and similarly
s.CelestiaAppGRPCAddr and s.ChainID) before validating them and either replace
the fields with the trimmed value or validate against the trimmed variable so
that values like " " are treated as empty and rejected; update
validateSubmission to trim and validate these fields (use strings.TrimSpace)
prior to the existing empty-string checks.
---
Duplicate comments:
In `@pkg/submit/blobtx.go`:
- Around line 70-78: In marshalBlobProto, validate the length of blob.Namespace
before indexing/slicing it to avoid a panic: check that len(blob.Namespace) ==
types.NamespaceSize (or at least >=1 and == types.NamespaceSize) and return an
error if not, then only after that extract namespaceVersion := blob.Namespace[0]
and namespaceID := blob.Namespace[1:]; also adjust the error message to report
len(blob.Namespace) rather than using the invalid slice length.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9e17b60b-0e38-41ec-9add-1eede32af9d6
📒 Files selected for processing (9)
`config/load.go`, `config/load_test.go`, `pkg/api/jsonrpc/blob.go`, `pkg/fetch/blobtx.go`, `pkg/submit/blobtx.go`, `pkg/submit/submit.go`, `pkg/submit/submit_test.go`, `pkg/submit/tx_builder_test.go`, `pkg/types/types.go`
Actionable comments posted: 1
♻️ Duplicate comments (2)
config/load.go (1)
341-375: ⚠️ Potential issue | 🟠 Major — Trim submission strings before validating them.
Whitespace-only `submission.app_grpc_addr`, `submission.chain_id`, and `submission.signer_key` still pass here. `resolveSubmissionSignerKeyPath` also returns early on a trimmed-empty key without writing the normalized value back, so `" "` survives until the submitter tries to open it.
💡 Suggested fix
```diff
 func validateSubmission(s *SubmissionConfig) error {
 	if !s.Enabled {
 		return nil
 	}
+	s.CelestiaAppGRPCAddr = strings.TrimSpace(s.CelestiaAppGRPCAddr)
+	s.ChainID = strings.TrimSpace(s.ChainID)
+	s.SignerKey = strings.TrimSpace(s.SignerKey)
 	if s.CelestiaAppGRPCAddr == "" {
 		return errors.New("submission.app_grpc_addr is required when submission.enabled is true")
 	}
@@
 func resolveSubmissionSignerKeyPath(s *SubmissionConfig, baseDir string) error {
 	if !s.Enabled {
 		return nil
 	}
 	keyPath := strings.TrimSpace(s.SignerKey)
+	s.SignerKey = keyPath
 	if keyPath == "" {
 		return nil
 	}
```
Verify each finding against the current code and only fix it if needed. In `@config/load.go` around lines 341 - 375, validateSubmission currently treats whitespace-only fields as valid and resolveSubmissionSignerKeyPath trims but doesn't persist the normalized value; update validation to use trimmed values for CelestiaAppGRPCAddr, ChainID and SignerKey (e.g., trim into local vars or assign back) so empty/whitespace fails the required checks, and modify resolveSubmissionSignerKeyPath to set s.SignerKey = keyPath after trimming (and return an error if trimmed SignerKey is empty when s.Enabled is true) so the normalized path is stored before any file access.
pkg/submit/submit.go (1)
132-141: ⚠️ Potential issue | 🟠 Major — Normalize and validate submit options at the JSON boundary.
The exact `"null"` check misses whitespace-wrapped null, and negative gas values / arbitrary `tx_priority` values are accepted here and only fail later, if at all. Rejecting them during decode keeps the JSON-RPC surface deterministic.
💡 Suggested fix
```diff
 import (
+	"bytes"
 	"context"
 	"encoding/json"
 	"errors"
 	"fmt"
@@
 func decodeOptions(raw json.RawMessage) (*TxConfig, error) {
-	if len(raw) == 0 || string(raw) == "null" {
+	trimmed := bytes.TrimSpace(raw)
+	if len(trimmed) == 0 || bytes.Equal(trimmed, []byte("null")) {
 		return nil, nil
 	}
 	var cfg TxConfig
-	if err := json.Unmarshal(raw, &cfg); err != nil {
+	if err := json.Unmarshal(trimmed, &cfg); err != nil {
 		return nil, fmt.Errorf("decode submit options: %w", err)
 	}
+	if cfg.GasPrice < 0 {
+		return nil, errors.New("decode submit options: gas_price must be non-negative")
+	}
+	if cfg.MaxGasPrice < 0 {
+		return nil, errors.New("decode submit options: max_gas_price must be non-negative")
+	}
+	if cfg.TxPriority != 0 &&
+		cfg.TxPriority != PriorityLow &&
+		cfg.TxPriority != PriorityMedium &&
+		cfg.TxPriority != PriorityHigh {
+		return nil, fmt.Errorf("decode submit options: invalid tx_priority %d", cfg.TxPriority)
+	}
 	return &cfg, nil
 }
```
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/submit.go` around lines 132 - 141, The decodeOptions function should normalize JSON nulls and validate fields early: in decodeOptions, treat any whitespace-wrapped "null" (trim spaces before comparing) as nil, and after unmarshalling into TxConfig validate and reject invalid values (e.g., negative Gas values and out-of-range TxPriority) returning a clear error; update decodeOptions to trim raw bytes and check for empty/trimmed "null" and then validate the TxConfig fields (referencing TxConfig, decodeOptions) before returning to ensure deterministic JSON-RPC errors.
🧹 Nitpick comments (8)
pkg/api/grpc/server_test.go (1)
**132-137**: Consider extracting `testNamespace` to a shared test utility.

This function is duplicated verbatim in `pkg/api/notifier_test.go`. Extracting it to a shared test helper (e.g., `pkg/testutil/namespace.go` or an internal test package) would reduce duplication and ensure consistent test namespace construction across the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/grpc/server_test.go` around lines 132 - 137, The test helper function testNamespace (used in pkg/api/grpc/server_test.go and duplicated in pkg/api/notifier_test.go) should be moved into a shared test utility so both tests import and call the same implementation; create a new package-level test helper (e.g., a test-only package) that exposes testNamespace, move the function body there, update both server_test.go and notifier_test.go to import that helper and call testNamespace(b) instead of duplicating the code, and run tests to ensure the symbol resolution and imports are correct.

pkg/fetch/blobtx_test.go (1)
**186-210**: Consider adding a multi-blob legacy format test for parity.

`TestParseBlobTxMultiBlob` covers the proto format with multiple blobs, but there's no equivalent for the legacy format. Adding a similar test would ensure the legacy multi-blob path is exercised.

💡 Example multi-blob legacy test
```go
func TestParseBlobTxLegacyFormatMultiBlob(t *testing.T) {
	tx := buildLegacyBlobTx("legacy-signer",
		[][]byte{[]byte("c1"), []byte("c2")},
		rawBlob{Namespace: testNS(1), Data: []byte("a")},
		rawBlob{Namespace: testNS(2), Data: []byte("b")},
	)
	parsed, err := parseBlobTx(tx)
	if err != nil {
		t.Fatalf("parseBlobTx legacy multi-blob: %v", err)
	}
	if len(parsed.Blobs) != 2 {
		t.Fatalf("got %d blobs, want 2", len(parsed.Blobs))
	}
	if len(parsed.PFB.ShareCommitments) != 2 {
		t.Fatalf("got %d commitments, want 2", len(parsed.PFB.ShareCommitments))
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/fetch/blobtx_test.go` around lines 186 - 210, Add a new test mirroring TestParseBlobTxMultiBlob but using the legacy builder: create TestParseBlobTxLegacyFormatMultiBlob that constructs a legacy tx via buildLegacyBlobTx with multiple share commitments (e.g., [][]byte{[]byte("c1"), []byte("c2")}) and multiple rawBlob args, call parseBlobTx on that tx, and assert parsed.Blobs length and parsed.PFB.ShareCommitments length are both 2; reuse the same validation style as TestParseBlobTxLegacyFormat to fail the test on errors or mismatched counts.

pkg/submit/blobtx.go (1)
**10-14**: Consider adding source documentation for gas constants.

These gas estimation constants are critical for correct transaction submission. Adding comments referencing the Celestia documentation or source where these values are derived would improve maintainability and make it easier to keep them in sync with protocol updates.
```diff
 const (
-	baseBlobTxGas            = 65_000
-	firstSequenceGasOverhead = 10_000
-	gasPerBlobByte           = 8
+	// baseBlobTxGas is the fixed gas cost for a PayForBlobs transaction.
+	// Source: Celestia gas estimation guidance.
+	baseBlobTxGas = 65_000
+	// firstSequenceGasOverhead is extra gas for the first tx from an account.
+	firstSequenceGasOverhead = 10_000
+	// gasPerBlobByte is the per-byte gas cost for blob data.
+	gasPerBlobByte = 8
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/blobtx.go` around lines 10 - 14, Add source documentation comments above the gas constants explaining what each constant represents and citing the Celestia specification or repository commit/PR where the values come from; specifically document baseBlobTxGas, firstSequenceGasOverhead, and gasPerBlobByte with short descriptions (e.g., base transaction gas, per-sequence overhead, per-blob-byte cost) and include a reference to the exact Celestia doc/page or git commit/line so future reviewers can verify or update the values.

pkg/api/service_test.go (2)
**358-365**: Clarify the use of `"index": -1` in test data.

The test uses `"index": -1` for the blob, which is unusual since blob indices are typically non-negative. If `-1` is intentional (e.g., as a sentinel for "unset"), consider adding a comment. Otherwise, use a valid non-negative index.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/service_test.go` around lines 358 - 365, The test constructs blob JSON via the json.Marshal call that assigns "index": -1 (blobsRaw variable); clarify intent by either replacing -1 with a valid non-negative index (e.g., 0) if you meant a real blob index, or add a short inline comment immediately above the json.Marshal block explaining that -1 is an intentional sentinel for "unset" and is handled by the code under test—update any assertions if you change the value.
**397-412**: Consider using `bytes.Equal` for byte slice comparisons.

The helper uses `string(blob.Data) != "hello"` and similar string conversions for byte comparisons. While functional, using `bytes.Equal` is more idiomatic for comparing byte slices.

Suggested improvement
```diff
 func assertSubmittedBlob(t *testing.T, blob submit.Blob, wantNamespace types.Namespace, wantSigner []byte) {
 	t.Helper()
 	if blob.Namespace != wantNamespace {
 		t.Fatalf("namespace = %x, want %x", blob.Namespace, wantNamespace)
 	}
-	if string(blob.Data) != "hello" {
+	if !bytes.Equal(blob.Data, []byte("hello")) {
 		t.Fatalf("data = %q, want %q", blob.Data, "hello")
 	}
-	if string(blob.Commitment) != "c1" {
+	if !bytes.Equal(blob.Commitment, []byte("c1")) {
 		t.Fatalf("commitment = %q, want %q", blob.Commitment, "c1")
 	}
-	if string(blob.Signer) != string(wantSigner) {
+	if !bytes.Equal(blob.Signer, wantSigner) {
 		t.Fatalf("signer = %x, want %x", blob.Signer, wantSigner)
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/service_test.go` around lines 397 - 412, The assertions in assertSubmittedBlob use string conversions to compare byte slices (blob.Data, blob.Commitment, blob.Signer vs literals/wantSigner); replace these with bytes.Equal for idiomatic, safe slice comparison (e.g., bytes.Equal(blob.Data, []byte("hello")), bytes.Equal(blob.Commitment, []byte("c1")), and bytes.Equal(blob.Signer, wantSigner)) and add an import for the bytes package in the test file; keep the same failure messages but use t.Fatalf when bytes.Equal returns false.

config/config.go (1)
**135-143**: Consider documenting the meaning of zero values for gas price fields.

The default `GasPrice: 0` and `MaxGasPrice: 0` values could be ambiguous: it's unclear whether 0 means "use automatic estimation" or "invalid/unset". Consider adding a comment or documentation clarifying the expected behavior when these are zero.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/config.go` around lines 135 - 143, The SubmissionConfig has ambiguous defaults GasPrice: 0 and MaxGasPrice: 0; update config.go to document the intended semantics for zero (e.g., "0 = automatic estimation" or "0 = unset/invalid") next to the GasPrice and MaxGasPrice fields of SubmissionConfig, and add/mention a clear fallback or validation in the config initialization/validation path (e.g., in the constructor or ValidateSubmissionConfig function) to enforce or translate the zero value into the intended behavior (or require explicit non-zero values). Use the SubmissionConfig, GasPrice, MaxGasPrice, and ConfirmationTimeout identifiers to locate the fields and related validation code to update.

cmd/apex/main_test.go (1)
**25-161**: Consider table-driving the `openBlobSubmitter` scenarios.

These three tests all exercise the same function with small config variations, so a single table would remove duplicated setup and make it easier to extend the submission cases later.

As per coding guidelines:

> `**/*_test.go`: Use table-driven tests pattern for test cases.
Verify each finding against the current code and only fix it if needed. In `@cmd/apex/main_test.go` around lines 25 - 161, Three tests exercise openBlobSubmitter with small config variations; convert them to a single table-driven test to remove duplication. Create a test function (e.g. TestOpenBlobSubmitterCases) with a slice of cases containing name, cfg overrides (Submission.Enabled, Submission.SignerKey, grpc address) and expected results (nil submitter, error, or successful Submit with height/checks). In the loop call t.Run(case.name, func(t *testing.T){...}) and apply shared setup via helper functions for signer loading and for starting the gRPC auth/tx servers only when the case requires a running backend; invoke openBlobSubmitter(cfg) and assert the case-specific expectations (submitter nil vs error vs successful Submit and checks), reusing symbols openBlobSubmitter, submissionConfigWithAddr, writeSignerKey, submitter.Submit, and the submissionAuthServer/submissionTxServer helpers.pkg/submit/tx_builder.go (1)
**192-200**: Consider fixed-point arithmetic for fee calculations.

The function intentionally rounds up, as documented in its comment, to ensure fees don't fall short. For stronger mathematical guarantees in financial code, consider using `big.Rat` or a decimal library instead of floating-point multiplication, which would eliminate any IEEE-754 precision concerns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/submit/tx_builder.go` around lines 192 - 200, FeeAmountFromGasPrice currently multiplies float64 gasPrice by gasLimit which risks IEEE-754 precision errors; replace the float arithmetic with exact rational/decimal math (e.g., math/big.Rat or a decimal library) inside FeeAmountFromGasPrice: construct a big.Rat for gasPrice, multiply by a big.Int representing gasLimit, compute the ceiling of the rational result (for big.Rat: ceil = (num + den - 1) / den using big.Int arithmetic), convert that integer to the desired string and return it; update validation of gasPrice inputs accordingly and keep the function name FeeAmountFromGasPrice, parameters gasLimit and gasPrice (or change gasPrice type to string/decimal type) so callers and error handling remain clear.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/submit/direct.go`:
- Around line 209-241: The resolveFees method in DirectSubmitter currently
ignores a per-request TxConfig.MaxGasPrice; after determining gasPrice (the
block that sets gasPrice from opts when opts.IsGasPriceSet or opts.GasPrice>0)
add a check that if opts != nil && opts.MaxGasPrice > 0 and gasPrice >
opts.MaxGasPrice return an error similar to the existing maxGasPrice check
(e.g., "submission gas price %.6f exceeds per-request max %.6f"), and keep the
existing global s.maxGasPrice enforcement afterwards (or combine checks) so both
per-request and submitter-wide ceilings are respected; reference
DirectSubmitter.resolveFees, TxConfig.MaxGasPrice, and s.maxGasPrice.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c9fddca2-70e2-40d9-8614-3a042cf9c20b
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (24)
- `.github/workflows/ci.yml`
- `cmd/apex/main_test.go`
- `config/config.go`
- `config/load.go`
- `config/load_test.go`
- `e2e/submission_test.go`
- `go.mod`
- `justfile`
- `pkg/api/grpc/server_test.go`
- `pkg/api/jsonrpc/server_test.go`
- `pkg/api/notifier_test.go`
- `pkg/api/service_test.go`
- `pkg/fetch/blobtx_submit_test.go`
- `pkg/fetch/blobtx_test.go`
- `pkg/submit/blobtx.go`
- `pkg/submit/blobtx_test.go`
- `pkg/submit/celestia_blob.go`
- `pkg/submit/direct.go`
- `pkg/submit/direct_test.go`
- `pkg/submit/submit.go`
- `pkg/submit/submit_test.go`
- `pkg/submit/tx_builder.go`
- `pkg/submit/tx_builder_test.go`
- `pkg/types/namespace_blob.go`
🚧 Files skipped from review as they are similar to previous changes (6)
- pkg/submit/submit_test.go
- .github/workflows/ci.yml
- pkg/api/jsonrpc/server_test.go
- config/load_test.go
- go.mod
- pkg/submit/direct_test.go
Overview
closes #4
closes #9
Summary by CodeRabbit
New Features
Documentation
Tests