This repository was archived by the owner on Dec 5, 2021. It is now read-only.

[pull] develop from ethereum-optimism:develop#573

Open
pull[bot] wants to merge 10000 commits into omgnetwork:develop from ethereum-optimism:develop
Conversation


@pull pull bot commented Oct 13, 2021

See Commits and Changes for more details.


Created by pull[bot]


ajsutton and others added 30 commits March 9, 2026 20:26
* op-supernode: return partial optimistic blocks in superroot_atTimestamp

Previously, when VerifiedAt returned NotFound for a chain, the code
would skip optimistic data collection for that chain entirely. This
meant OptimisticAtTimestamp would be missing entries for chains that
lacked verified data, even if optimistic data was available.

Now the handler always attempts to collect optimistic data regardless
of whether verified data is present. Chains where both verified and
optimistic data are unavailable are absent from the map (as documented
by the response type comment). The Data field remains nil whenever any
chain lacks verified data.

New tests cover:
- verified NotFound but optimistic available -> chain present in map
- verified NotFound and optimistic NotFound -> chain absent from map
- verified NotFound and optimistic errors non-NotFound -> returns error

https://claude.ai/code/session_01SMasfxHWzRezDREXkV2TzR

* op-acceptance-tests: unskip interop fault proof tests (#19180)

These tests were blocked on supernode not returning optimistic blocks
for chains that hadn't yet been fully verified. Now that the
superroot_atTimestamp RPC returns partial optimistic data (whatever
blocks are available), the skip condition is resolved.

https://claude.ai/code/session_01SMasfxHWzRezDREXkV2TzR

* Attempt to fix test.

* Fix test.

* Rename method.

* Better name

* Simplify superroot code.

---------

Co-authored-by: Claude <noreply@anthropic.com>
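The map semantics described in this commit (chains with either kind of data are present, chains with neither are absent, and the aggregate data is withheld unless every chain is verified) can be sketched as follows. This is a minimal illustration with hypothetical stand-in types, not the actual op-supernode (Go) handler:

```rust
use std::collections::BTreeMap;

// Illustrative per-chain lookup result; stand-ins for the real block data.
#[derive(Clone, Debug, PartialEq)]
pub struct ChainBlocks {
    pub verified: Option<u64>,
    pub optimistic: Option<u64>,
}

// Mirrors the described behavior: optimistic data is always collected,
// chains with neither kind of data are omitted from the map, and the
// aggregate `data` is None whenever any chain lacks verified data.
pub fn collect(
    chains: &BTreeMap<u32, ChainBlocks>,
) -> (BTreeMap<u32, ChainBlocks>, Option<Vec<u64>>) {
    let mut out = BTreeMap::new();
    let mut all_verified = true;
    for (id, cb) in chains {
        if cb.verified.is_none() {
            all_verified = false;
        }
        if cb.verified.is_some() || cb.optimistic.is_some() {
            out.insert(*id, cb.clone());
        }
    }
    let data = if all_verified {
        Some(chains.values().filter_map(|c| c.verified).collect())
    } else {
        None
    };
    (out, data)
}
```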
…19441)

* fix(supernode): consider verifier L1 progress in superroot CurrentL1

The superroot_atTimestamp RPC was only considering derivation pipeline
CurrentL1 when computing the minimum L1 block. This adds verifier L1
progress to the minimum calculation so callers know that both derivation
and verification have processed up to the reported L1 block.

Closes #18651

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove goimports alignment spacing in mock methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: apply goimports formatting to realign mock method blocks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: goimports formatting in superroot_test.go struct fields

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
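The CurrentL1 fix above reduces to taking the minimum over both progress sources. A one-function sketch (names hypothetical; the real code is Go in op-supernode): the reported L1 block must not exceed any chain's derivation-pipeline progress or any chain's verifier progress.

```rust
// Reported CurrentL1 = minimum over derivation CurrentL1 values and
// verifier L1 progress values, so callers know both components have
// processed up to the reported block.
pub fn min_current_l1(derivation_l1s: &[u64], verifier_l1s: &[u64]) -> Option<u64> {
    derivation_l1s.iter().chain(verifier_l1s).copied().min()
}
```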
* fix(ci): capture vm-compat findings JSON as CI artifact

When the analyze-op-program-client job fails, CircleCI truncates the
Docker build log output, losing the beginning of the findings JSON
array. This makes it impossible to review the full findings.

Instead of relying on stdout, the analysis now writes findings to a file
that gets extracted from the Docker image and stored as a CI artifact.

Changes:
- run-static-analysis.sh: write findings to $FINDINGS_OUTPUT_PATH if set
- Dockerfile.vmcompat: capture analysis exit code instead of failing the
  build, so findings can be extracted from the image
- Makefile run-vm-compat: tag image, extract findings via docker cp,
  then check exit code and fail appropriately
- CircleCI: add store_artifacts step (when: always) for findings JSON

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(op-program): use docker export stage instead of docker cp

Use a FROM scratch export stage with --output to write findings files
directly to the host, matching the pattern in Dockerfile.repro. This
replaces the docker create/cp/rm dance with a simpler approach.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(op-program): move exit code check into Dockerfile

Add a `check` stage that fails the build if analysis had a non-zero
exit code. The Makefile now runs two cached builds: `--target export`
to extract findings, then `--target check` to fail on errors. This
removes all the shell exit-code parsing from the Makefile.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(op-program): single docker invocation with check stage

Chain export → check → export so a single docker build --output
handles both: the check stage fails if analysis found issues, and
the export stage (depending on check) only writes findings on success.
Restores the commented-out go1.25 build line.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(op-program): always export findings, check exit code after

Export both findings and exit code via the scratch export stage, then
check the exit code in the Makefile after docker build completes.
This ensures findings are always available as CI artifacts even when
analysis fails.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(op-program): add commented exit check for go1.25 vm-compat build

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ci): use FINDINGS_OUTPUT_PATH env var instead of hardcoded path

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ci): revert COPY to hardcoded path — ENV not visible across stages

$FINDINGS_OUTPUT_PATH is set in the builder stage but the scratch export
stage doesn't inherit it. COPY expanded to an empty source, copying the
entire builder filesystem and hitting permission errors on Go module
cache directories.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The kona-build-release job takes ~9 minutes even when no Rust files
changed, because cargo rebuilds from scratch on every fresh checkout
(mtime-based fingerprinting defeats the target cache).

Fix: use CircleCI path-filtering to detect rust/ changes. On feature
branches where rust/ is untouched, the build step checks for cached
binaries in the restored target cache and exits early if found.

Safety:
- develop/main always build fresh (never skip)
- If cached binaries are missing, falls through to cargo build
- All other rust-build-binary invocations default to building (opt-in)
- Flake-shake workflow is unaffected (doesn't pass the parameter)

Co-authored-by: smartcontracts <smartcontracts@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…19467)

The `ls | head -20` pipe in the early-exit path fails with exit code
141 (SIGPIPE) when ls output exceeds 20 lines, because CircleCI runs
bash with `-eo pipefail`. Add `|| true` to suppress the broken pipe.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(ci): add required gate jobs for Rust CI and E2E workflows

Always include rust-ci.yml and rust-e2e.yml in the continuation pipeline
by changing the path-filtering mapping from conditional inclusion to
always-include. Gate individual job execution behind a new
c-rust_changes_detected boolean parameter, set to true only when
(rust|\.circleci)/.* paths change.

Add required-rust-ci and required-rust-e2e gate jobs that always run
and fan-in on required Rust jobs. When no Rust changes are detected,
the required jobs are skipped and CircleCI treats the dependencies as
satisfied, so the gate jobs pass immediately. This enables GitHub
required status checks for Rust CI.

Also gate kona-publish-prestates with c-rust_changes_detected to
preserve current behavior (only publish on develop with Rust changes).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ci): use workflow-level gating instead of unsupported job-level when

CircleCI does not support `when` as a key on job references within
a workflow (error: "Unexpected argument(s): when"). Move the gating
from job-level to workflow-level by splitting each workflow into two
mutually exclusive workflows:

- rust-ci: runs when c-rust_changes_detected=true OR dispatch (real jobs + gate)
- rust-ci-gate-skip: runs when webhook AND NOT c-rust_changes_detected (gate only)

Same pattern for rust-e2e-ci / rust-e2e-gate-skip.

The gate job (required-rust-ci / required-rust-e2e) runs in exactly
one workflow per pipeline, ensuring GitHub always sees the status check.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: temporarily remove .circleci from rust change detection path

Remove .circleci from the path filter so only rust/ changes trigger
rust CI. This lets us verify the gate-skip workflows work correctly
on this PR (since .circleci/ files are in the diff vs develop).

This commit will be reverted after verification.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "test: temporarily remove .circleci from rust change detection path"

This reverts commit 9512bf7.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* test: add coverage for CANNON_KONA respected game type paths

Add tests verifying that the existing override key
(overrides.cfg.startingRespectedGameType) can upgrade a chain's
respected game type to CANNON_KONA. Also add deploy-path tests for
PERMISSIONED_CANNON, CANNON_KONA, and invalid game types.

* chore: format

* fix: hack for forklive being out of date

* fix: add assert for game type change

* feat: add feature flag enforcement

* Update OPContractsManagerV2.t.sol

Co-authored-by: graphite-app[bot] <96075541+graphite-app[bot]@users.noreply.github.com>

* chore: bump semver

* fix: skip CANNON_KONA revert test when feature is enabled

---------

Co-authored-by: graphite-app[bot] <96075541+graphite-app[bot]@users.noreply.github.com>
* chore: Remove remnants of kona-supervisor

* chore: Add kona-node back to justfile

---------

Co-authored-by: wwared <541936+wwared@users.noreply.github.com>
`variant.rs` defined `EthereumDataSourceVariant` and implemented
`AsyncIterator` for it, but `AsyncIterator` is not defined anywhere in
the kona codebase and the file was never included in `sources/mod.rs`.
The type was therefore completely unreachable and unusable at runtime.

`EthereumDataSource` in `ethereum.rs` already covers both the calldata
and blob cases through the `DataAvailabilityProvider` trait, making
`EthereumDataSourceVariant` redundant.

Fixes #19340

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Keep tooling current with upstream releases.
#19355)

The `load_calldata` match on `TxEnvelope` used a catch-all `_ => return None`
that silently dropped EIP-4844 (type 3) transactions. The derivation spec
(derivation.md:504) and op-node's `isValidBatchTx` both explicitly accept
type-3 transactions, so kona must do the same to avoid divergent L2 state
in the pre-Ecotone window.

Add a `TxEnvelope::Eip4844` match arm that extracts `to` and `input` like
the other accepted transaction types, and update the corresponding test
from asserting the tx is ignored to asserting its calldata is included.

Closes #19352

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
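The shape of the fix can be sketched with a hypothetical, simplified `TxEnvelope` enum (the real type is alloy's): the type-3 variant joins the accepted arms instead of falling into the catch-all that returned `None`.

```rust
// Hypothetical, simplified transaction envelope — not the alloy type.
pub enum TxEnvelope {
    Legacy { to: [u8; 20], input: Vec<u8> },
    Eip1559 { to: [u8; 20], input: Vec<u8> },
    Eip4844 { to: [u8; 20], input: Vec<u8> },
    Deposit,
}

// Returns the calldata if the tx targets the batch inbox address.
pub fn load_calldata(tx: &TxEnvelope, batch_inbox: &[u8; 20]) -> Option<Vec<u8>> {
    let (to, input) = match tx {
        TxEnvelope::Legacy { to, input }
        | TxEnvelope::Eip1559 { to, input }
        // The fix: accept type-3 (EIP-4844) transactions too; the old
        // catch-all dropped them along with genuinely unsupported types.
        | TxEnvelope::Eip4844 { to, input } => (to, input),
        _ => return None,
    };
    (to == batch_inbox).then(|| input.clone())
}
```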
… panics on truncated input (#19361)

* fix(kona/protocol): add bounds checks in span batch decode to prevent panics on truncated input

Multiple span batch decode functions panic on truncated input instead of
returning an error. This adds explicit length checks before each unsafe
slice operation:

- prefix.rs: decode_parent_check and decode_l1_origin_check now check
  r.len() >= 20 before split_at(20)
- transactions.rs: decode_tx_sigs now checks r.len() >= 64 before
  indexing r[..32] and r[32..64]
- transactions.rs: decode_tx_tos now checks r.len() >= 20 before
  indexing r[..20]

On short input, each function returns SpanBatchError::Decoding(...)
instead of panicking, allowing the batch to be dropped gracefully —
consistent with Go op-node's io.ReadFull behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: fix rustfmt formatting in span batch decode

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: fix clippy field_reassign_with_default in span batch tests

Use struct initialization syntax instead of Default::default()
followed by field reassignment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: fix rustfmt formatting for struct initialization

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
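The bounds-check pattern is the same in each function: verify the remaining length before any fixed-width slice operation so truncated input becomes an error rather than a panic. A minimal sketch of the `decode_parent_check` case, with a simplified error type and signature (the kona originals differ):

```rust
#[derive(Debug, PartialEq)]
pub enum SpanBatchError {
    Decoding(&'static str),
}

// Reads a fixed 20-byte field, returning the value and the remaining
// input. On short input this errors instead of letting split_at panic.
pub fn decode_parent_check(r: &[u8]) -> Result<([u8; 20], &[u8]), SpanBatchError> {
    if r.len() < 20 {
        return Err(SpanBatchError::Decoding("parent check: truncated input"));
    }
    let (head, rest) = r.split_at(20);
    let mut out = [0u8; 20];
    out.copy_from_slice(head);
    Ok((out, rest))
}
```

On short input the batch can then be dropped gracefully, matching the io.ReadFull behavior noted above.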
…alidator transition (#19360)

* fix(kona/derive): copy origin field during Holocene BatchQueue→BatchValidator transition (BQ-12)

On Holocene activation BatchProvider::attempt_update() transitions from
BatchQueue to BatchValidator. It was copying l1_blocks but not the origin
field, leaving BatchValidator.origin as None.

Go's BatchMux.TransformHolocene() copies both fields (batch_mux.go:67-68):
  bs.l1Blocks = slices.Clone(bp.l1Blocks)
  bs.origin = bp.origin

Without the origin copy, the first update_origins() call after transition
always enters the self.origin != self.prev.origin() branch (None != Some(…)).
This causes two failure modes:
1. Normal case: current L1 block is pushed onto l1_blocks again, creating a
   duplicate entry that corrupts the two-slot epoch window.
2. Lagging case: l1_blocks.clear() discards all transferred history, then
   next_batch() returns MissingOrigin.crit(), halting derivation.

Fix: copy batch_queue.origin into bv.origin alongside l1_blocks, matching
the Go reference implementation.

Closes #19356

* fix: apply rustfmt formatting fix

Wrap long comment line in batch_provider.rs to satisfy nightly rustfmt check.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: rename test to reflect expected behavior

The test verifies that origin IS transferred during Holocene transition,
so rename from _not_transferred to _transferred.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove dangling reference to random issue ID.

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
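The transition fix above amounts to copying one extra field. A sketch with simplified stand-in structs (the real kona types carry much more state):

```rust
#[derive(Clone, Debug, PartialEq)]
pub struct BlockInfo {
    pub number: u64,
}

pub struct BatchQueue {
    pub l1_blocks: Vec<BlockInfo>,
    pub origin: Option<BlockInfo>,
}

pub struct BatchValidator {
    pub l1_blocks: Vec<BlockInfo>,
    pub origin: Option<BlockInfo>,
}

// On Holocene activation, copy *both* fields, matching Go's
// BatchMux.TransformHolocene.
pub fn transform_holocene(bq: &BatchQueue) -> BatchValidator {
    BatchValidator {
        l1_blocks: bq.l1_blocks.clone(),
        // The bug: this field was left as None, so the first
        // update_origins() call saw a spurious origin change.
        origin: bq.origin.clone(),
    }
}
```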
… in BlobData::fill (#19362)

* fix(kona/derive): return Reset instead of Critical on blob under-fill in BlobData::fill

When the blob provider returns fewer blobs than there are blob-hash placeholders
(under-fill), BlobData::fill previously returned BlobDecodingError::InvalidLength
which mapped to PipelineErrorKind::Critical — permanently terminating the pipeline
and requiring operator restart.

This is wrong. Under-fill is a transient condition:
- The provider may not have all blobs cached/synced yet for a newly confirmed block
- The L1 block may have been reorged out, causing the provider to return no blobs

In both cases the correct response is to reset the pipeline and retry from the
last safe L1 block, exactly as op-node does: fillBlobPointers wraps the same
error as NewResetError at blob_data_source.go:110.

Changes:
- Add ResetError::BlobsUnderFill(usize, usize) to pipeline.rs
- Add BlobProviderError::NotEnoughBlobs(usize, usize) to sources.rs, mapped to
  ResetError::BlobsUnderFill.reset() in From<BlobProviderError> for PipelineErrorKind
- Change BlobData::fill return type from Result<bool, BlobDecodingError> to
  Result<bool, BlobProviderError>, returning NotEnoughBlobs on under-fill
- Update blobs.rs caller to use the simplified .into() conversion
- Update tests to assert under-fill produces a Reset-level error

Fixes #19359

* fix(kona/derive): remove unused BlobProviderError import

The refactor to use Into::<PipelineErrorKind>::into made the direct
BlobProviderError import unnecessary, triggering a clippy warning.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: apply cargo fmt to blobs.rs import ordering

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(kona/derive): address review feedback on blob under-fill PR

- Wrap BlobProviderError in ResetError::BlobsUnderFill instead of
  duplicating fields
- Use named fields for BlobProviderError::NotEnoughBlobs
- Remove unnecessary map_err in BlobSource fill loop
- Add Clone to BlobDecodingError and BlobProviderError

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
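The error-severity change can be sketched with simplified stand-ins for kona's error types: an under-filled blob response converts to a Reset-level pipeline error, never a Critical one.

```rust
#[derive(Debug, PartialEq)]
pub enum PipelineErrorKind {
    Reset(ResetError),
    Critical,
}

#[derive(Debug, PartialEq)]
pub enum ResetError {
    BlobsUnderFill { expected: usize, got: usize },
}

#[derive(Debug, PartialEq)]
pub enum BlobProviderError {
    NotEnoughBlobs { expected: usize, got: usize },
}

// Under-fill maps to Reset so the pipeline retries from the last safe
// L1 block instead of halting permanently.
impl From<BlobProviderError> for PipelineErrorKind {
    fn from(e: BlobProviderError) -> Self {
        match e {
            BlobProviderError::NotEnoughBlobs { expected, got } => {
                PipelineErrorKind::Reset(ResetError::BlobsUnderFill { expected, got })
            }
        }
    }
}

// fill errors when fewer blobs arrive than there are placeholders.
pub fn fill(placeholders: usize, blobs_returned: usize) -> Result<(), BlobProviderError> {
    if blobs_returned < placeholders {
        return Err(BlobProviderError::NotEnoughBlobs {
            expected: placeholders,
            got: blobs_returned,
        });
    }
    Ok(())
}
```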
…gs (#19462)

CircleCI forwards all explicitly-passed pipeline parameters to
continuation configs during dynamic config (setup: true). When
parameters are only declared in the setup config (config.yml) but not
in the continuation configs, manually triggered pipelines fail with
"Unexpected argument(s)".

This was introduced by #19142 which renamed continuation parameters
with a c- prefix to fix "Conflicting pipeline parameters" errors.
The c- prefix solved the conflict but broke manual dispatch because
the original parameter names are still forwarded to continuation.

Add all setup config parameters as passthrough declarations in each
continuation config (main.yml, rust-ci.yml, rust-e2e.yml). These are
not referenced by any job — the c- prefixed versions remain in use.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* op-challenger: Replace supervisor with supernode in vm runner

Adds a new supernode RPC endpoint `superroot_syncStatus` which returns
the aggregate values of all individual op-node's current `SyncStatus`es.

The new method is very similar to `superroot_atTimestamp` but this keeps
them separate for now, though they could be combined in the future.

* op-supernode: Move `superroot_syncStatus` to `supernode_syncStatus`

* op-supernode: Use the minimum value for CurrentL1 in syncStatus

* op-supernode: Consider `chain.VerifierCurrentL1s` in `supernode_syncStatus`

* op-supernode: Deduplicate common code between `supernode_syncStatus` and `superroot_atTimestamp`

* op-supernode: Address review comments

- Add `CurrentLocalSafeTimestamp` to `superroot_atTimestamp` result
- Make `CurrentSafeTimestamp` be derived from the safe heads instead of local-safe for `superroot_atTimestamp`
- Remove unnecessary `Result` type in helper
- Build `chainIDs` in the loop inside `Aggregate`
- Remove superfluous `else if` case
- Update corresponding tests

---------

Co-authored-by: wwared <541936+wwared@users.noreply.github.com>
Co-authored-by: wwared <541936+wwared@users.noreply.github.com>
…#19358)

* fix(kona/derive): make syscfg update failure non-fatal in L1Traversal

When `update_with_receipts` fails to apply system config changes from L1
receipts, both `PollingTraversal::advance_origin` and
`IndexedTraversal::provide_next_block` were returning a Critical pipeline
error, permanently halting the pipeline.

op-node's reference implementation (`l1_traversal.go:78-82`) treats this
failure as non-fatal with the comment: "failure to apply is just
informational, so we just log the error and continue."

This commit aligns kona's behaviour with op-node: on syscfg update error,
log a warning and continue advancing the origin rather than halting.

Fixes #19353

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: apply rustfmt to indexed.rs test line added in this fix

The block2 struct literal added in this fix exceeds the 100-char
line width limit. Wrap it to comply with rustfmt requirements.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
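The behavioral change above can be sketched with hypothetical types: a failed system-config update is logged and skipped, and origin traversal continues, rather than surfacing a Critical pipeline error.

```rust
pub struct SysConfig {
    pub value: u64,
}

// Returns whether the origin advanced; a failed update never halts it.
pub fn advance_origin(cfg: &mut SysConfig, update: Result<u64, &'static str>) -> bool {
    match update {
        Ok(v) => cfg.value = v,
        Err(e) => {
            // op-node: "failure to apply is just informational" — warn
            // and continue instead of returning a Critical error.
            eprintln!("warn: syscfg update failed: {e}");
        }
    }
    true // the origin advances regardless
}
```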
…et constructors (#19452)

* refactor(op-devstack): simplify acceptance testing around direct preset constructors

fix acceptance and kona e2e test harness regressions

fix same-timestamp cycle test and kona shared runtime wrappers

address review feedback and restore devstack test coverage

fix(ci): stabilize rust e2e websocket tests

* refactor(op-devstack): validate preset option support

* fix(ci): reduce rust subprocess log volume
* fix(kona/derive): add over-fill check in BlobSource::load_blobs

After the blob-pointer fill loop completes, add a post-loop check:
if `blob_index < blobs.len()` the provider returned more blobs than
were requested. Return `ResetError::BlobsOverFill` (→
PipelineErrorKind::Reset) rather than silently discarding the extras.

This mirrors op-node's `fillBlobPointers` check at
blob_data_source.go:162-163 which returns
`fmt.Errorf("got too many blobs")` wrapped as `NewResetError`.

Over-fill can occur with buggy blob providers (e.g. third-party RPC
services that ignore the requested hash list) or in rare L1 reorg
scenarios where the blob set shifts between hash collection and fetch.

Changes:
- Add `ResetError::BlobsOverFill(usize, usize)` variant to
  `pipeline.rs` (symmetric with the existing `BlobsUnderFill` variant).
- Import `ResetError` in `blobs.rs` and add the post-loop guard.
- Add `should_return_extra_blob` flag to `TestBlobProvider` for testing.
- Add `test_load_blobs_overfill_triggers_reset` regression test.

Fixes #19363

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore: use function names instead of line numbers in Go references

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(kona/derive): use named fields for BlobsOverFill variant

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(kona/derive): rename blob_index to filled_blobs in BlobSource (#19480)

Rename the `blob_index` variable in `load_blobs` to `filled_blobs`
for clarity, as the variable tracks the number of blob placeholders
that were filled rather than serving as a traditional index.

Addresses review feedback from optimism#19364.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* chore: remove references to Go implementation in comments

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(kona/derive): remove useless BlobProviderError conversion in map_err

The `From<BlobProviderError> for PipelineErrorKind` impl already exists,
so `?` handles the conversion automatically. The explicit `map_err` was
redundant.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
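The post-loop guard reduces to one comparison. A sketch with simplified types (the real code operates on blob lists, not counts): leftover blobs beyond the filled placeholders trigger a Reset-level error instead of being silently discarded.

```rust
#[derive(Debug, PartialEq)]
pub enum ResetError {
    BlobsOverFill { filled: usize, returned: usize },
}

// After the fill loop: if the provider returned more blobs than there
// were placeholders to fill, signal over-fill as a Reset-level error.
pub fn check_fill(filled_blobs: usize, blobs_returned: usize) -> Result<(), ResetError> {
    if filled_blobs < blobs_returned {
        return Err(ResetError::BlobsOverFill {
            filled: filled_blobs,
            returned: blobs_returned,
        });
    }
    Ok(())
}
```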
* chore(linter): migrate Makefile to justfile

Migrate linter build targets from Make to Just. The Makefile now
delegates to just with a deprecation warning, preserving backwards
compatibility for existing make invocations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: pass make variables as env vars in deprecated.mk shim

The deprecated.mk shim forwards all make variable assignments to just
via JUSTFLAGS. This breaks when inherited variables (like GO_TEST_FLAGS
from parent make processes) are not defined in the target justfile,
since just rejects unknown variable overrides.

Fix by passing variables as environment variables instead of just CLI
args. Since all shared justfile variables (GITCOMMIT, GITDATE, VERSION,
GOOS, GOARCH, etc.) use env() for defaults, they still pick up the
forwarded values. Unknown variables are silently ignored.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct .PHONY declaration in deprecated Make shim

The .PHONY declaration was formatted as a recipe (indented under .PHONY:)
rather than as a prerequisite list (.PHONY: targets). This caused Make to
treat targets like `cannon` and `op-program` as file/directory targets,
and since directories with those names exist in the repo, Make would skip
them with "is up to date" instead of running the just delegation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(linter): restore file-dependency tracking in justfile build target

The original Makefile only rebuilt when files under analyzers/ or
.custom-gcl.yml changed. Replicate this with a timestamp check in the
just build recipe.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(linter): extract uptodate check into reusable justfiles/uptodate.sh

Move the file-dependency freshness check from an inline bash script in
linter/justfile into a standalone helper at justfiles/uptodate.sh so other
justfiles can reuse the same pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(justfiles): rebuild when source paths don't exist in uptodate.sh

If a source path is neither a file nor a directory, force a rebuild
instead of silently skipping it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore(justfiles): warn when source path doesn't exist in uptodate.sh

Print a warning to stderr before forcing a rebuild when a source path
doesn't exist — this is likely an error and should be visible.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
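The freshness check described above (the real helper is a shell script, justfiles/uptodate.sh) reduces to a pure timestamp comparison, sketched here with illustrative names: the target is up to date only if it exists and is at least as new as every source.

```rust
// A missing target always forces a rebuild; otherwise the target must
// be no older than any source path's modification time.
pub fn up_to_date(target_mtime: Option<u64>, source_mtimes: &[u64]) -> bool {
    match target_mtime {
        None => false,
        Some(t) => source_mtimes.iter().all(|&s| s <= t),
    }
}
```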
Convert 9 hardcoded tests to fuzz tests for broader coverage:
- mint: fuzz recipient address and amount
- mint revert: fuzz non-owner caller address
- burn/burnFrom: fuzz mint and burn amounts
- transfer/transferFrom: fuzz mint and transfer amounts
- approve: fuzz approval amount
- increaseAllowance/decreaseAllowance: fuzz initial and delta amounts

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* Initial plan

* chore: remove accidental PR.md

Co-authored-by: karlfloersch <706123+karlfloersch@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: karlfloersch <706123+karlfloersch@users.noreply.github.com>
* chore: add fix-rust-fmt Claude Code skill

Adds a skill that fixes Rust formatting CI failures by running
`just fmt-fix` with the correct nightly toolchain via mise.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Apply suggestion from @ajsutton

* Update .claude/skills/fix-rust-fmt/SKILL.md

Co-authored-by: Sebastian Stammler <seb@oplabs.co>

* docs: link /fix-rust-fmt skill from kona CLAUDE.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: reference fix-rust-fmt skill in rust-dev.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Sebastian Stammler <seb@oplabs.co>
Bump all reth dependencies from v1.11.2 to v1.11.3 in the rust workspace,
including op-reth crate versions and the Cargo.lock.
