MiniChain v0 prototype completion #48
arunabha003 wants to merge 52 commits into StabilityNexus:main from
Conversation
Walkthrough
Introduces MiniChain v0, a complete minimal blockchain system with Proof-of-Work consensus, transaction processing, persistent storage, TCP-based peer networking with gossip protocols, a transaction mempool, and a comprehensive CLI for node operation and wallet management.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 23
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@doc/testing.md`:
- Line 22: The section heading "## 3) Single-Device End-to-End " contains a
trailing space; remove the trailing whitespace so the heading reads exactly "##
3) Single-Device End-to-End" (update the heading text in the doc/testing.md
content to remove the extra space).
In `@src/minichain/genesis.py`:
- Around line 56-74: apply_genesis_block currently only checks structural
constraints but does not verify that the block header fields match the provided
GenesisConfig; add explicit checks in apply_genesis_block to compare
block.header.version, block.header.timestamp, and block.header.difficulty_target
against config.version, config.timestamp, and config.difficulty_target and raise
a ValueError with a clear message if any mismatch occurs (do this after
config.validate() and the existing header checks so the function rejects a
genesis block whose header fields don't match the config).
- Around line 29-38: The validate method in GenesisConfig (def validate) only
checks difficulty_target > 0; add an upper-bound check against the consensus
maximum so genesis targets that exceed consensus are rejected. Update validate
to either call the existing is_valid_pow(difficulty_target) or compare
difficulty_target <= CONSENSUS_MAX_DIFFICULTY (or the project’s named max target
constant) and raise ValueError("Genesis difficulty_target exceeds consensus
maximum") when it’s too large; keep the other checks (timestamp, addresses,
balances) unchanged.
In `@src/minichain/mempool.py`:
- Around line 184-190: The eviction loop currently does an O(n) min() scan over
self._entries_by_id for each removal (while len(self._entries_by_id) >
self.max_size) causing O(k·n) behavior; replace this by maintaining a min-heap
(e.g., heapq) keyed on (fee, received_at) for fast eviction: update the heap
whenever entries are inserted/updated/removed and pop from the heap to get the
lowest-priority entry instead of calling min(); ensure the heap stays in sync
with the dict (self._entries_by_id) and continue to call _remove_entry(entry)
for actual removal.
- Around line 1-11: Ruff flagged the use of typing.Iterable as deprecated;
update the import in the module so Iterable is imported from collections.abc
instead of typing (modify the import line that currently reads "from typing
import Iterable"), keeping all other imports (e.g., Account, State, Transaction)
unchanged so symbols used in the mempool module continue to resolve.
- Around line 118-126: The loop iterates self._sender_pools while calling
_recompute_sender_pool which can mutate _sender_pools via _remove_entry, causing
a RuntimeError; fix by iterating over a snapshot of the mapping (e.g., iterate
over list(self._sender_pools.items()) or list(self._sender_pools.keys())) so
mutations to _sender_pools during _recompute_sender_pool/_remove_entry won't
affect the iteration and then build sender_ready from that snapshot.
- Around line 146-165: In remove_confirmed_transactions, the loop that calls
_recompute_sender_pool for each sender in touched_senders is redundant because
the subsequent loop already recomputes all sender pools; remove the first loop
that iterates over touched_senders (the block referencing touched_senders and
calling _recompute_sender_pool) and keep the existing loop over
self._sender_pools to avoid duplicate recomputation while preserving
revalidation behavior.
In `@src/minichain/network.py`:
- Around line 621-653: The loop in _handle_sync_blocks calls the synchronous
_sync_block_applier for each block which can block the event loop for large
batches; change the handling so block application is offloaded or yields: either
make _sync_block_applier an async Callable and await it inside
_handle_sync_blocks, or call the sync function via loop.run_in_executor for each
block and await the future, and/or periodically yield (await asyncio.sleep(0))
between iterations; ensure you still discard peer from _sync_inflight and call
_spawn(self._request_missing_blocks(peer_id)) as before when application fails
or when done.
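The offloading pattern described above can be sketched like this. `apply_block` here is a hypothetical stand-in for the synchronous `_sync_block_applier`; the point is that each CPU-bound call runs in the default executor and the coroutine yields between blocks, so heartbeats and other peers keep making progress during a large sync batch.

```python
import asyncio
import time


def apply_block(block: int) -> bool:
    # Stand-in for the synchronous block applier (hypothetical work).
    time.sleep(0.01)
    return True


async def handle_sync_blocks(blocks: list[int]) -> int:
    """Apply CPU-bound blocks without starving the event loop: each call is
    offloaded to the default executor, and control yields between blocks."""
    loop = asyncio.get_running_loop()
    applied = 0
    for block in blocks:
        ok = await loop.run_in_executor(None, apply_block, block)
        if not ok:
            break  # the real code would re-request missing blocks here
        applied += 1
        await asyncio.sleep(0)  # give other tasks a chance to run
    return applied


print(asyncio.run(handle_sync_blocks(list(range(5)))))  # 5
```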
- Around line 795-801: The _reconnect_loop currently calls connect_to_peer
sequentially for every peer returned by _reconnect_candidates which can stall
for N × connect_timeout_seconds; change it to attempt reconnections in bounded
batches (e.g., slice the iterator to a max_concurrent_attempts per cycle) and
run those attempts concurrently with asyncio.gather (or create tasks and await
asyncio.wait) so that at most max_concurrent_attempts connect_to_peer calls run
in parallel each cycle; keep the sleep using config.reconnect_interval_seconds
and ensure you still check self._running between batches and when awaiting the
gather so the loop can exit promptly.
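A minimal sketch of the bounded-batch reconnection idea, with a hypothetical `connect_to_peer` stand-in: each cycle dials at most `max_concurrent_attempts` peers concurrently via `asyncio.gather`, so a cycle costs roughly `ceil(n / max_concurrent_attempts)` connect timeouts instead of `n` sequential ones.

```python
import asyncio


async def connect_to_peer(peer: str) -> None:
    # Stand-in for the real dial (hypothetical); may raise on failure.
    await asyncio.sleep(0.01)


async def reconnect_cycle(candidates: list[str], max_concurrent_attempts: int = 4) -> int:
    """Attempt reconnections in bounded concurrent batches."""
    attempted = 0
    for start in range(0, len(candidates), max_concurrent_attempts):
        batch = candidates[start : start + max_concurrent_attempts]
        # return_exceptions=True keeps one failed dial from aborting the batch.
        await asyncio.gather(
            *(connect_to_peer(p) for p in batch), return_exceptions=True
        )
        attempted += len(batch)
        # The real loop would check self._running here before the next batch.
    return attempted


print(asyncio.run(reconnect_cycle([f"peer{i}" for i in range(10)], 4)))  # 10
```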
- Line 14: The import line uses Callable and Coroutine from typing which is
deprecated; update the import to keep Any from typing and import Callable and
Coroutine from collections.abc instead (replace "from typing import Any,
Callable, Coroutine" with "from typing import Any" and "from collections.abc
import Callable, Coroutine") so references to Callable and Coroutine in this
module (e.g., any function/type annotations that use Callable or Coroutine)
continue to work without the deprecated typing import.
- Around line 789-793: The code only inserts new entries into self._known_peers
but never refreshes the last_seen timestamp for already-known peers; update the
existing peer's last_seen (e.g., via self._known_peers[node_id]['last_seen'] =
info['last_seen'] or assign a fresh timestamp) when existing is not None so
re-discovery refreshes freshness, keeping the subsequent
self._spawn(self.connect_to_peer(peer, discovered_via="mdns")) logic unchanged.
- Line 19: The module-level set _LOCAL_DISCOVERY_REGISTRY currently holds strong
references to MiniChainNetwork instances; change it to a weakref.WeakSet to
avoid preventing GC if stop() is never called: import weakref, replace the
initialization of _LOCAL_DISCOVERY_REGISTRY with weakref.WeakSet(), and keep
existing add/remove usage (e.g., where MiniChainNetwork instances are added or
removed) unchanged so instances can be garbage-collected automatically when no
other strong refs exist.
- Around line 442-455: The peer reader loop in _peer_reader_loop currently
swallows all exceptions; change the except block to explicitly handle
expected/disconnection cases (e.g., asyncio.CancelledError and protocol errors
like NetworkError raised during _read_message or _handle_peer_message) while
logging any unexpected exceptions with full details (use self._logger.exception
or similar) before proceeding to finally call _close_connection(peer_id); ensure
you still break/return on normal EOF (message is None) and do not re-raise
expected disconnect errors so the connection is closed cleanly.
- Around line 829-836: The _is_self_peer method currently does exact-string
compares for peer.host vs listen_host/advertise_host; update it to
normalize/resolve hosts before comparison: use the ipaddress module to try
ipaddress.ip_address(peer.host) and
ipaddress.ip_address(self.listen_host/self.advertise_host) and treat addresses
equal if their packed/numeric form matches or if both are loopback (is_loopback)
so "localhost", "127.0.0.1", "::1" and "0.0.0.0" wildcard/listen semantics are
handled; fall back to the original string compare if ip parsing fails. Apply
this logic inside _is_self_peer (and reuse in any helper if present) comparing
peer.port to listen_port as before.
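The normalization could look roughly like this sketch (function name and fallback policy are illustrative, not the module's actual helper): parse both hosts with `ipaddress`, resolving names such as "localhost" first, treat any two loopback addresses as equal, and fall back to the original string compare when parsing fails.

```python
import ipaddress
import socket


def hosts_match(peer_host: str, local_host: str) -> bool:
    """Compare hosts numerically so textual variants of the same address
    (e.g. "127.0.0.1" vs "::1" loopback, or a resolved "localhost") match."""

    def to_ip(host: str):
        try:
            return ipaddress.ip_address(host)
        except ValueError:
            try:
                # Resolve names such as "localhost" to an address first.
                return ipaddress.ip_address(socket.gethostbyname(host))
            except (OSError, ValueError):
                return None

    a, b = to_ip(peer_host), to_ip(local_host)
    if a is None or b is None:
        return peer_host == local_host  # fall back to plain string compare
    if a.is_loopback and b.is_loopback:
        return True  # 127.0.0.1, ::1 and resolved localhost all count as "self"
    return a == b


print(hosts_match("127.0.0.1", "::1"))      # True (both loopback)
print(hosts_match("10.0.0.1", "10.0.0.2"))  # False
```

A fuller version would also treat the `0.0.0.0`/`::` wildcard (`is_unspecified`) as matching any local address, per the listen-socket semantics mentioned above.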
- Around line 1045-1066: _read_message currently applies
config.connect_timeout_seconds to every readline call and allows unbounded
lines, causing idle peers to be disconnected and risking memory exhaustion;
change _read_message to accept an optional timeout parameter (defaulting to
self.config.connect_timeout_seconds for backward compatibility) and pass that
timeout into asyncio.wait_for, and also enforce a max line size by using
reader.readline(max_bytes) or equivalent to bound the read (choose a sensible
limit). Update callers such as _peer_reader_loop to call
_read_message(connection.reader, eof_ok=True, timeout=300) or timeout=None for
long-lived streams so regular inter-message gaps don't trigger disconnects.
In `@src/minichain/node.py`:
- Around line 92-112: The start method may leak the SQLite connection if
SQLiteStorage(db_path) succeeds but _initialize_chain_manager(storage) raises;
wrap initialization so that if _initialize_chain_manager raises you close the
local storage object (storage.close() or storage.shutdown()) before re-raising,
and only assign self._storage = storage (and self._chain_manager =
chain_manager) after successful initialization; use a try/except or try/finally
around the call to _initialize_chain_manager to ensure storage is closed on
error while leaving successful paths unchanged.
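The close-on-failure shape being requested is the standard try/except-reraise pattern; the sketch below uses hypothetical stand-ins (`FakeStorage`, `start`) rather than the node's real classes:

```python
class FakeStorage:
    """Stand-in for SQLiteStorage (hypothetical)."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


def start(storage_factory, init_chain):
    """Open storage, then initialize; if initialization raises, close the
    freshly opened storage before re-raising so the handle never leaks."""
    storage = storage_factory()
    try:
        chain_manager = init_chain(storage)
    except Exception:
        storage.close()
        raise
    # Assign to the node only after both steps succeeded.
    return storage, chain_manager


holder = {}


def failing_init(storage):
    holder["storage"] = storage
    raise RuntimeError("corrupt chain metadata")


try:
    start(FakeStorage, failing_init)
except RuntimeError:
    pass
print(holder["storage"].closed)  # True: no leaked connection
```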
- Around line 179-188: _persist_head currently only writes the single tip via
storage.persist_block_state_and_metadata using chain_manager.tip_block,
tip_hash, state, and height, which breaks reorg durability; update _persist_head
to first identify the fork point and persist the full canonical segment from
that fork point (persist each block/state for heights that became canonical and
remove any replaced heights from storage) before writing the final
metadata/state (using the same storage.persist_block_state_and_metadata
contract), and preserve the existing StorageError -> NodeError behavior so
failures still raise NodeError; reference chain_manager to obtain the fork
point, the sequence of canonical blocks, and heights to delete and persist.
In `@src/minichain/serialization.py`:
- Around line 47-56: The function serialize_canonical currently passes
sort_keys=True to json.dumps which reorders keys and breaks the canonical field
order produced by _to_field_map; update serialize_canonical to call json.dumps
with sort_keys=False (preserving the canonical_map key order created from
field_order) so transaction and block header serialization remain deterministic
for consensus.
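The effect of `sort_keys` on canonical ordering is easy to see in isolation (the field names below are illustrative, not the project's actual header schema): `sort_keys=True` silently rewrites the order alphabetically, while `sort_keys=False` preserves dict insertion order, which Python guarantees since 3.7.

```python
import json

# Canonical field order as a _to_field_map-style helper might produce it
# (field names here are illustrative).
canonical_map = {
    "version": 1,
    "previous_hash": "00" * 32,
    "merkle_root": "ab" * 32,
    "timestamp": 1700000000,
}

sorted_form = json.dumps(canonical_map, sort_keys=True, separators=(",", ":"))
canonical_form = json.dumps(canonical_map, sort_keys=False, separators=(",", ":"))

print(canonical_form.startswith('{"version"'))     # True: insertion order kept
print(sorted_form.startswith('{"merkle_root"'))    # True: order was rewritten
print(canonical_form != sorted_form)               # True: different byte strings
```

Since consensus hashes these bytes, the two forms would produce different block hashes, which is why the canonical order must be preserved.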
In `@src/minichain/transaction.py`:
- Around line 42-56: The is_coinbase method currently rejects coinbase
transactions with amount == 0; update the validation in is_coinbase to allow
zero amounts by checking that self.amount is an int and self.amount >= 0
(instead of > 0), so zero-reward chains validated by ChainConfig.validate and
create_coinbase_transaction() will be accepted; keep the other checks
(timestamp, sender == COINBASE_SENDER, nonce == 0, fee == 0, signature == "",
public_key == "") unchanged.
- Around line 73-80: transaction_id currently calls
bytes.fromhex(self.signature) and bytes.fromhex(self.public_key) directly which
will raise on malformed hex; before converting validate the hex strings (e.g.,
check they are not None, are str, and match a hex pattern or try/except
ValueError) and if invalid raise a controlled error (e.g., raise
ValueError("malformed hex in signature") or a domain-specific
InvalidTransactionError) so Merkle computation/block validation doesn't crash
unexpectedly; update transaction_id to validate/parse both self.signature and
self.public_key and only extend digest_input on successful conversion, handling
errors with a clear, controlled exception mentioning transaction_id,
self.signature, and self.public_key.
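The guarded conversion can be sketched as a small helper (name and error wording are illustrative): validate the hex shape first, then convert, so malformed untrusted input surfaces as a controlled `ValueError` instead of an unexpected crash during Merkle computation.

```python
import re

# Even-length lowercase hex, at least one byte.
_HEX_RE = re.compile(r"^(?:[0-9a-f]{2})+$")


def hex_to_bytes(value: str, field: str) -> bytes:
    """Convert a hex field to bytes, raising a controlled ValueError on
    malformed input instead of letting bytes.fromhex() blow up mid-hashing."""
    if not isinstance(value, str) or not _HEX_RE.fullmatch(value):
        raise ValueError(f"malformed hex in {field}: {value!r}")
    return bytes.fromhex(value)


print(hex_to_bytes("deadbeef", "signature"))  # b'\xde\xad\xbe\xef'
try:
    hex_to_bytes("zz", "public_key")
except ValueError as exc:
    print("rejected:", exc)
```

`transaction_id` would call such a helper for both `self.signature` and `self.public_key` before extending `digest_input`.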
In `@tests/test_crypto.py`:
- Around line 40-47: The test binds an unused signing_key causing lint RUF059;
change how you unpack the first generate_key_pair() call in
test_invalid_signature_is_rejected so the unused private key isn't assigned to a
named variable (e.g., use a throwaway underscore when destructuring: _,
verify_key = generate_key_pair()), keeping the rest of the test (wrong_signature
= sign_message(...), verify_signature(...)) unchanged.
In `@tests/test_genesis.py`:
- Around line 60-71: Replace the try/except assertion in the
test_genesis_requires_empty_state test with pytest.raises to follow pytest
conventions: call create_genesis_block and apply_genesis_block (using
GenesisConfig, State, Account as in the diff) inside a with
pytest.raises(ValueError) as excinfo: block and then assert "empty state" in
str(excinfo.value); apply the same change to the other similar test referenced
(the one around lines 74-85) to remove the try/except pattern and use
pytest.raises consistently.
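The target shape of the test looks like this (the raising function is a stand-in for `apply_genesis_block` on a non-empty state; assumes pytest is available, as it is for this test suite):

```python
import pytest


def apply_to_nonempty_state() -> None:
    # Stand-in for apply_genesis_block called with a non-empty State.
    raise ValueError("Genesis can only be applied to an empty state")


def test_genesis_requires_empty_state() -> None:
    with pytest.raises(ValueError) as excinfo:
        apply_to_nonempty_state()
    assert "empty state" in str(excinfo.value)


test_genesis_requires_empty_state()
print("ok")
```

Unlike the try/except/else pattern, `pytest.raises` fails the test automatically when no exception is raised, and `excinfo` gives structured access to the exception for message assertions.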
In `@tests/test_transaction_gossip.py`:
- Around line 1-10: Add the pytest.importorskip guard for PyNaCl at the top of
the test file to match other tests: call pytest.importorskip("nacl") (e.g.,
immediately after the from __future__ import annotations line) before importing
minichain.crypto so the test consistently skips when PyNaCl is unavailable; this
affects the test module that imports derive_address and generate_key_pair as
well as MiniChainNetwork, NetworkConfig, and Transaction.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (42)
.github/workflows/ci.yml
.gitignore
Makefile
doc/implementation.md
doc/testing.md
pyproject.toml
src/minichain/__init__.py
src/minichain/__main__.py
src/minichain/block.py
src/minichain/chain.py
src/minichain/consensus.py
src/minichain/crypto.py
src/minichain/genesis.py
src/minichain/mempool.py
src/minichain/merkle.py
src/minichain/mining.py
src/minichain/network.py
src/minichain/node.py
src/minichain/serialization.py
src/minichain/state.py
src/minichain/storage.py
src/minichain/transaction.py
tests/test_block.py
tests/test_block_gossip.py
tests/test_chain.py
tests/test_chain_sync.py
tests/test_cli.py
tests/test_comprehensive_integration.py
tests/test_consensus.py
tests/test_crypto.py
tests/test_genesis.py
tests/test_mempool.py
tests/test_merkle.py
tests/test_mining.py
tests/test_network.py
tests/test_node.py
tests/test_scaffold.py
tests/test_serialization.py
tests/test_state.py
tests/test_storage.py
tests/test_transaction.py
tests/test_transaction_gossip.py
```
## 3) Single-Device End-to-End 
```
Remove trailing space in section heading (Line 22).
🧹 Fix trailing space
```diff
-## 3) Single-Device End-to-End 
+## 3) Single-Device End-to-End
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 22-22: Trailing spaces
Expected: 0 or 2; Actual: 1
(MD009, no-trailing-spaces)
```python
def validate(self) -> None:
    if self.timestamp < 0:
        raise ValueError("Genesis timestamp must be non-negative")
    if self.difficulty_target <= 0:
        raise ValueError("Genesis difficulty_target must be positive")
    for address, balance in self.initial_balances.items():
        if not _is_lower_hex(address, 40):
            raise ValueError(f"Invalid genesis address: {address}")
        if balance < 0:
            raise ValueError(f"Negative genesis balance for {address}")
```
Add an upper-bound check for difficulty_target.
is_valid_pow rejects targets above the consensus max, but GenesisConfig.validate() only checks > 0. This can allow genesis configs that create headers invalid under consensus rules.
🔧 Proposed fix
```diff
-from dataclasses import dataclass, field
+from dataclasses import dataclass, field
+from minichain.consensus import MAX_TARGET
@@
-    if self.difficulty_target <= 0:
-        raise ValueError("Genesis difficulty_target must be positive")
+    if self.difficulty_target <= 0 or self.difficulty_target > MAX_TARGET:
+        raise ValueError("Genesis difficulty_target must be within consensus bounds")
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 31-31: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 33-33: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 36-36: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 38-38: Avoid specifying long messages outside the exception class
(TRY003)
```python
def apply_genesis_block(state: State, block: Block, config: GenesisConfig) -> None:
    """Apply genesis allocations to an empty state."""
    config.validate()
    if state.accounts:
        raise ValueError("Genesis can only be applied to an empty state")
    if block.header.block_height != 0:
        raise ValueError("Genesis block height must be 0")
    if block.header.previous_hash != GENESIS_PREVIOUS_HASH:
        raise ValueError("Genesis previous_hash must be all zeros")
    if block.transactions:
        raise ValueError("Genesis block must not contain transactions")

    expected_merkle_root = blake2b_digest(b"").hex()
    if block.header.merkle_root != expected_merkle_root:
        raise ValueError("Genesis merkle_root must commit to an empty tx list")

    for address, balance in config.initial_balances.items():
        state.set_account(address, Account(balance=balance, nonce=0))
```
Verify genesis header fields match the config.
apply_genesis_block checks structural constraints but doesn’t assert that the block header’s version, timestamp, and difficulty_target match the provided GenesisConfig. This can silently accept mismatched genesis blocks.
🔧 Proposed fix
```diff
     if block.header.previous_hash != GENESIS_PREVIOUS_HASH:
         raise ValueError("Genesis previous_hash must be all zeros")
+    if block.header.version != config.version:
+        raise ValueError("Genesis version mismatch")
+    if block.header.timestamp != config.timestamp:
+        raise ValueError("Genesis timestamp mismatch")
+    if block.header.difficulty_target != config.difficulty_target:
+        raise ValueError("Genesis difficulty_target mismatch")
     if block.transactions:
         raise ValueError("Genesis block must not contain transactions")
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 60-60: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 62-62: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 64-64: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 66-66: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 70-70: Avoid specifying long messages outside the exception class
(TRY003)
```python
"""Mempool data structures and transaction selection logic."""

from __future__ import annotations

import time
from dataclasses import dataclass, field
from heapq import heappop, heappush
from typing import Iterable

from minichain.state import Account, State
from minichain.transaction import Transaction
```
🧹 Nitpick | 🔵 Trivial
Imports look good; minor modernization note from static analysis.
Ruff UP035 flags that Iterable should be imported from collections.abc rather than typing (deprecated since Python 3.9).
♻️ Suggested import fix
```diff
-from typing import Iterable
+from collections.abc import Iterable
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 8-8: Import from collections.abc instead: Iterable
Import from collections.abc
(UP035)
```python
sender_ready: dict[str, list[_PoolEntry]] = {}
for sender, pool in self._sender_pools.items():
    self._recompute_sender_pool(sender, state)
    ready_entries = sorted(
        (pool.entries[nonce] for nonce in pool.ready_nonces),
        key=lambda entry: entry.transaction.nonce,
    )
    if ready_entries:
        sender_ready[sender] = ready_entries
```
RuntimeError: dictionary changed size during iteration — iterating _sender_pools while _recompute_sender_pool can mutate it.
_recompute_sender_pool may call _remove_entry, which pops the sender from _sender_pools when the last entry is removed (Line 246-247). This mutates the dict while iterating over .items() on Line 119, causing a RuntimeError at runtime.
🐛 Proposed fix: snapshot the dict before iterating
```diff
-for sender, pool in self._sender_pools.items():
+for sender, pool in list(self._sender_pools.items()):
```
```python
def is_coinbase(self) -> bool:
    """Return whether this transaction follows coinbase conventions."""
    if not _is_lower_hex(self.recipient, ADDRESS_HEX_LENGTH):
        return False
    if not isinstance(self.amount, int) or self.amount <= 0:
        return False
    if not isinstance(self.timestamp, int) or self.timestamp < 0:
        return False
    return (
        self.sender == COINBASE_SENDER
        and self.nonce == 0
        and self.fee == 0
        and self.signature == ""
        and self.public_key == ""
    )
```
Allow zero-amount coinbase when block_reward is 0.
ChainConfig.validate() permits block_reward = 0, but is_coinbase() rejects amount == 0, which will make zero-reward networks fail validation and prevent create_coinbase_transaction() from working. Relax this check and rely on block-level accounting to enforce the expected amount.
🔧 Proposed fix
```diff
-    if not isinstance(self.amount, int) or self.amount <= 0:
+    if not isinstance(self.amount, int) or self.amount < 0:
         return False
```
```python
def transaction_id(self) -> bytes:
    """Return a deterministic transaction hash for Merkle commitments."""
    digest_input = bytearray(self.signing_bytes())
    if self.signature:
        digest_input.extend(bytes.fromhex(self.signature))
    if self.public_key:
        digest_input.extend(bytes.fromhex(self.public_key))
    return blake2b_digest(bytes(digest_input))
```
Guard transaction_id() against malformed hex in signature/public_key.
bytes.fromhex() will raise on malformed values, which can bubble up during Merkle computation or block validation and crash processing on untrusted input. Validate before converting and raise a controlled error.
🔒 Proposed fix
```diff
     digest_input = bytearray(self.signing_bytes())
     if self.signature:
+        if not _is_lower_hex(self.signature, SIGNATURE_HEX_LENGTH):
+            raise ValueError("Invalid signature hex")
         digest_input.extend(bytes.fromhex(self.signature))
     if self.public_key:
+        if not _is_lower_hex(self.public_key, PUBLIC_KEY_HEX_LENGTH):
+            raise ValueError("Invalid public_key hex")
         digest_input.extend(bytes.fromhex(self.public_key))
     return blake2b_digest(bytes(digest_input))
```
```python
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
Avoid unused binding to keep lint clean.
signing_key is unused here and triggers ruff RUF059.
🧹 Proposed fix
```diff
-    signing_key, verify_key = generate_key_pair()
+    _signing_key, verify_key = generate_key_pair()
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"
    wrong_signature = sign_message(message, other_signing_key)
    assert not verify_signature(message, wrong_signature, verify_key)

def test_invalid_signature_is_rejected() -> None:
    _signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"
    wrong_signature = sign_message(message, other_signing_key)
    assert not verify_signature(message, wrong_signature, verify_key)
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 41-41: Unpacked variable signing_key is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_crypto.py` around lines 40 - 47, The test binds an unused
signing_key causing lint RUF059; change how you unpack the first
generate_key_pair() call in test_invalid_signature_is_rejected so the unused
private key isn't assigned to a named variable (e.g., use a throwaway underscore
when destructuring: _, verify_key = generate_key_pair()), keeping the rest of
the test (wrong_signature = sign_message(...), verify_signature(...)) unchanged.
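The underscore convention the fix relies on can be shown in isolation; `make_pair` is a hypothetical stand-in for `generate_key_pair`:

```python
def make_pair() -> tuple[str, str]:
    # hypothetical stand-in for a key-pair generator
    return ("private-key", "verify-key")


# A leading underscore marks the unpacked value as intentionally unused,
# which satisfies Ruff's RUF059 check without changing behavior.
_private_key, verify_key = make_pair()
assert verify_key == "verify-key"
```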
def test_genesis_requires_empty_state() -> None:
    config = GenesisConfig(initial_balances={"dd" * 20: 1})
    block = create_genesis_block(config)
    state = State()
    state.set_account("ff" * 20, Account(balance=1, nonce=0))

    try:
        apply_genesis_block(state, block, config)
    except ValueError as exc:
        assert "empty state" in str(exc)
    else:
        raise AssertionError("Expected ValueError for non-empty state")
🧩 Analysis chain
🏁 Script executed:
# Check the actual code in the test file
fd "test_genesis.py" --type f | head -1 | xargs cat -n
Repository: StabilityNexus/MiniChain
Length of output: 3371
🏁 Script executed:
# Check if pytest is used in the project
fd "pyproject.toml\|setup.py\|requirements.*\.txt" --type f | head -5
Length of output: 50
🏁 Script executed:
# Search for pytest imports in test files to confirm it's available
rg "import pytest" tests/ | head -20
Length of output: 627
🏁 Script executed:
# Check for linting configuration
fd "ruff.toml\|pyproject.toml\|setup.cfg\|tox.ini" --type f | head -5 | xargs -I {} cat {}
Length of output: 50
🏁 Script executed:
# Check if there are other similar try-except-else patterns in the test file
rg "try:" tests/test_genesis.py -A 5 -B 1
Length of output: 504
🏁 Script executed:
# Look for ruff configuration in common locations
fd "pyproject.toml" --type f --exec cat {} \;
Length of output: 809
🏁 Script executed:
# Verify these are real ruff rules
web_search PT017 TRY003 ruff linting rules
Length of output: 118
🌐 Web query:
PT017 TRY003 ruff pytest exception testing
💡 Result:
PT017 and TRY003 are Ruff lint rules that commonly show up when writing pytest exception tests and custom exceptions.
PT017 (pytest-assert-in-except)
Meaning: You’re asserting inside an except block (typically after manually catching an exception). Ruff recommends using pytest.raises() instead. (docs.astral.sh)
Bad (triggers PT017):
def test_foo():
    try:
        1 / 0
    except ZeroDivisionError as e:
        assert "division" in str(e)

Good:
import pytest

def test_foo():
    with pytest.raises(ZeroDivisionError) as exc_info:
        1 / 0
    assert "division" in str(exc_info.value)

(Recent pytest versions also support a check= callback form; see Ruff's rule doc.) (docs.astral.sh)
TRY003 (raise-vanilla-args)
Meaning: You’re raising a custom exception with a “long” message built at the raise site; Ruff recommends putting the message formatting inside the exception class so the exception is reusable and consistent. (docs.astral.sh)
Bad (triggers TRY003):
class CantBeNegative(Exception):
    pass

def foo(x: int) -> None:
    if x < 0:
        raise CantBeNegative(f"{x} is negative")

Good:
class CantBeNegative(Exception):
    def __init__(self, number: int) -> None:
        super().__init__(f"{number} is negative")

def foo(x: int) -> None:
    if x < 0:
        raise CantBeNegative(x)

Citations:
- https://docs.astral.sh/ruff/rules/pytest-assert-in-except/
- https://docs.astral.sh/ruff/rules/raise-vanilla-args/
🏁 Script executed:
# Check if there are any ruff config overrides or if PT and TRY rules are mentioned anywhere
rg "PT017|TRY003|flake8-pytest-style|tryceratops" --type toml --type yaml
Length of output: 193
🏁 Script executed:
# Read the ruff.toml file to see full configuration
cat ruff.toml
Length of output: 1883
Use pytest.raises() for exception assertions.
This resolves the PT017 (pytest-assert-in-except) warning and follows pytest conventions.
♻️ Suggested refactor
-from dataclasses import replace
+from dataclasses import replace
+
+import pytest
@@
-    try:
-        apply_genesis_block(state, block, config)
-    except ValueError as exc:
-        assert "empty state" in str(exc)
-    else:
-        raise AssertionError("Expected ValueError for non-empty state")
+    with pytest.raises(ValueError, match="empty state"):
+        apply_genesis_block(state, block, config)
@@
-    try:
-        apply_genesis_block(state, block, config)
-    except ValueError as exc:
-        assert "previous_hash" in str(exc)
-    else:
-        raise AssertionError("Expected ValueError for invalid previous_hash")
+    with pytest.raises(ValueError, match="previous_hash"):
+        apply_genesis_block(state, block, config)

Also applies to: 74-85
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 69-69: Found assertion on exception exc in except block, use pytest.raises() instead
(PT017)
[warning] 71-71: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_genesis.py` around lines 60 - 71, Replace the try/except assertion
in the test_genesis_requires_empty_state test with pytest.raises to follow
pytest conventions: call create_genesis_block and apply_genesis_block (using
GenesisConfig, State, Account as in the diff) inside a with
pytest.raises(ValueError) as excinfo: block and then assert "empty state" in
str(excinfo.value); apply the same change to the other similar test referenced
(the one around lines 74-85) to remove the try/except pattern and use
pytest.raises consistently.
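As a self-contained illustration of the pytest.raises pattern the prompt asks for; the `apply_genesis_block` here is a hypothetical stand-in, not the real MiniChain function:

```python
import pytest  # assumed available, as in the project's test suite


def apply_genesis_block(state: dict) -> None:
    # hypothetical stand-in: the real function rejects a non-empty state
    if state:
        raise ValueError("genesis block requires an empty state")


def test_genesis_requires_empty_state() -> None:
    # pytest.raises asserts the exception type and matches the message,
    # replacing the try/except/else pattern flagged by PT017
    with pytest.raises(ValueError, match="empty state"):
        apply_genesis_block({"ff" * 20: 1})
```

If the code under test stops raising, pytest.raises fails the test on its own, so no explicit `else: raise AssertionError(...)` branch is needed.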
"""Integration tests for transaction gossip propagation."""

from __future__ import annotations

import asyncio

from minichain.crypto import derive_address, generate_key_pair
from minichain.network import MiniChainNetwork, NetworkConfig
from minichain.transaction import Transaction
🧩 Analysis chain
🏁 Script executed:
# Search for dependency declarations
fd -a -t f 'pyproject.toml|requirements.*|setup.cfg|setup.py|poetry.lock' | head -20
Length of output: 101
🏁 Script executed:
# Search for PyNaCl/nacl in dependency files
find . -type f \( -name "pyproject.toml" -o -name "requirements*.txt" -o -name "setup.py" -o -name "setup.cfg" -o -name "poetry.lock" \) -exec grep -l -i "pynacl\|nacl" {} \;
Length of output: 84
🏁 Script executed:
# Check for pytest.importorskip patterns across test files
rg 'pytest\.importorskip|importorskip' tests/ --type py
Length of output: 781
🏁 Script executed:
# Check the generate_key_pair implementation
rg -A 5 'def generate_key_pair' --type py
Length of output: 418
🏁 Script executed:
cat pyproject.toml
Length of output: 809
🏁 Script executed:
# Check if test_transaction_gossip.py has the importorskip guard
rg -A 10 'test_transaction_gossip' tests/ --type py | head -20
Length of output: 848
🏁 Script executed:
# Check the _require_nacl implementation
rg -B 2 -A 3 'def _require_nacl' src/ --type py
Length of output: 423
🏁 Script executed:
# Verify the actual first lines of test_transaction_gossip.py
head -15 tests/test_transaction_gossip.py
Length of output: 530
Add pytest.importorskip guard for consistency with other test modules.
All 14 other test files in the suite use pytest.importorskip("nacl") as a consistent pattern. Although PyNaCl is a required dependency, this test should align with the codebase pattern by adding the guard.
Proposed fix
"""Integration tests for transaction gossip propagation."""
from __future__ import annotations
import asyncio
+import pytest
+
+pytest.importorskip("nacl")
from minichain.crypto import derive_address, generate_key_pair
from minichain.network import MiniChainNetwork, NetworkConfig
from minichain.transaction import Transaction

📝 Committable suggestion
"""Integration tests for transaction gossip propagation."""

from __future__ import annotations

import asyncio

from minichain.crypto import derive_address, generate_key_pair
from minichain.network import MiniChainNetwork, NetworkConfig
from minichain.transaction import Transaction

"""Integration tests for transaction gossip propagation."""

from __future__ import annotations

import asyncio

import pytest

pytest.importorskip("nacl")

from minichain.crypto import derive_address, generate_key_pair
from minichain.network import MiniChainNetwork, NetworkConfig
from minichain.transaction import Transaction
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_transaction_gossip.py` around lines 1 - 10, Add the
pytest.importorskip guard for PyNaCl at the top of the test file to match other
tests: call pytest.importorskip("nacl") (e.g., immediately after the from
__future__ import annotations line) before importing minichain.crypto so the
test consistently skips when PyNaCl is unavailable; this affects the test module
that imports derive_address and generate_key_pair as well as MiniChainNetwork,
NetworkConfig, and Transaction.
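The guard pattern can be exercised standalone; here the stdlib module "json" stands in for "nacl" so the sketch runs even without PyNaCl installed:

```python
import pytest  # assumed available


# pytest.importorskip returns the module when it can be imported and
# skips the calling test otherwise, so later uses of it are safe.
mod = pytest.importorskip("json")

assert mod.dumps([1, 2]) == "[1, 2]"
```

Placing the guard before the first `minichain.crypto` import means the whole module is skipped, rather than erroring at collection time, when the dependency is missing.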
Addressed Issues:
Part of #8
Completed implementation of the v0 protocol, addressing all features mentioned in Issue #8.
Summary
Closed PRs Included
Demo Video 1 (Same Device, Two-Node Simulation)
Shows:
connected_peers visibility

Minichain_v0_prototype.1.2.mp4
Demo Video 2 (Two Different Devices)
Shows:
(height, tip_hash)

minichain_LAN.mp4
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests
Chores