[feature]: Demonstration for testnet blockchain, node connectivity and peer connections across different machines (#44)
Walkthrough
This pull request implements a complete MiniChain testnet node with a Python CLI interface, P2P networking infrastructure, and blockchain synchronization capabilities. A new Node class manages lifecycle operations, network communication, chain validation, and optional mining, while the P2P module is rewritten as a full TCP-based implementation with peer management and message broadcasting.
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI Entry
    participant Node as Node Instance
    participant P2P as P2PNetwork
    participant Peers as Connected Peers
    participant BC as Blockchain
    CLI->>Node: start()
    Node->>P2P: configure(host, port)
    Node->>P2P: connect_to_peer(peer_address)
    activate P2P
    P2P->>Peers: TCP handshake (hello)
    Peers-->>P2P: hello response
    P2P-->>Node: connected
    deactivate P2P
    Node->>Node: _sync_chain()
    Node->>P2P: request_chain(peer)
    P2P->>Peers: get_chain message
    Peers-->>P2P: chain_data
    P2P-->>Node: chain_data
    Node->>BC: validate_chain(chain_data)
    activate BC
    BC-->>Node: validation result
    deactivate BC
    alt Valid chain received
        Node->>BC: replace_chain(chain_data)
        BC->>BC: rebuild state
    end
    Node->>Node: _run() mining loop
    loop Mining & Message Processing
        Node->>P2P: broadcast_transaction(tx)
        P2P->>Peers: tx message
        Node->>Node: _try_mine_block()
        Note over Node: If block mined
        Node->>P2P: broadcast_block(block)
        P2P->>Peers: block message
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 checks passed
Actionable comments posted: 12
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli.py`:
- Around line 78-93: _sync_chain currently uses a hardcoded asyncio.sleep(2)
after calling network.request_chain which can mark sync complete before any peer
responds; replace the sleep with an await on an asyncio.Event (e.g.,
self._chain_received_event) that is set by the code path that processes an
incoming chain (update the chain-receiving handler to call
self._chain_received_event.set()), optionally use asyncio.wait_for to bound
waiting with a timeout, then set self._sync_complete = True and log the height
after the event completes; reference functions/attributes: _sync_chain,
network.request_chain, and the chain-response handler (the method that applies
incoming chain) to set the event.
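The event-based replacement for the hardcoded sleep can be sketched as follows. This is an illustration, not the PR's code; attribute names such as `_chain_received_event` and `_on_chain_received` follow the review's suggestions:

```python
import asyncio

class Node:
    """Sketch of event-based sync; names follow the review's suggestions."""
    def __init__(self):
        self._chain_received_event = asyncio.Event()
        self._sync_complete = False
        self.height = 0

    async def _on_chain_received(self, chain_data):
        # The chain-response handler applies the chain, then sets the event.
        self.height = len(chain_data)
        self._chain_received_event.set()

    async def _sync_chain(self):
        # Instead of asyncio.sleep(2), wait for the event with a bounded timeout.
        try:
            await asyncio.wait_for(self._chain_received_event.wait(), timeout=10)
        except asyncio.TimeoutError:
            pass   # no peer responded; continue with the local chain
        self._sync_complete = True

async def demo():
    node = Node()
    sync = asyncio.create_task(node._sync_chain())
    await node._on_chain_received([{"index": 0}, {"index": 1}])   # peer responds
    await sync
    return node._sync_complete, node.height

print(asyncio.run(demo()))  # (True, 2)
```

With `wait_for`, sync completes as soon as a peer responds, and the timeout preserves the old behavior of eventually giving up.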
- Around line 154-161: The _handle_get_chain method is iterating
self.chain.chain without acquiring the Blockchain lock; replace that iteration
with the thread-safe helper by calling self.chain.to_dict_list() to get
chain_data (preserving the existing await self.network.send_chain(sender,
chain_data) and logger.info usage). Specifically, update the chain data
assignment in _handle_get_chain to use Blockchain.to_dict_list() so access is
protected by the lock.
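A minimal sketch of the lock-protected helper the review names (`to_dict_list`); the class body here is a stand-in, not the actual `Blockchain`:

```python
import threading

class Blockchain:
    """Stand-in showing the lock-protected snapshot accessor."""
    def __init__(self):
        self.lock = threading.Lock()
        self.chain = []

    def to_dict_list(self):
        # Snapshot the chain under the lock so a concurrent miner thread
        # cannot mutate it while a get_chain handler is iterating.
        with self.lock:
            return [dict(block) for block in self.chain]

bc = Blockchain()
bc.chain.append({"index": 0, "hash": "genesis"})
snapshot = bc.to_dict_list()
print(snapshot)  # [{'index': 0, 'hash': 'genesis'}]
```

Because the helper returns copies, the caller can serialize and send the snapshot without holding the lock.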
- Around line 297-309: The signal handler creates an unreferenced task
asyncio.create_task(node.stop()) which can be garbage-collected and may cause
node.stop() to run twice (once from the handler and once in the main finally).
Modify shutdown_handler to store the created task in a module/local variable
(e.g., shutdown_task) so the task is retained, and then in the main
shutdown/finally logic check that shutdown_task: if it exists await it (or await
it with timeout) instead of calling node.stop() again, or guard node.stop() by
checking shutdown_task.done() to avoid duplicate stops; update references to
shutdown_handler, node.stop, and loop.add_signal_handler accordingly.
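The retained-task pattern can be sketched like this (the signal firing is simulated with a direct call; `Node.stop` is a toy counter):

```python
import asyncio

class Node:
    """Toy node; stop_calls counts how many times stop() runs."""
    def __init__(self):
        self.stop_calls = 0
    async def stop(self):
        self.stop_calls += 1

shutdown_task = None  # module-level strong reference: the task cannot be GC'd

def shutdown_handler(node):
    global shutdown_task
    if shutdown_task is None:
        shutdown_task = asyncio.create_task(node.stop())

async def main():
    node = Node()
    # Real code would register this via loop.add_signal_handler(signal.SIGINT, ...);
    # here we simulate the signal firing once.
    shutdown_handler(node)
    try:
        await asyncio.sleep(0)   # let the shutdown task run
    finally:
        # Await the retained task instead of calling node.stop() a second time.
        if shutdown_task is not None:
            await shutdown_task
        else:
            await node.stop()
    return node.stop_calls

result = asyncio.run(main())
print(result)  # 1 -- stop() ran exactly once
```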
- Around line 205-230: The mining flow in _try_mine_block is wrong: create a
coinbase transaction for self.miner_address and insert it into the block's
transactions list before calling mine_block (so the reward is recorded
on-chain), do not call chain.state.credit_mining_reward directly (remove that
out-of-band call), and ensure pending transactions aren't lost on mining failure
by returning them to the mempool if mine_block raises/returns a timeout (e.g.,
call mempool.add_transactions or mempool.return_transactions with the list
returned from get_transactions_for_block in the except path); keep using
mine_block, chain.add_block, and network.broadcast_block as the success path
only.
In `@minichain/chain.py`:
- Around line 89-163: validate_chain currently verifies block hashes match
computed values but misses a proof-of-work difficulty check, allowing unmined
blocks to pass; update validate_chain (and similarly add_block) to compute the
block header hash via calculate_hash(Block.to_header_dict()) and then verify it
meets the difficulty target (e.g., check computed_hash has the required leading
zeros based on block_data.get("difficulty", 0)); if the check fails, log via
logger.warning (include block index) and return False; ensure you reference
Block, Transaction and temp_state usage remain unchanged and perform the
difficulty check immediately after computing computed_hash and before applying
transactions.
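The missing difficulty check can be illustrated with a toy block (the header fields and leading-zeros scheme here are assumptions; the real `Block.to_header_dict` may differ):

```python
import hashlib, json

def calculate_hash(header: dict) -> str:
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def meets_difficulty(block_hash: str, difficulty: int) -> bool:
    # Proof-of-work target: `difficulty` leading zero hex digits.
    return block_hash.startswith("0" * difficulty)

def validate_block_pow(block_data: dict) -> bool:
    header = {k: block_data[k] for k in ("index", "prev_hash", "nonce")}
    computed_hash = calculate_hash(header)
    if computed_hash != block_data["hash"]:
        return False   # stored hash does not match the header
    # The missing check: the hash must also satisfy the difficulty target,
    # otherwise an unmined block with a self-consistent hash would pass.
    return meets_difficulty(computed_hash, block_data.get("difficulty", 0))

# Mine a toy block at difficulty 1, then validate it.
header = {"index": 1, "prev_hash": "0" * 64, "nonce": 0}
while not meets_difficulty(calculate_hash(header), 1):
    header["nonce"] += 1
block = dict(header, hash=calculate_hash(header), difficulty=1)
print(validate_block_pow(block))  # True
```

Note the order matters: compute the hash, check the target, and only then apply transactions, exactly as the finding describes.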
- Around line 189-225: The current rebuild clears self.chain and self.state
before reconstructing, risking a corrupted partial state if Transaction(**tx) or
state application fails; instead, construct a new_chain list and new_state =
State() locally: create the genesis Block (use GENESIS_HASH) and append
subsequent Block objects (Block(...), set nonce/hash) while converting
transactions with Transaction(**tx) and calling new_state.validate_and_apply(tx)
and checking its return (raise/propagate on failure); only after the entire
new_chain and new_state are built successfully, atomically assign self.chain =
new_chain and self.state = new_state; also remove the unused genesis_data
variable.
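The build-aside-then-swap pattern can be sketched with toy classes (these are stand-ins, not the project's `State`/`Blockchain`):

```python
class State:
    """Toy state; validate_and_apply rejects invalid transactions."""
    def __init__(self):
        self.balances = {}
    def validate_and_apply(self, tx):
        if tx["amount"] < 0:
            return False
        self.balances[tx["recipient"]] = self.balances.get(tx["recipient"], 0) + tx["amount"]
        return True

class Blockchain:
    def __init__(self):
        self.chain = [{"index": 0}]   # genesis stand-in
        self.state = State()

    def replace_chain(self, chain_data):
        # Build the replacement off to the side; self is untouched until success.
        new_chain, new_state = [{"index": 0}], State()
        for block in chain_data[1:]:
            for tx in block["transactions"]:
                if not new_state.validate_and_apply(tx):
                    raise ValueError(f"invalid tx in block {block['index']}")
            new_chain.append(block)
        # Atomic swap only after the whole chain validated.
        self.chain, self.state = new_chain, new_state

bc = Blockchain()
bad = [{"index": 0}, {"index": 1, "transactions": [{"recipient": "a", "amount": -5}]}]
try:
    bc.replace_chain(bad)
except ValueError:
    pass
print(len(bc.chain))  # 1 -- the failed replace left the old chain intact
```

A failed validation raises before either assignment runs, so readers never observe a half-rebuilt chain or state.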
In `@minichain/p2p.py`:
- Around line 370-382: The function _legacy_handle_message is defined at module
level but expects a P2PNetwork instance via self and accesses
self._handler_callback, so either remove this dead legacy function or convert it
into an instance method on P2PNetwork: move/indent _legacy_handle_message into
the P2PNetwork class definition so it becomes async def
_legacy_handle_message(self, msg): and uses self._handler_callback normally;
alternatively, if you must keep a module-level helper, change its signature to
accept an explicit network parameter (e.g., network) and reference
network._handler_callback instead and update all callers accordingly.
- Around line 132-194: connect_to_peer duplicates the HELLO framing/parsing
logic that already exists on Peer (Peer.send and Peer.receive) and similarly in
_handle_incoming_connection; refactor by instantiating a Peer early (e.g.,
Peer(node_id=None or temporary id, reader, writer, address)), use peer.send(...)
to send the hello and peer.receive() to read the response, then validate/update
peer.node_id after the handshake, handle self-connection/duplicate checks using
the resolved peer_node_id, and only add the Peer to self.peers and start
_listen_to_peer after the handshake succeeds; apply the same change to
_handle_incoming_connection to remove duplicated framing code.
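One way the refactor could look, sketched with a minimal `Peer` over real asyncio streams (the framing format and message shapes are assumptions, not the project's wire protocol):

```python
import asyncio, json, struct

class Peer:
    """Sketch: Peer.send/receive own the length-prefixed framing, so
    connect_to_peer and the incoming handler need not duplicate it."""
    def __init__(self, reader, writer, address, node_id=None):
        self.reader, self.writer = reader, writer
        self.address, self.node_id = address, node_id

    async def send(self, msg: dict):
        body = json.dumps(msg).encode()
        self.writer.write(struct.pack("!I", len(body)) + body)
        await self.writer.drain()

    async def receive(self) -> dict:
        header = await self.reader.readexactly(4)
        (length,) = struct.unpack("!I", header)
        return json.loads(await self.reader.readexactly(length))

async def demo():
    async def handle(reader, writer):
        # Incoming side of the handshake reuses the same Peer framing.
        peer = Peer(reader, writer, "incoming")
        hello = await peer.receive()
        peer.node_id = hello["node_id"]          # resolved after the handshake
        await peer.send({"type": "hello", "node_id": "server-1"})

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # connect_to_peer: build the Peer first, then handshake through it.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    peer = Peer(reader, writer, ("127.0.0.1", port))
    await peer.send({"type": "hello", "node_id": "client-1"})
    resp = await peer.receive()
    peer.node_id = resp["node_id"]   # only now add to self.peers / start listening
    writer.close()
    server.close()
    await server.wait_closed()
    return peer.node_id

print(asyncio.run(demo()))  # server-1
```

Both ends go through `Peer.send`/`Peer.receive`, so a change to the framing lives in exactly one place.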
- Around line 344-352: The return type annotation of request_chain is
misleading: update the signature of async def request_chain(self, peer: Peer) to
return None (-> None) and adjust its docstring to state that the method only
sends a "get_chain" request and that the actual chain is delivered
asynchronously via the message callback/handler (the code path that processes
incoming messages), so callers should not expect a direct return value; keep the
existing try/except and logger.error behavior intact.
- Around line 35-51: The receive() implementation in p2p.py currently returns
None when length > MAX_MESSAGE_SIZE but leaves the message body on the TCP
stream, desynchronizing future reads; update receive() (the async def receive
method using HEADER_SIZE, MAX_MESSAGE_SIZE, self.reader and self.address) to
either (preferred) close the connection (call self.writer.close() and await
self.writer.wait_closed() or equivalent) immediately when an oversized length is
detected, or alternatively read and discard exactly length bytes from
self.reader before returning None, and ensure any exceptions are logged with
context before returning; make sure the chosen approach is applied where the
oversized length check currently returns None so the stream is not left with the
body bytes.
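The drain variant of the fix can be demonstrated end to end with an in-memory `StreamReader` (constant values here are assumptions mirroring the names the review cites):

```python
import asyncio, struct

HEADER_SIZE = 4
MAX_MESSAGE_SIZE = 1024   # assumed limit, mirroring the constant the review names

async def receive(reader):
    """Sketch of the fix: drop an oversized body without desyncing the stream."""
    header = await reader.readexactly(HEADER_SIZE)
    (length,) = struct.unpack("!I", header)
    if length > MAX_MESSAGE_SIZE:
        # Read and discard exactly `length` bytes so the next read starts at a
        # message boundary. (The preferred alternative: close the connection.)
        remaining = length
        while remaining > 0:
            chunk = await reader.read(min(remaining, 65536))
            if not chunk:
                break
            remaining -= len(chunk)
        return None
    return await reader.readexactly(length)

async def demo():
    reader = asyncio.StreamReader()
    big = b"x" * 2000                                  # oversized body
    reader.feed_data(struct.pack("!I", len(big)) + big)
    reader.feed_data(struct.pack("!I", 2) + b"ok")     # next, valid message
    reader.feed_eof()
    first = await receive(reader)    # oversized -> dropped, stream drained
    second = await receive(reader)   # still parses cleanly
    return first, second

print(asyncio.run(demo()))  # (None, b'ok')
```

Without the drain loop, the second `receive` would misread the 2000 body bytes as a header and desynchronize permanently.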
- Around line 270-299: The local variable msg_data assigned in _process_message
is unused and should be removed to satisfy linting (Ruff F841); delete the
assignment message.get("data", {}) and any references to msg_data, and update
the comment for the "get_chain" branch (inside _process_message where msg_type
== "get_chain") to clarify that it intentionally does nothing here (no-op/pass)
because the actual response is handled by the external handler callback
(_handler_callback); ensure only the msg_id/seen_messages logic, msg_type check,
and forwarding to _handler_callback remain.
- Around line 282-286: The current deduplication uses an unordered set (self.seen_messages), so slicing list(self.seen_messages)[-5000:] discards arbitrary entries; change self.seen_messages to an ordered container (e.g., collections.OrderedDict or collections.deque) and update the places where msg_id is added: in the receive handler, insert the id with self.seen_messages[msg_id] = None (OrderedDict) or append it to the deque, and when the length exceeds 10000 evict the oldest entries with popitem(last=False) (OrderedDict) or popleft() (deque) until the size is back within target (trim to 5000), so eviction removes the oldest seen IDs instead of random ones; keep the references to self.seen_messages and msg_id to locate the change.
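A deque-based version of this ordered dedup cache might look like the following (class and attribute names are illustrative, not the project's):

```python
from collections import deque

class SeenMessages:
    """Ordered dedup sketch: eviction removes the oldest IDs, not arbitrary ones."""
    def __init__(self, max_size=10000, trim_to=5000):
        self.max_size, self.trim_to = max_size, trim_to
        self.order = deque()   # insertion order for eviction
        self.ids = set()       # O(1) membership alongside the order

    def add(self, msg_id) -> bool:
        """Record msg_id; returns False if it was already seen."""
        if msg_id in self.ids:
            return False
        self.ids.add(msg_id)
        self.order.append(msg_id)
        if len(self.order) > self.max_size:
            # Trim oldest-first down to the target size.
            while len(self.order) > self.trim_to:
                self.ids.discard(self.order.popleft())
        return True

seen = SeenMessages(max_size=4, trim_to=2)
for i in range(5):
    seen.add(i)
print(sorted(seen.order))  # [3, 4] -- the oldest IDs 0..2 were evicted
```

Keeping a set beside the deque preserves O(1) duplicate checks while the deque supplies the eviction order a bare set cannot.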
Summary
Hey @Zahnentferner, this PR:
- Replaced mock networking with real TCP peer connections, node IDs, message framing, chain requests, and broadcast deduplication (location: minichain/p2p.py)
- Added explicit chain validation and safe chain replacement (longest valid chain wins) (location: minichain/chain.py)
- Added a clean CLI to start nodes with --port, --peers, --mine

Commands for testing
Test 1: Same Machine, Two Terminals
Test 2: Two Machines, Same LAN
Follow the steps below for Test 2.
Find the IP addresses
Machine A (WSL + Windows Port Forwarding)
WSL: Start the node (miner)
Windows PowerShell (Admin): Forward port 9000 to WSL
Machine B (Same LAN)
Test connectivity first:
Test port first:
If the ping succeeds, start the node (Linux/macOS):
Windows:
What this does:
minichain_node.mp4
Summary by CodeRabbit
New Features