
[feature]: Demonstration for testnet blockchain, node connectivity and peer connections across different machines#44

Open
anuragShingare30 wants to merge 4 commits into StabilityNexus:main from anuragShingare30:feat/node-enhancement

Conversation


@anuragShingare30 anuragShingare30 commented Feb 24, 2026

Summary

  • Hey @Zahnentferner, this PR replaces mock networking with real TCP peer connections and adds node IDs, message framing, chain requests, and broadcast deduplication.
    location: minichain/p2p.py

  • Added explicit chain validation and safe chain replacement (longest valid chain wins).
    location: minichain/chain.py

  • Added a clean CLI to start nodes with --port, --peers, and --mine
    location: cli.py

Commands for testing

Test 1: Same Machine, Two Terminals

# activate the virtual environment
source .venv/bin/activate

# Terminal 1
python cli.py --port 9000

# Terminal 2
python cli.py --port 9001 --peers 127.0.0.1:9000

Test 2: Two Machines, Same LAN

# Machine A (Ex: 192.168.1.10)
python3 cli.py --port 9000 --mine

# Machine B (Ex: 192.168.1.20)
python3 cli.py --port 9001 --peers 192.168.1.10:9000

Follow the steps below for Test 2.

Find the IP addresses

# run this both on machine A and machine B
ip addr | grep "inet " | grep -v 127.0.0.1
# or
hostname -I

Machine A (WSL + Windows Port Forwarding)

WSL: Start the node (miner)

cd ~/web2/minichain
source .venv/bin/activate
python3 cli.py --port 9000 --mine

Windows PowerShell (Admin): Forward port 9000 to WSL

$wslIp = wsl hostname -I
$wslIp = $wslIp.Trim()
netsh interface portproxy add v4tov4 listenport=9000 listenaddress=0.0.0.0 connectport=9000 connectaddress=$wslIp
netsh advfirewall firewall add rule name="MiniChain Node" dir=in action=allow protocol=tcp localport=9000
netsh interface portproxy show all

Machine B (Same LAN)

Test connectivity first:

ping 192.168.137.10

Test port first:

# linux
nc -zv 192.168.137.10 9000

# windows
Test-NetConnection -ComputerName 192.168.137.10 -Port 9000

If the ping and port tests succeed, start the node (Linux/macOS):

cd minichain
source .venv/bin/activate
python3 cli.py --port 9001 --peers 192.168.137.10:9000

Windows:

cd minichain
python cli.py --port 9001 --peers 192.168.137.10:9000

What this demonstrates:

  • Run one miner node and one observer node
  • Connect them over TCP on the same LAN
  • Sync the chain and watch blocks propagate between nodes
minichain_node.mp4

Summary by CodeRabbit

New Features

  • Added command-line interface for running a testnet node with configurable port, peer connections, and mining capability
  • Nodes automatically synchronize and validate blockchain state with peers
  • Enhanced chain validation and replacement logic to ensure network consensus
  • Improved peer-to-peer communication for reliable transaction and block broadcasting


coderabbitai bot commented Feb 24, 2026

Walkthrough

This pull request implements a complete MiniChain testnet node with a Python CLI interface, P2P networking infrastructure, and blockchain synchronization capabilities. A new Node class manages lifecycle operations, network communication, chain validation, and optional mining, while the P2P module is rewritten as a full TCP-based implementation with peer management and message broadcasting.

Changes

  • CLI Node Implementation (cli.py):
    Introduces a Node class with async lifecycle management (start/stop), a mining loop, P2P message routing for transactions/blocks/chain requests, and chain synchronization. Provides a command-line interface with port, peers, mining, and miner address configuration.
  • Blockchain Core (minichain/chain.py):
    Adds a GENESIS_HASH constant, a height property, and validation/replacement logic for chain consensus. Implements validate_chain() for integrity checks and replace_chain() for the longest-valid-chain rule with atomic state transitions.
  • P2P Network Layer (minichain/p2p.py):
    Complete rewrite from placeholder to a production TCP-based P2P implementation. Introduces a Peer class for connection management and adds length-prefixed JSON message framing, peer discovery via handshake, message deduplication, and broadcast/request methods for transactions and chain data.
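The length-prefixed JSON framing named for minichain/p2p.py can be sketched as follows. This is a minimal illustration, assuming a 4-byte big-endian length prefix and these particular field names; the PR's actual wire format may differ.

```python
import json
import struct

HEADER_SIZE = 4  # assumed: 4-byte big-endian length prefix

def encode_message(msg: dict) -> bytes:
    """Serialize a dict to JSON and prepend its byte length."""
    body = json.dumps(msg).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_message(data: bytes) -> dict:
    """Parse one framed message: read the length header, then the JSON body."""
    (length,) = struct.unpack(">I", data[:HEADER_SIZE])
    body = data[HEADER_SIZE:HEADER_SIZE + length]
    return json.loads(body.decode("utf-8"))
```

The length prefix lets a receiver know exactly how many bytes belong to one JSON message, so consecutive messages on the same TCP stream never blur together.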

Sequence Diagram

sequenceDiagram
    participant CLI as CLI Entry
    participant Node as Node Instance
    participant P2P as P2PNetwork
    participant Peers as Connected Peers
    participant BC as Blockchain
    
    CLI->>Node: start()
    Node->>P2P: configure(host, port)
    Node->>P2P: connect_to_peer(peer_address)
    activate P2P
    P2P->>Peers: TCP handshake (hello)
    Peers-->>P2P: hello response
    P2P-->>Node: connected
    deactivate P2P
    
    Node->>Node: _sync_chain()
    Node->>P2P: request_chain(peer)
    P2P->>Peers: get_chain message
    Peers-->>P2P: chain_data
    P2P-->>Node: chain_data
    
    Node->>BC: validate_chain(chain_data)
    activate BC
    BC-->>Node: validation result
    deactivate BC
    
    alt Valid chain received
        Node->>BC: replace_chain(chain_data)
        BC->>BC: rebuild state
    end
    
    Node->>Node: _run() mining loop
    loop Mining & Message Processing
        Node->>P2P: broadcast_transaction(tx)
        P2P->>Peers: tx message
        Node->>Node: _try_mine_block()
        Note over Node: If block mined
        Node->>P2P: broadcast_block(block)
        P2P->>Peers: block message
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

Python Lang

Suggested reviewers

  • Zahnentferner

Poem

🐰 A node joins the warren's grand chain,
With P2P whispers through the domain,
Mining blocks in cryptographic dance,
Where peers synchronize in perfect stance!

🚥 Pre-merge checks | ✅ 2
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title accurately describes the main feature addition: a complete node implementation with TCP-based peer connectivity and testnet demonstration capabilities.
  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 12

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli.py`:
- Around line 78-93: _sync_chain currently uses a hardcoded asyncio.sleep(2)
after calling network.request_chain which can mark sync complete before any peer
responds; replace the sleep with an await on an asyncio.Event (e.g.,
self._chain_received_event) that is set by the code path that processes an
incoming chain (update the chain-receiving handler to call
self._chain_received_event.set()), optionally use asyncio.wait_for to bound
waiting with a timeout, then set self._sync_complete = True and log the height
after the event completes; reference functions/attributes: _sync_chain,
network.request_chain, and the chain-response handler (the method that applies
incoming chain) to set the event.
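The event-based wait suggested above can be sketched roughly as follows. SyncDemo and the simulated peer are stand-ins for illustration; only the _chain_received_event / _sync_chain names follow the comment, and the real cli.py code will differ.

```python
import asyncio

class SyncDemo:
    """Toy node showing event-driven chain sync (hypothetical names)."""

    def __init__(self):
        self._chain_received_event = asyncio.Event()
        self._sync_complete = False
        self.height = 0

    async def _on_chain_received(self, chain_data):
        # Called from the message handler when a peer's chain arrives.
        self.height = len(chain_data)
        self._chain_received_event.set()

    async def _sync_chain(self, timeout=5.0):
        # Wait for an actual response instead of a fixed asyncio.sleep(2).
        try:
            await asyncio.wait_for(self._chain_received_event.wait(), timeout)
        except asyncio.TimeoutError:
            pass  # no peer answered in time; fall back to the local chain
        self._sync_complete = True

async def _demo():
    node = SyncDemo()

    async def peer_response():
        await asyncio.sleep(0.01)  # simulate network latency
        await node._on_chain_received(["genesis", "block1", "block2"])

    responder = asyncio.create_task(peer_response())
    await node._sync_chain(timeout=1.0)
    await responder
    return node._sync_complete, node.height

result = asyncio.run(_demo())
```

With asyncio.wait_for, sync completes as soon as the chain actually arrives, and the timeout only matters when no peer responds at all.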
- Around line 154-161: The _handle_get_chain method is iterating
self.chain.chain without acquiring the Blockchain lock; replace that iteration
with the thread-safe helper by calling self.chain.to_dict_list() to get
chain_data (preserving the existing await self.network.send_chain(sender,
chain_data) and logger.info usage). Specifically, update the chain data
assignment in _handle_get_chain to use Blockchain.to_dict_list() so access is
protected by the lock.
- Around line 297-309: The signal handler creates an unreferenced task
asyncio.create_task(node.stop()) which can be garbage-collected and may cause
node.stop() to run twice (once from the handler and once in the main finally).
Modify shutdown_handler to store the created task in a module/local variable
(e.g., shutdown_task) so the task is retained, and then in the main
shutdown/finally logic check that shutdown_task: if it exists await it (or await
it with timeout) instead of calling node.stop() again, or guard node.stop() by
checking shutdown_task.done() to avoid duplicate stops; update references to
shutdown_handler, node.stop, and loop.add_signal_handler accordingly.
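The once-only shutdown pattern described above can be sketched like this. NodeStub is a toy stand-in; the shutdown_task / shutdown_handler / stop names follow the comment, but the actual cli.py wiring (loop.add_signal_handler etc.) is omitted here.

```python
import asyncio

class NodeStub:
    """Toy node illustrating a retained, once-only shutdown task."""

    def __init__(self):
        self.stop_calls = 0
        self.shutdown_task = None

    async def stop(self):
        self.stop_calls += 1

    def shutdown_handler(self):
        # Keep a reference so the task cannot be garbage-collected mid-flight,
        # and so the finally block can see that shutdown already started.
        if self.shutdown_task is None:
            self.shutdown_task = asyncio.ensure_future(self.stop())

async def _main():
    node = NodeStub()
    node.shutdown_handler()  # stands in for the SIGINT/SIGTERM handler firing
    try:
        await asyncio.sleep(0)  # normally the node's main run loop
    finally:
        if node.shutdown_task is not None:
            await node.shutdown_task  # reuse the in-flight shutdown
        else:
            await node.stop()  # no signal arrived; stop normally
    return node.stop_calls

calls = asyncio.run(_main())
```

Because the finally block awaits the retained task instead of calling stop() again, the node is stopped exactly once regardless of whether a signal arrived.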
- Around line 205-230: The mining flow in _try_mine_block is wrong: create a
coinbase transaction for self.miner_address and insert it into the block's
transactions list before calling mine_block (so the reward is recorded
on-chain), do not call chain.state.credit_mining_reward directly (remove that
out-of-band call), and ensure pending transactions aren't lost on mining failure
by returning them to the mempool if mine_block raises/returns a timeout (e.g.,
call mempool.add_transactions or mempool.return_transactions with the list
returned from get_transactions_for_block in the except path); keep using
mine_block, chain.add_block, and network.broadcast_block as the success path
only.
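The corrected mining flow can be sketched as follows. Mempool and try_mine_block here are hypothetical stand-ins (the real mine_block, get_transactions_for_block, and mempool API may differ); the point is that the reward lives inside the block as a coinbase transaction and pending transactions are restored on failure.

```python
class Mempool:
    """Minimal stand-in mempool; method names are assumptions for illustration."""

    def __init__(self):
        self._txs = []

    def add(self, tx):
        self._txs.append(tx)

    def get_transactions_for_block(self):
        txs, self._txs = self._txs, []
        return txs

    def return_transactions(self, txs):
        # Put transactions back at the front so they are retried first.
        self._txs = txs + self._txs

def try_mine_block(mempool, miner_address, mine_fn, reward=50):
    """The reward is a coinbase tx inside the block; pending txs survive failure."""
    txs = mempool.get_transactions_for_block()
    coinbase = {"sender": "COINBASE", "recipient": miner_address, "amount": reward}
    try:
        return mine_fn([coinbase] + txs)  # mine_fn stands in for mine_block()
    except Exception:
        mempool.return_transactions(txs)  # don't lose pending transactions
        return None
```

On the success path the caller would then go through chain.add_block and network.broadcast_block as before, with no out-of-band state credit.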

In `@minichain/chain.py`:
- Around line 89-163: validate_chain currently verifies block hashes match
computed values but misses a proof-of-work difficulty check, allowing unmined
blocks to pass; update validate_chain (and similarly add_block) to compute the
block header hash via calculate_hash(Block.to_header_dict()) and then verify it
meets the difficulty target (e.g., check computed_hash has the required leading
zeros based on block_data.get("difficulty", 0)); if the check fails, log via
logger.warning (include block index) and return False; ensure you reference
Block, Transaction and temp_state usage remain unchanged and perform the
difficulty check immediately after computing computed_hash and before applying
transactions.
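The missing proof-of-work check can be sketched as follows. The hash scheme here (SHA-256 over a canonical JSON header) and the leading-hex-zeros difficulty rule are assumptions for illustration; the project's calculate_hash and header layout may differ.

```python
import hashlib
import json

def calculate_hash(header: dict) -> str:
    # Assumed scheme: SHA-256 over the sorted-key JSON encoding of the header.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def meets_difficulty(block_hash: str, difficulty: int) -> bool:
    """Proof-of-work check: the hash must start with `difficulty` hex zeros."""
    return block_hash.startswith("0" * difficulty)

# Mine a toy header at difficulty 2 by incrementing the nonce.
header = {"index": 1, "prev_hash": "0" * 64, "nonce": 0, "difficulty": 2}
while not meets_difficulty(calculate_hash(header), header["difficulty"]):
    header["nonce"] += 1
mined_hash = calculate_hash(header)
```

In validate_chain, the same meets_difficulty test would run right after computing each block's hash; a block whose hash matches its stored value but fails the difficulty target is rejected.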
- Around line 189-225: The current rebuild clears self.chain and self.state
before reconstructing, risking a corrupted partial state if Transaction(**tx) or
state application fails; instead, construct a new_chain list and new_state =
State() locally: create the genesis Block (use GENESIS_HASH) and append
subsequent Block objects (Block(...), set nonce/hash) while converting
transactions with Transaction(**tx) and calling new_state.validate_and_apply(tx)
and checking its return (raise/propagate on failure); only after the entire
new_chain and new_state are built successfully, atomically assign self.chain =
new_chain and self.state = new_state; also remove the unused genesis_data
variable.
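The atomic-replacement pattern described above can be sketched like this. State, NodeStub, and the transaction shape are toy stand-ins, not minichain's real classes; only the build-locally-then-swap structure follows the comment.

```python
class State:
    """Toy balance tracker illustrating validate_and_apply semantics."""

    def __init__(self):
        self.balances = {}

    def validate_and_apply(self, tx: dict) -> bool:
        sender, recipient, amount = tx["sender"], tx["recipient"], tx["amount"]
        if sender != "COINBASE":
            if self.balances.get(sender, 0) < amount:
                return False  # insufficient funds: reject without mutating
            self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

def replace_chain(node, chain_data) -> bool:
    """Rebuild into local temporaries; swap in only after full validation."""
    new_chain, new_state = [], State()
    for block in chain_data:
        for tx in block.get("transactions", []):
            if not new_state.validate_and_apply(tx):
                return False  # node.chain / node.state are left untouched
        new_chain.append(block)
    node.chain, node.state = new_chain, new_state  # atomic assignment
    return True

class NodeStub:
    pass

node = NodeStub()
node.chain, node.state = ["genesis"], State()
bad = [{"transactions": [{"sender": "alice", "recipient": "bob", "amount": 99}]}]
good = [{"transactions": [{"sender": "COINBASE", "recipient": "alice", "amount": 50}]}]
```

Because self.chain and self.state are only reassigned after the whole candidate chain validates, a failure partway through can never leave the node with a half-rebuilt state.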

In `@minichain/p2p.py`:
- Around line 370-382: The function _legacy_handle_message is defined at module
level but expects a P2PNetwork instance via self and accesses
self._handler_callback, so either remove this dead legacy function or convert it
into an instance method on P2PNetwork: move/indent _legacy_handle_message into
the P2PNetwork class definition so it becomes async def
_legacy_handle_message(self, msg): and uses self._handler_callback normally;
alternatively, if you must keep a module-level helper, change its signature to
accept an explicit network parameter (e.g., network) and reference
network._handler_callback instead and update all callers accordingly.
- Around line 132-194: connect_to_peer duplicates the HELLO framing/parsing
logic that already exists on Peer (Peer.send and Peer.receive) and similarly in
_handle_incoming_connection; refactor by instantiating a Peer early (e.g.,
Peer(node_id=None or temporary id, reader, writer, address)), use peer.send(...)
to send the hello and peer.receive() to read the response, then validate/update
peer.node_id after the handshake, handle self-connection/duplicate checks using
the resolved peer_node_id, and only add the Peer to self.peers and start
_listen_to_peer after the handshake succeeds; apply the same change to
_handle_incoming_connection to remove duplicated framing code.
- Around line 344-352: The return type annotation of request_chain is
misleading: update the signature of async def request_chain(self, peer: Peer) to
return None (-> None) and adjust its docstring to state that the method only
sends a "get_chain" request and that the actual chain is delivered
asynchronously via the message callback/handler (the code path that processes
incoming messages), so callers should not expect a direct return value; keep the
existing try/except and logger.error behavior intact.
- Around line 35-51: The receive() implementation in p2p.py currently returns
None when length > MAX_MESSAGE_SIZE but leaves the message body on the TCP
stream, desynchronizing future reads; update receive() (the async def receive
method using HEADER_SIZE, MAX_MESSAGE_SIZE, self.reader and self.address) to
either (preferred) close the connection (call self.writer.close() and await
self.writer.wait_closed() or equivalent) immediately when an oversized length is
detected, or alternatively read and discard exactly length bytes from
self.reader before returning None, and ensure any exceptions are logged with
context before returning; make sure the chosen approach is applied where the
oversized length check currently returns None so the stream is not left with the
body bytes.
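The drain-and-discard variant of the fix can be sketched as follows. HEADER_SIZE, MAX_MESSAGE_SIZE, and receive_frame are stand-ins for illustration; the real receive() in p2p.py differs, and closing the connection outright is the stricter alternative the comment prefers.

```python
import asyncio
import struct

HEADER_SIZE = 4
MAX_MESSAGE_SIZE = 1024  # assumption; the real limit lives in p2p.py

async def receive_frame(reader: asyncio.StreamReader):
    """Read one length-prefixed frame; drain oversized bodies to stay in sync."""
    header = await reader.readexactly(HEADER_SIZE)
    (length,) = struct.unpack(">I", header)
    if length > MAX_MESSAGE_SIZE:
        # Consume and discard the body so the next read starts at a frame
        # boundary instead of misreading body bytes as the next header.
        await reader.readexactly(length)
        return None
    return await reader.readexactly(length)

async def _demo():
    reader = asyncio.StreamReader()
    big = b"x" * (MAX_MESSAGE_SIZE + 1)
    reader.feed_data(struct.pack(">I", len(big)) + big)  # oversized frame
    reader.feed_data(struct.pack(">I", 2) + b"ok")       # normal frame
    reader.feed_data(b"")
    reader.feed_eof()
    return await receive_frame(reader), await receive_frame(reader)

frames = asyncio.run(_demo())
```

The demo feeds an oversized frame followed by a valid one: the oversized body is swallowed, and the next read still lands cleanly on the following frame.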
- Around line 270-299: The local variable msg_data assigned in _process_message
is unused and should be removed to satisfy linting (Ruff F841); delete the
assignment message.get("data", {}) and any references to msg_data, and update
the comment for the "get_chain" branch (inside _process_message where msg_type
== "get_chain") to clarify that it intentionally does nothing here (no-op/pass)
because the actual response is handled by the external handler callback
(_handler_callback); ensure only the msg_id/seen_messages logic, msg_type check,
and forwarding to _handler_callback remain.
- Around line 282-286: The current deduplication uses an unordered set
(self.seen_messages) so slicing list(self.seen_messages)[-5000:] discards
arbitrary entries; change self.seen_messages to an ordered container (e.g.,
collections.OrderedDict or collections.deque) and update places where msg_id is
added: when receiving msg_id in the handler (the block referencing msg_id and
self.seen_messages) add the id via OrderedDict[self.seen_messages][msg_id]=None
and call move_to_end(msg_id) or append to deque, and when length exceeds 10000
evict the oldest entry with popitem(last=False) for OrderedDict (or popleft()
for deque) until size is within target (trim to 5000 or remove one each time) so
eviction removes the oldest seen IDs instead of random ones; keep references to
self.seen_messages and msg_id to locate the change.
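The ordered dedup cache suggested above can be sketched like this. SeenMessages and its method names are hypothetical wrappers around the OrderedDict approach from the comment; the sizes mirror the 10000/5000 figures but are configurable here.

```python
from collections import OrderedDict

class SeenMessages:
    """FIFO dedup cache: eviction removes the oldest IDs, not arbitrary ones."""

    def __init__(self, max_size=10000, trim_to=5000):
        self._ids = OrderedDict()
        self.max_size, self.trim_to = max_size, trim_to

    def check_and_add(self, msg_id: str) -> bool:
        """Return True if msg_id is new; record it, evicting oldest on overflow."""
        if msg_id in self._ids:
            return False
        self._ids[msg_id] = None
        if len(self._ids) > self.max_size:
            while len(self._ids) > self.trim_to:
                self._ids.popitem(last=False)  # drop the oldest entry first
        return True
```

Unlike slicing an unordered set, popitem(last=False) guarantees that the IDs most likely to reappear (the recent ones) stay cached while stale ones age out.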

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c41261f and 2bf781f.

📒 Files selected for processing (3)
  • cli.py
  • minichain/chain.py
  • minichain/p2p.py

@anuragShingare30 anuragShingare30 changed the title [feature]: Demonstration for node connectivity and peer connection [feature]: Demonstration for testnet blockchain, node connectivity and peer connections across different machines Feb 25, 2026
