
Add event loop pool with process affinity #42

Merged
benoitc merged 1 commit into main from feature/event-loop-pool-affinity on Mar 19, 2026
Conversation


benoitc (Owner) commented Mar 19, 2026

Summary

  • Event loop worker pool with process affinity for parallel Python coroutine execution
  • Same PID always routes to same loop, guaranteeing ordered execution per process
  • Uses persistent_term for O(1) loop access
  • New distributed task API: create_task, run, spawn_task, await
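A hypothetical usage sketch of the new API -- only the function names (`create_task`, `run`, `spawn_task`, `await`) come from this PR; the arities, argument shapes, and return values below are assumptions for illustration:

```erlang
%% Assumed arities and return shapes -- not the actual signatures.
{ok, Task} = py_event_loop_pool:create_task(asyncio, sleep, [0.1]),
Result = py_event_loop_pool:await(Task),

%% run as a synchronous convenience (create_task + await in one call):
Value = py_event_loop_pool:run(my_module, my_coro, [Arg1]),

%% spawn_task for fire-and-forget scheduling, no result collected:
ok = py_event_loop_pool:spawn_task(my_module, my_coro, [Arg2]).
```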

Benchmarks (Python 3.14, 14 schedulers)

| Benchmark | Throughput |
| --- | --- |
| Sequential (single loop) | 83k tasks/sec |
| Sequential (pool) | 150k tasks/sec |
| Concurrent (50 procs) | 164k tasks/sec |
| Fire-and-collect | 417k tasks/sec |

Configuration

{event_loop_pool_size, N}  % default: schedulers count

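A minimal `sys.config` sketch for overriding the pool size -- the application name (`py`) is an assumption; only the `{event_loop_pool_size, N}` key comes from the PR:

```erlang
%% sys.config -- application name "py" is assumed, not confirmed by the PR.
[{py, [
    {event_loop_pool_size, 8}  % override the default (schedulers count)
]}].
```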
Event loop worker pool inspired by libuv's "one loop per thread" model.
Each loop has its own worker and maintains event ordering.

Process affinity: All tasks from the same Erlang process are routed to
the same event loop (via PID hash), guaranteeing that timers and related
async operations execute in order.
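The routing rule above can be sketched in a few lines of Erlang. The helper name and `persistent_term` key below are illustrative, not the actual internals of `py_event_loop_pool.erl`; the PR only states that a PID hash selects the loop and that `persistent_term` gives O(1) access:

```erlang
%% Sketch of PID-hash affinity, assuming loop pids are stored under
%% per-index persistent_term keys. Names here are hypothetical.
loop_for(Pid, PoolSize) ->
    Index = erlang:phash2(Pid, PoolSize),              % stable hash: same PID -> same index
    persistent_term:get({py_event_loop_pool, Index}).  % O(1) read, no ETS lookup or server call
```

Because `erlang:phash2/2` is deterministic, every task submitted from a given Erlang process lands on the same loop, which is what preserves timer and async-operation ordering per process.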

Changes:
- py_event_loop_pool.erl: Rewrite with process affinity, persistent_term
  for O(1) access, distributed task API (create_task, run, spawn_task, await)
- py_event_loop.erl: Export get_process_env/0 for pool use
- py_event_loop_pool_SUITE.erl: New test suite (9 tests)
- bench_event_loop_pool.erl: New benchmark

Configuration:
- event_loop_pool_size: Number of loops (default: schedulers count)

Benchmarks (Python 3.14, 14 schedulers):
- Sequential (pool): 150k tasks/sec (vs 83k single loop)
- Concurrent (50 procs): 164k tasks/sec
- Fire-and-collect: 417k tasks/sec
@benoitc benoitc merged commit a8f3cbf into main Mar 19, 2026
11 checks passed