Conversation

@RichardWooSJTU RichardWooSJTU commented Feb 10, 2026

Motivation

  1. This PR fixes a critical performance issue in KV cache transfer: the unnecessary conversion of src_block_ids to paddle.Tensor in Python caused significant parsing overhead in the pybind layer. The issue was identified during KV cache transfer optimization work.
  2. This PR introduces a comprehensive benchmark tool for measuring KV cache transfer performance between the prefill and decode stages in splitwise inference. The benchmark evaluates the throughput and latency of RDMA-based KV cache transfers, which is critical for optimizing distributed inference performance.

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

  1. Performance Fix in cache_messager.py:

    • Removed unnecessary paddle.to_tensor() conversion for src_block_ids and dest_block_ids
    • Changed timing from time.time() to time.perf_counter() for more accurate performance measurement
    • These changes eliminate the performance bottleneck in the pybind layer
  2. Added Benchmark Tool:

    • Introduced benchmark.py for comprehensive KV cache transfer performance testing
    • Provides throughput measurement capabilities for RDMA-based transfers
    • Includes configurable parameters and detailed result reporting

Usage or Command

# Performance test after fix
python benchmark.py --splitwise_role prefill --decode_ip 127.0.0.1 --num_layers 61 --blocks_per_transfer 64

Accuracy Tests

Checklist

  • Add at least a tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Feb 10, 2026

Thanks for your contribution!

@RichardWooSJTU RichardWooSJTU changed the title fix cache messager performance problem [PD Disaggregation] Fix cache messager performance problem & add kv transfer benchmark tool Feb 10, 2026