Add MiniMax-M2.5 FP8 vLLM benchmark for B200 #757
Status: Open. functionstackx wants to merge 19 commits into main from claude/issue-756-20260219-0235.
Commits (19):

- 1912b8b: Add MiniMax-M2.5 FP8 vLLM benchmark for B200 (github-actions[bot])
- cbc5d0b: Update runner name in nvidia-master.yaml (functionstackx)
- 1dde04d: Merge branch 'main' into claude/issue-756-20260219-0235 (kedarpotdar-nv)
- 1d38392: move launch file and enable expert parallel (kedarpotdar-nv)
- 567a840: add PR # (kedarpotdar-nv)
- 700c15c: Merge branch 'main' into claude/issue-756-20260219-0235 (kedarpotdar-nv)
- 13453b4: fix typo that claude made (kedarpotdar-nv)
- 836b89a: Merge branch 'main' into claude/issue-756-20260219-0235 (kedarpotdar-nv)
- 33b498f: newline in perf-changelog (kedarpotdar-nv)
- 61228ef: change to b200 tag (functionstackx)
- fd89de6: Merge branch 'main' into claude/issue-756-20260219-0235 (functionstackx)
- 43800e6: fix: update HF_HUB_CACHE_MOUNT path in launch_b200-dgxc-slurm.sh (github-actions[bot])
- 8823744: Merge branch 'main' into claude/issue-756-20260219-0235 (functionstackx)
- bb215cd: Merge branch 'main' into claude/issue-756-20260219-0235 (cquil11)
- 7551200: Merge branch 'main' into claude/issue-756-20260219-0235 (ankursingh-nv)
- 735dd73: Update minimaxm2.5-fp8-b200-vllm image to v0.16.0-cu130 (ankursingh-nv)
- b56773c: disable flashinfer fp8 MoE (ankursingh-nv)
- 2e99ed7: disable VLLM_MOE_USE_DEEP_GEM (kedarpotdar-nv)
- c00ffa4: ep (ankursingh-nv)
New file added by this PR (69 lines; per commit 43800e6 this appears to be launch_b200-dgxc-slurm.sh):

```bash
#!/usr/bin/env bash

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
    MODEL \
    TP \
    EP_SIZE \
    CONC \
    ISL \
    OSL \
    MAX_MODEL_LEN \
    RANDOM_RANGE_RATIO \
    RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
    echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

nvidia-smi

hf download "$MODEL"

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

export VLLM_USE_FLASHINFER_MOE_FP8=0
export VLLM_MOE_USE_DEEP_GEMM=0

if [ "$EP_SIZE" -ge 1 ]; then
    EP=" --enable-expert-parallel"
else
    EP=" "
fi

set -x
vllm serve $MODEL --port $PORT \
    --tensor-parallel-size=$TP \
    $EP \
    --gpu-memory-utilization 0.95 \
    --max-model-len $MAX_MODEL_LEN \
    --block-size=32 \
    --disable-log-requests \
    --trust-remote-code > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

run_benchmark_serving \
    --model "$MODEL" \
    --port "$PORT" \
    --backend vllm \
    --input-len "$ISL" \
    --output-len "$OSL" \
    --random-range-ratio "$RANDOM_RANGE_RATIO" \
    --num-prompts "$((CONC * 10))" \
    --max-concurrency "$CONC" \
    --result-filename "$RESULT_FILENAME" \
    --result-dir /workspace/ \
    --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
    run_eval --framework lm-eval --port "$PORT" --concurrent-requests $CONC
    append_lm_eval_summary
fi
set +x
```
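For context, a hypothetical invocation of the script might look like the sketch below. The variable names come from the script's check_env_vars list; every concrete value (the model id, parallel sizes, sequence lengths, result filename) is an illustrative assumption, not a value taken from this PR. The sketch also shows the one derived quantity worth noting: the benchmark sends CONC * 10 prompts at a maximum concurrency of CONC.

```shell
# Hypothetical invocation sketch; all values are illustrative assumptions.
export MODEL="MiniMaxAI/MiniMax-M2.5"   # assumed HF model id, not confirmed by the PR
export TP=8                             # tensor-parallel size
export EP_SIZE=8                        # any value >= 1 adds --enable-expert-parallel
export CONC=64                          # benchmark max concurrency
export ISL=1024                         # input sequence length
export OSL=1024                         # output sequence length
export MAX_MODEL_LEN=4096
export RANDOM_RANGE_RATIO=0.8
export RESULT_FILENAME="minimax-m2.5-fp8-b200-vllm.json"
export RUN_EVAL=false                   # skip the lm-eval pass after throughput

# The script derives --num-prompts from the concurrency:
echo "num-prompts: $((CONC * 10))"      # prints: num-prompts: 640

# ./launch_b200-dgxc-slurm.sh           # script name per commit 43800e6
```

Since EP_SIZE is validated only with `-ge 1`, expert parallelism is enabled even for EP_SIZE=1; setting it to 0 yields a plain tensor-parallel launch.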
Review comment:

> document this too please in the vllm recipes