Bump vllm from 0.6.3.post1 to 0.20.0 in /experiments/agentcompany/openhands#24
Conversation
Bumps [vllm](https://github.com/vllm-project/vllm) from 0.6.3.post1 to 0.20.0.
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.6.3.post1...v0.20.0)

---
updated-dependencies:
- dependency-name: vllm
  dependency-version: 0.20.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Socket: Review the following changes in direct dependencies.

Warning: Review the following alerts detected in dependencies. According to your organization's Security Policy, it is recommended to resolve "Warn" alerts.
Bumps `vllm` from `0.6.3.post1` to `0.20.0`.
Release notes
Sourced from vllm's releases.
... (truncated)
Commits
- `88d34c6` [Docker] Install numactl CLI in CUDA runtime image (#41032)
- `b816087` [DSV4] Add silu clamp limit to shared expert (#40950)
- `84c276d` [Bugfix] Cap SWA/chunked-local runtime admission to startup pool-sizing bound...
- `5eb3657` Revert "[Frontend] Remove frontend pooling multi task support. (#37861)"
- `4d51588` [Feat] DeepSeek V4 Rebased (#40860)
- `32e4563` [torch.compile]: Disable Sequence Parallelism (SP) for piecewise compilation ...
- `b39c266` [KV Offload] Offload all KV blocks when doing prefill in P/D (#40346)
- `9558f43` [Bugfix] Size FlashInfer NVLink MNNVL workspace to EP group (#40893)
- `8cd174f` [LoRA] MoE LoRA Refactor (#40338)
- `c798593` [Bugfix] Fix the DSML token leakage in DSV4/3.2 (#40806)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the Security Alerts page.
Greptile Summary
This dependabot PR bumps `vllm` from `0.6.3.post1` to `0.20.0`, a very large jump spanning many minor versions, but it does not update the other pinned dependencies that vllm 0.20.0 requires. `torch==2.4.0+cu121` conflicts with vllm 0.20.0's requirement of PyTorch 2.11 (and its default CUDA 13.0); `torchaudio` and `torchvision` are similarly stale. `transformers` is pinned to a v4 release while vllm 0.20.0 now requires `transformers>=5`, which will cause import errors or silent misbehavior at runtime.

Confidence Score: 2/5
Not safe to merge — the updated vllm version is incompatible with the still-pinned torch and transformers versions, making the environment non-functional.
Two P1 findings: torch is pinned to 2.4.0 while vllm 0.20.0 requires 2.11, and transformers is pinned to v4 while vllm 0.20.0 requires v5. Both will prevent a working installation.
experiments/agentcompany/openhands/requirements.txt — torch, torchaudio, torchvision, transformers, tokenizers, huggingface-hub, xformers, and triton all need version bumps to match vllm 0.20.0's requirements.
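The pin-versus-requirement mismatches above can be checked mechanically. A minimal sketch in plain Python, where the minimum versions come from the review text (not from vllm's published metadata) and the `transformers` pin is an illustrative v4 release rather than the exact value in the file:

```python
# Sketch: compare requirements.txt pins against the minimum versions the
# review attributes to vllm 0.20.0. The minimums are assumptions taken
# from the review text, not vllm's actual dependency metadata.

def parse(version: str) -> tuple:
    """Turn '2.4.0+cu121' into (2, 4, 0), ignoring the local build tag."""
    public = version.split("+")[0]
    return tuple(int(part) for part in public.split(".") if part.isdigit())

pins = {
    "torch": "2.4.0+cu121",
    "transformers": "4.45.2",  # illustrative v4 pin; the file pins some v4 release
}
minimums = {"torch": "2.11", "transformers": "5"}

for name, need in minimums.items():
    have = pins[name]
    ok = parse(have) >= parse(need)
    print(f"{name}: pinned {have}, needs >= {need} -> {'ok' if ok else 'CONFLICT'}")
```

Both lines print `CONFLICT`, matching the two P1 findings.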
Important Files Changed
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[vllm 0.20.0] -->|requires| B[torch >= 2.11\nCUDA 13.0]
    A -->|requires| C[transformers >= 5]
    D[requirements.txt] -->|pins| E[torch 2.4.0+cu121 ❌]
    D -->|pins| F[transformers v4 ❌]
    D -->|pins| G[xformers 0.0.27.post2\ntied to torch 2.4]
    B -.->|conflict| E
    C -.->|conflict| F
    B -.->|conflict| G
```

Comments Outside Diff (2)
experiments/agentcompany/openhands/requirements.txt, line 177 (link)

vllm 0.20.0 requires PyTorch 2.11 (and ships with CUDA 13.0 by default), but this file pins `torch==2.4.0+cu121`. Installing these two together will either fail pip's dependency resolver or produce a non-functional environment where vllm's CUDA kernels miscompile against the wrong torch ABI. The same applies to `torchaudio==2.4.0+cu121` and `torchvision==0.19.0`, both of which are tied to torch 2.4.
experiments/agentcompany/openhands/requirements.txt, line 183 (link)

vllm 0.20.0 now requires `transformers>=5` (v5), but this file pins transformers to a v4 release. vllm's model loading code targeting the v5 API will raise import errors or produce incorrect results at runtime. The pinned `tokenizers` and `huggingface-hub` packages are also tied to the v4 ecosystem and will need updating in tandem.
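Since the two findings touch the same pinned stack, a coordinated bump is the likely fix. A sketch of the shape such a requirements.txt change could take; every specifier below is a placeholder, and the real pins should be resolved from vllm 0.20.0's own dependency metadata:

```
# Sketch only: placeholder specifiers, not verified pins.
vllm==0.20.0
torch>=2.11          # plus the matching torchaudio/torchvision builds
transformers>=5,<6
tokenizers           # re-resolve alongside transformers v5
huggingface-hub      # re-resolve alongside transformers v5
xformers             # pick the build compiled against the new torch
```

Letting pip re-resolve the unpinned lines once, then freezing the result, avoids hand-picking versions that the resolver would reject.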
Reviews (1): Last reviewed commit: "Bump vllm in /experiments/agentcompany/o..."