
Bump vllm from 0.6.3.post1 to 0.20.0 in /experiments/agentcompany/openhands #24

Open

dependabot[bot] wants to merge 1 commit into main from
dependabot/pip/experiments/agentcompany/openhands/vllm-0.20.0

Conversation


dependabot[bot] commented on behalf of GitHub on May 6, 2026

Bumps vllm from 0.6.3.post1 to 0.20.0.

Release notes

Sourced from vllm's releases.

v0.20.0

vLLM v0.20.0

Highlights

This release features 752 commits from 320 contributors (123 new)!

  • DeepSeek V4: Initial DeepSeek V4 support landed (#40860), with DSML token-leakage fix in DSV4/3.2 (#40806), DSA + MTP IMA fix (#40772), and a silu clamp limit on the shared expert (#40950).
  • CUDA 13.0 default: Default CUDA wheel on PyPI and vllm/vllm-openai:v0.20.0 image switched to CUDA 13.0; architecture lists and build-args cleaned up (#39878), and CUDA bumped to 13.0.2 to match PyTorch 2.11.0 (#40669). As a general rule of thumb, our CUDA version policy follows PyTorch's. We highly recommend installing vLLM with uv and using --torch-backend=cu129 if you are on CUDA 12.9 (a minimal scripted sketch follows this list).
  • PyTorch 2.11 upgrade (#34644): vLLM ships on torch 2.11 for CUDA, and XPU is now also on torch 2.11 (#37947) — XPU is no longer pinned to 2.10. This is a breaking change for environment dependencies.
  • Python 3.14: Added to the supported Python version list (#34770).
  • Transformers v5: vLLM now runs on HuggingFace transformers>=5 (#30566), with vision-encoder torch.compile bypass (#30518) and continued v4/v5 compat fixes including PaddleOCR-VL image processor max_pixels (#38629), Mistral YaRN warning (#37292), and Jina ColBERT rotary inv_freq recompute (#39176).
  • New large models: Hunyuan v3 (Hy3) preview (#40681) with HYV3 reasoning parser (#40713); Granite 4.1 Vision as a built-in multimodal model (#40282).
  • FlashAttention 4 as default MLA prefill: FA4 re-enabled as the default MLA prefill backend (#38819) with head-dim 512 and paged-KV support on SM90+ (#38835), plus an upstream FA4 sync (#38690).
  • TurboQuant 2-bit KV cache: New attention backend delivering 2-bit KV cache compression with 4× capacity (#38479), now with FA3/FA4 prefill support (#40092).
  • Online quantization frontend: New end-to-end online quantization frontend (#38138), with docs (#39736); experts_int8 consolidated into the FP8 online path (#38463); MXFP8 online quant moved to the new frontend (#40152).
  • vLLM IR: Initial IR skeleton with rms_norm op (#33825), OOT-platform kernel imports (#38807), gemma_rms_norm reworked on IR (#39014), and IR op testing/benchmarking infra added (#40167) — foundation for future kernel work.
  • Model Runner V2 advances: Eagle prefill full-CUDA-graph (#37588), auto-resolve cudagraph mode/sizes from attention backend (#32936), fused probabilistic rejection sample kernels (#38496), config validation for unsupported features (#38758), piecewise-fallback disabled for eagle draft decodes (#39773), multiple prompt-logprobs support (#39937), prefill warmup coverage (#40746), and a fix for accuracy regression caused by stale sampled/draft tokens (#39833).
  • MoE refactor series: Unquantized migrated to Full Oracle Flow (#36286), CT W8A8 to Oracle (#39187), SharedExperts class (#35153), SharedFusedMoE removed (#35782), DefaultMoERunner split (#35326) and later combined back into MoERunnerBase (#40560), shared/fused expert output sum moved into MoERunnerBase (#35949), ZeroExpertFusedMoE in new framework (#35549), compressed_tensors_moe.py split (#38960), GPTQMarlinMoEMethod reworked with MK (#37990), XPU & CUTLASS MoE relocated to fused_moe/experts/ (#40568, #40574), make_expert_params_mapping renamed (#40671), MoE LoRA refactor (#40338), and MoE DP chunking removed (#39107).
  • Performance: Optimize batch invariant with fused rms norm — 2.1% E2E latency improvement (#40413); avoid seq_lens_cpu GPU→CPU sync (#40654); cache InductorPass.hash_source (#39328); skip FX-graph deserialization on loading for faster warm compile (#40151); CUDAGraph memory profiling enabled by default for clearer startup memory accounting (#38284).
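
For convenience, here is a minimal scripted form of the install recommendation above. It only wraps the uv pip install command quoted in the highlights, assumes uv is already installed and on PATH, and is not part of the official release notes.

# install_vllm.py - thin wrapper around the recommended 'uv pip install' invocation;
# assumes 'uv' is installed and on PATH (illustrative sketch, not an official script).
import subprocess

def install_vllm(version: str = "0.20.0", torch_backend: str = "cu129") -> None:
    """Install vllm with uv, selecting the torch wheel backend for CUDA 12.9 hosts."""
    cmd = [
        "uv", "pip", "install",
        f"vllm=={version}",
        f"--torch-backend={torch_backend}",  # flag quoted in the highlights above
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    install_vllm()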

Model Support

  • New architectures: DeepSeek V4 (#40860), Hunyuan v3 preview (#40681), Granite 4.1 Vision (#40282), EXAONE-4.5 (#39388), BharatGen Param2MoE (#38000), Phi-4-reasoning-vision-15B (#38306), Cheers multimodal (#38788), telechat3 (#38510), FireRedLID (#39290), jina-reranker-v3 (#38800), Jina Embeddings v5 (#39575), Nemotron-v3 VL Nano/Super (#39747).
  • Gemma4 series: fast prefill (#38879), quantized MoE (#39045), Eagle3 (#39450), block-local attention + YaRN for Gemma3 (#39823), bidirectional vision attention for sliding layers (#40534), token-repetition fix via dynamic BOS (#39842), multimodal embedder norm-order fix (#40411), plus a string of streaming/tool-call fixes (#38844, #38909, #38992, #39114, #39679, #39027).
  • Quantization formats: GGUF support for MiniMax-M2.1 (#36965), non-standard GGUF quant types with prefix such as UD-IQ1_S (#39471).
  • Speculative decoding: Eagle3 for MiniMax-M2 (#37512), Eagle3 for Gemma4 (#39450).
  • LoRA: Qwen3ASRForConditionalGeneration (#37247), Gemma4ForConditionalGeneration (#39291, #38844), DeepSeek V3.2 (#35077), Qwen3.5 / Step3.x expert base_layer extension (#37114), MoE LoRA refactor (#40338), dual-CUDA-streams linear layer (#35721).
  • Multimodal MRoPE refresh: mm_features-based MRoPE for Ernie-4.5 VL (#39753), Keye-VL / Keye-1.5-VL (#39869), PaddleOCR-VL (#39888).
  • Other: Nano-Nemotron-VL static image inputs fix (#40724); Qwen3 MoE no longer calls gate twice (#40664); DeepSeek V2-Lite accuracy drop fix (#40673); Parakeet UX / perf enhancements (#39423); ColModernVBERT updated for latest HF checkpoint (#39307); NemotronH default mamba_ssm_cache_dtype=float32 with NemotronHNanoVLV2 auto-hook (#39032); new TP plan styles for the Transformers backend (#40467); GLM-5.1 fix on ROCm (#40763).

Engine Core

  • Model Runner V2: Full CUDA graph for eagle prefill (#37588), auto cudagraph mode/sizes based on attention backend (#32936), fused probabilistic rejection-sample kernels (#38496), config validation (#38758), eagle-draft piecewise fallback disabled (#39773), multiple prompt logprobs (#39937), prefill warmup coverage (#40746), stale sampled/draft tokens accuracy fix (#39833).
  • vLLM IR: IR skeleton + rms_norm (#33825), OOT kernel import hooks (#38807), gemma_rms_norm on IR (#39014), IR op testing/benchmarking infra (#40167).
  • torch.compile: Opaque Objects on torch 2.11 (#39286), AOT compile with batch-invariance mode (#39201), Inductor cache nested under AOT dir (#39718), split FX graph via codegen (#38657), Inductor pre-grad passes re-enabled for torch≥2.12 (#38944), strings in custom ops without compile regressions (#38123), MLA + group FP8 fusion (#38877), SiluMul activation+quant fusion refactor (#39684), donate_graph_module=True for standalone_compile (#39733), skip FX graph deserialization on loading (#40151), include Inductor & functorch configs in compile-cache key (#40627), respect TORCH_COMPILE_DISABLE at vLLM config level (#40715), disable Sequence Parallelism for piecewise compilation (#38373).
  • Attention: FA4 as default MLA prefill (#38819), head-dim 512 + paged-KV on sm90+FA4 (#38835), FA4 upstream sync (#38690), full CUDA graph for FlexAttention (#36298), FlexAttention non-causal support (#40394), unified 2D/3D triton_unified_attention (#40631), TRTLLM minimax_allreduce_rms ported (#37045), concat_mla_q half-types only (#37892), batch-invariance-aware backend auto-selection (#40193), avoid seq_lens_cpu GPU→CPU sync (#40654).
  • Helion kernels: torch.compile support for Helion kernels (#38592).
  • HMA / KV offload: GPU-side KV events for HMA (#37688), group block hashes/IDs tracked (#37109), unified memory layout for offloading workers (#37206), shutdown() on OffloadingConnector (#39182), request context passed through KV offload (#39185), sliding-window lookup (#36645), multi-group worker transfer (#38453), multi-KV-group lookup/load/store (#39401, #39402, #39403).
  • Features: NUMA binding for GPU workers (#38635), opt-in VLLM_MEDIA_CACHE media URL caching (#37123), safe request abort when FSM fails to advance (#38663), KV connector prioritized over internal registry (#38301), CUDAGraph memory profiling on by default (#38284), shared-expert overlap restored (#39222), CONFIG_REGISTRY config-class lookup fix when on-disk model_type differs (#39554), workspace-resize GPU memory leak fix (#39226), SWA/chunked-local runtime admission capped to startup pool-sizing bound (#40946).
  • Pluggable layers: Applied to llm_head / vocab embedding (#33465) and MoE layers (#33556).
  • Mamba: Stochastic rounding (#35753), different Conv state layouts (#37416), FlashInfer selective_state_update (#36162).
  • Metrics & scheduling: Labeled waiting-breakdown (capacity/deferred) metric (#38435), API server handshake simplified (#39364), mm-scheduler get_num_embed overhead reduced (#40143), request_id on FinishedRequestStats (#39710).
  • Executor: RayExecutorV2 introduced (#36836); unified engine process monitoring with Ray backend (#35862).

Hardware & Performance

  • NVIDIA: swapAB support for SM120 CUTLASS blockwise FP8 GEMM (#38325), MXFP4 W4A4 CUTLASS MoE for SM100 (#37463), TRTLLM GEN NVFP4 MoE with non-512-aligned hidden dims via weight padding (#39510), TRTLLM FP8 MoE with shuffled weights + BlockMajorK layout (#38993), fused qknorm+rope kernel on SM9.0 (#37376), tuned fused_moe config for RTX PRO 6000 Blackwell (#39183), ViT full CUDA graph for Qwen3-VL video (#38061), --enable-vit-cuda-graph for VLM examples (#40580), default max_frames_per_batch auto-infer for ViT CG video (#40445), fused FP8 output quantization into merge_attn_states (#36518), batched KV-cache swap via cuMemcpyBatchAsync (#38460), sm_110 (Jetson Thor) added to CUDA 13.0 build targets (#39233).
  • AMD ROCm: ZenCPU / AMD Zen CPU backend via zentorch (#39967), RDNA 3.5/4 device IDs (gfx1150/1151/1201) (#38455), gfx1102/gfx1103 added (#40037), MORI EP for unquantized MoE with AITER (#37529), MoRI build with AMD AINIC stack (#38371), MoRI-IO message format aligned with P2pNcclConnector and vllm-router (#39565), MORI prefill/decode API correction (#39835), AITER gemm w8a8 ptpc integration (#33773), TritonW4A16LinearKernel (#37352), asymmetric INT8 in TritonInt8ScaledMMLinearKernel (#38501), fused_silu_mul_block_quant enabled (#38817), KV-cache shuffle for paged_attention_common (#32914), MLA decode output zero-fill removed in AITER (#37539), MLA dual RMS norm fusion pass for DeepSeek/Kimi-K2 (#39242, with older-AITer guard #40386), AITER MLA + Eagle3 spec decode (#39616), DFlash on ROCm (#39703), wvSplitK FP8 path for RDNA (#37712), GPU↔NUMA-node detection (#40015), non-causal attention in ROCM_ATTN (#40176), engine-shutdown GPU memory leak fix (#38503), score-correction-bias dtype cast for DeepSeek/Kimi-K2 (#39999).
  • Intel XPU: torch 2.11 upgrade for XPU (#37947) — no longer pinned to 2.10, initial GDN attention for Qwen3-Next / Qwen3.5 (#33657), torch.compile for XPU GDN attention (#39466), XPU MXFP8 quant op (#38682), XPU MXFP4 quant op (#39857), per-channel FP8 linear (#38316), FP8 KV cache on XPU (#37731), round_int8 for Intel Triton (#38825), MoE Triton in online FP8 quantization fix (#40109), current_platform.supports_fp8() updated for TritonExperts (#40132), NIXL import on XPU fix (#40430), fusion-pattern support disabled on XPU (#39789).
  • CPU: CPU draft-model speculative decoding (#32662), CPU int8 compute mode in AWQ (#35697), head_size 512 in cpu_attn (#38676), gelu in cpu_fused_moe (#38770), OMP replacement (#36487), BF16 GELU LUT on ARM (#37469), W4A16 Autoround on CPU (#38192), CPU affinity/memory mgmt refactor (#39781), IBM Z s390x torch 2.11 builds (#39910), faster exp routine for lower-precision dtypes (#38112), inter-node pipeline parallel fix (#40150), RISC-V multiple RVV VLEN targets (#39478), RISC-V platform detection fix (#40427), exp() input clamp to prevent NaN on CPU/RISC-V (#40428).
  • TPU: tpu-inference upgraded to 0.18.0 (#40395).
  • DeepSeek / MLA / Indexer: Persistent TopK scheduler for DSV3.2 DSA decode (#37421), DSV3.2 indexer fused weights projection (#38684), Triton MLA perf fixes (#33529), indexer WK upcast to BF16 for fusion (#38928), MLA indexer uniform-decode optimization for MTP>1 (#39458), DSA + MTP IMA fix (#40772).
  • GDN / Mamba: Kernel fusion in GDN (#37813), TMA aligned with upstream FLA (#38981), GPU↔CPU syncs eliminated in prefill and spec-decode paths (#38361, #38047).

... (truncated)

Commits
  • 88d34c6 [Docker] Install numactl CLI in CUDA runtime image (#41032)
  • b816087 [DSV4] Add silu clamp limit to shared expert (#40950)
  • 84c276d [Bugfix] Cap SWA/chunked-local runtime admission to startup pool-sizing bound...
  • 5eb3657 Revert "[Frontend] Remove frontend pooling multi task support. (#37861)"
  • 4d51588 [Feat] DeepSeek V4 Rebased (#40860)
  • 32e4563 [torch.compile]: Disable Sequence Parallelism (SP) for piecewise compilation ...
  • b39c266 [KV Offload] Offload all KV blocks when doing prefill in P/D (#40346)
  • 9558f43 [Bugfix] Size FlashInfer NVLink MNNVL workspace to EP group (#40893)
  • 8cd174f [LoRA] MoE LoRA Refactor (#40338)
  • c798593 [Bugfix] Fix the DSML token leakage in DSV4/3.2 (#40806)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the Security Alerts page.

Greptile Summary

This dependabot PR bumps vllm from 0.6.3.post1 to 0.20.0 — a very large jump spanning many minor versions — but does not update the other pinned dependencies that vllm 0.20.0 requires.

  • torch==2.4.0+cu121 conflicts with vllm 0.20.0's requirement of PyTorch 2.11 (and its default CUDA 13.0); torchaudio and torchvision are similarly stale.
  • transformers is pinned to a v4 release while vllm 0.20.0 now requires transformers>=5, which will cause import errors or silent misbehavior at runtime.
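
As a quick illustration of the conflict described above (not an authoritative resolver run), the sketch below compares the installed torch and transformers against the requirement ranges quoted in this PR: PyTorch 2.11 and transformers>=5 for vllm 0.20.0. The exact specifier strings are assumptions for illustration, not vllm's published metadata, and the script assumes both packages plus the third-party packaging library are importable in the active environment.

# check_pins.py - illustrative only; specifiers assumed from the vLLM 0.20.0 notes above
from importlib.metadata import version          # reads installed package versions
from packaging.specifiers import SpecifierSet   # third-party 'packaging' package
from packaging.version import Version

# Requirement ranges as described in this PR; the precise bounds are assumptions.
assumed_requirements = {
    "torch": SpecifierSet("~=2.11.0"),       # vllm 0.20.0 ships on torch 2.11
    "transformers": SpecifierSet(">=5,<6"),  # vllm 0.20.0 requires transformers>=5
}

for name, spec in assumed_requirements.items():
    installed = Version(version(name))
    verdict = "ok" if spec.contains(installed) else "CONFLICT"
    print(f"{name}: installed {installed}, needs {spec} -> {verdict}")

Run inside the environment pinned by this requirements.txt (torch 2.4.0+cu121, transformers 4.46.2), both lines report CONFLICT.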

Confidence Score: 2/5

Not safe to merge — the updated vllm version is incompatible with the still-pinned torch and transformers versions, making the environment non-functional.

Two P1 findings: torch is pinned to 2.4.0 while vllm 0.20.0 requires 2.11, and transformers is pinned to v4 while vllm 0.20.0 requires v5. Both will prevent a working installation.

experiments/agentcompany/openhands/requirements.txt — torch, torchaudio, torchvision, transformers, tokenizers, huggingface-hub, xformers, and triton all need version bumps to match vllm 0.20.0's requirements.

Important Files Changed

Filename: experiments/agentcompany/openhands/requirements.txt
Overview: vllm bumped from 0.6.3.post1 to 0.20.0, but torch (2.4.0+cu121 vs required 2.11), transformers (4.46.2 vs required >=5), xformers, torchaudio, and torchvision are not updated to compatible versions, making the environment unresolvable.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[vllm 0.20.0] -->|requires| B[torch >= 2.11\nCUDA 13.0]
    A -->|requires| C[transformers >= 5]
    D[requirements.txt] -->|pins| E[torch 2.4.0+cu121 ❌]
    D -->|pins| F[transformers v4 ❌]
    D -->|pins| G[xformers 0.0.27.post2\ntied to torch 2.4]
    B -.->|conflict| E
    C -.->|conflict| F
    B -.->|conflict| G

Comments Outside Diff (2)

  1. experiments/agentcompany/openhands/requirements.txt, line 177

    P1 PyTorch version incompatible with vllm 0.20.0

    vllm 0.20.0 requires PyTorch 2.11 (and ships with CUDA 13.0 by default), but this file pins torch==2.4.0+cu121. Installing these two together will fail pip's dependency resolver or produce a non-functional environment where vllm's CUDA kernels miscompile against the wrong torch ABI. The same applies to torchaudio==2.4.0+cu121 and torchvision==0.19.0, both of which are tied to torch 2.4.


  2. experiments/agentcompany/openhands/requirements.txt, line 183

    P1 transformers version incompatible with vllm 0.20.0

    vllm 0.20.0 now requires transformers>=5 (v5), but this file pins transformers to a v4 release. vllm's model loading code targeting the v5 API will raise import errors or produce incorrect results at runtime. The pinned tokenizers and huggingface-hub packages are also tied to the v4 ecosystem and will need updating in tandem.



Reviews (1): Last reviewed commit: "Bump vllm in /experiments/agentcompany/o..."

Bumps [vllm](https://github.com/vllm-project/vllm) from 0.6.3.post1 to 0.20.0.
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.6.3.post1...v0.20.0)

---
updated-dependencies:
- dependency-name: vllm
  dependency-version: 0.20.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update python code) labels on May 6, 2026
@socket-security

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

Diff: Updated
Package: vllm@0.6.3.post1 ⏵ 0.20.0
Supply Chain Security: 76 (-8)
Vulnerability: 100 (+75)
Quality: 100
Maintenance: 100
License: 100

View full report

@socket-security

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action: Warn | Severity: Medium
Alert: Potential vulnerability: pypi vllm with risk level "medium"

Location: Package overview

From: experiments/agentcompany/openhands/requirements.txt ▸ pypi/vllm@0.20.0

ℹ Read more on: This package | This alert | Navigating potential vulnerabilities

Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: It is advisable to proceed with caution. Engage in a review of the package's security aspects and consider reaching out to the package maintainer for the latest information or patches.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore pypi/vllm@0.20.0. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.


View full report


Labels

dependencies (Pull requests that update a dependency file), python (Pull requests that update python code)
