
Conversation

@RichardWooSJTU
Collaborator

Motivation

This PR adds support for loading DeepGEMM's statically quantized ue8m0 scales through v0_loader, which the new DeepGEMM quantization format requires. The changes enable proper handling of ue8m0 scale tensors during model loading.

Modifications

  1. Added conditional logic in fused_moe_deepgemm_backend.py to handle ue8m0 scale tensors differently:

    • Changed tensor data assignment from set_value() to direct .data assignment
    • Added ue8m0-specific scale shape calculation
  2. Modified fused_moe_triton_backend.py to:

    • Support the int32 dtype for ue8m0 scales
    • Calculate different scale shapes based on the ue8m0 flag
  3. Updated block_wise_fp8.py to:

    • Add ue8m0-specific scale shape calculation
    • Support the int32 dtype for scale parameters
    • Add ue8m0-specific quantization and scale transformation functions
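For context, ue8m0 stores each scale as an 8-bit biased exponent with no mantissa bits, which is why the packed scale tensor can use an int32 dtype (four exponents per element). The sketch below is a minimal NumPy illustration of the round-up quantization and the int32 packing; the exact layout and helper names used by DeepGEMM and this PR are assumptions, not the actual implementation.

```python
import numpy as np

def scales_to_ue8m0(scales: np.ndarray) -> np.ndarray:
    """Round positive fp32 scales up to powers of two and keep only the
    biased 8-bit exponent (ue8m0: 8 exponent bits, 0 mantissa bits)."""
    exponents = np.ceil(np.log2(scales)).astype(np.int32)
    return (exponents + 127).astype(np.uint8)  # IEEE-754-style bias of 127

def ue8m0_to_scales(exp_u8: np.ndarray) -> np.ndarray:
    """Recover the power-of-two scales from the biased exponents."""
    return np.exp2(exp_u8.astype(np.int32) - 127).astype(np.float32)

def pack_ue8m0_int32(exp_u8: np.ndarray) -> np.ndarray:
    """Pack four uint8 exponents into one int32 element (assumed layout),
    matching the int32 dtype the loader now accepts for ue8m0 scales."""
    assert exp_u8.size % 4 == 0
    return np.ascontiguousarray(exp_u8.reshape(-1, 4)).view(np.int32).ravel()
```

Rounding up (rather than to nearest) keeps every reconstructed scale at least as large as the original, so dequantized values never overflow the target range.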

Usage or Command

To use this feature, set quant_config.deepgemm_scale_ue8m0=True when loading models with DeepGEMM ue8m0 quantized weights.
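As a hedged sketch of opting in: only the flag name `deepgemm_scale_ue8m0` comes from this PR; the config class below is an illustrative stand-in, not the project's actual API.

```python
# Hypothetical stand-in for the quantization config object; only the
# `deepgemm_scale_ue8m0` flag name is taken from this PR's description.
class QuantConfig:
    def __init__(self):
        self.deepgemm_scale_ue8m0 = False  # default: legacy fp32 scales

quant_config = QuantConfig()
quant_config.deepgemm_scale_ue8m0 = True  # opt in to ue8m0-packed scales
```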

Accuracy Tests

Accuracy tests should be performed using the standard model testing pipeline, with models quantized in the new ue8m0 format.

Checklist

  • Add at least one tag in the PR title.
    • Tag: [Quantization]
  • Format your code; run pre-commit before committing.
  • Add unit tests. (Will be added in a follow-up PR)
  • Provide accuracy results. (Will be provided after testing)
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Feb 10, 2026

Thanks for your contribution!

@codecov-commenter

Codecov Report

❌ Patch coverage is 24.24242% with 25 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@62ac1e5). Learn more about missing BASE report.

Files with missing lines Patch % Lines
..._executor/layers/moe/fused_moe_deepgemm_backend.py 0.00% 10 Missing ⚠️
...del_executor/layers/quantization/block_wise_fp8.py 30.76% 6 Missing and 3 partials ⚠️
...el_executor/layers/moe/fused_moe_triton_backend.py 40.00% 4 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6433   +/-   ##
==========================================
  Coverage           ?   68.46%           
==========================================
  Files              ?      391           
  Lines              ?    52768           
  Branches           ?     8220           
==========================================
  Hits               ?    36126           
  Misses             ?    14005           
  Partials           ?     2637           
Flag Coverage Δ
GPU 68.46% <24.24%> (?)

Flags with carried forward coverage won't be shown.
