[wip] Support multimethod in export_llama_lib #17231

lucylq wants to merge 2 commits into gh/lucylq/134/base from
Conversation
TODO: add CI test.

Differential Revision: [D92315602](https://our.internmc.facebook.com/intern/diff/D92315602/)

[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17231
Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 4 Unrelated Failures as of commit 1de0598 with merge base eac0673:

- NEW FAILURES: the following jobs have failed.
- FLAKY: the following jobs failed but were likely due to flakiness present on trunk.
- BROKEN TRUNK: the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull Request resolved: #17231

TODO: add CI test.

Note: multimethod export is currently limited to:
- xnnpack or portable lib
- only lora (does not support arbitrary nn.Modules in each method)
- if quant is enabled, lora models must share quant schemes at source transformation time
- no pt2e quant, as each model could have slightly different results after calibration

ghstack-source-id: 338339885
@exported-using-ghexport

Differential Revision: [D92315602](https://our.internmc.facebook.com/intern/diff/D92315602/)
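The backend restriction described above ("xnnpack or portable lib") can be sketched as a validation check. This is a hedged illustration only: the `BackendFlags` stub and its field names are hypothetical stand-ins, not the real `LlmConfig` schema used in `export_llama_lib.py`.

```python
from dataclasses import dataclass


@dataclass
class BackendFlags:
    """Hypothetical stand-in for the non-XNNPACK backend flags on LlmConfig."""

    coreml: bool = False
    vulkan: bool = False
    qnn: bool = False
    mps: bool = False
    openvino: bool = False


def validate_multimethod_backends(backends: BackendFlags) -> None:
    """Raise if any backend other than XNNPACK/portable is enabled."""
    enabled = [name for name, on in vars(backends).items() if on]
    if enabled:
        raise ValueError(
            "Multimethod export only supports XNNPACK backend or portable ops."
            f" Please disable other backends ({', '.join(enabled)})."
        )
```

With all flags off the check passes silently; enabling, say, `coreml` raises a `ValueError` naming the offending backend.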
Pull request overview
This pull request adds multimethod export support to the Llama export library, enabling the export of multiple methods (base model and LoRA variants) into a single .pte file. This is part of a stack of PRs (#17228-#17231) that collectively add multimethod and LoRA support to the ExecuTorch runtime.
Changes:
- Added `_export_llama_multimethod()` function to handle exporting multiple methods to a single .pte file
- Added helper functions `_get_xnnpack_partitioners()` and `_get_output_filename()` to support multimethod export
- Added validation logic to ensure multimethod export only works with the XNNPACK backend or portable ops
- Added a configuration file for Qwen3 multimethod export with LoRA
- Added a comprehensive test script for validating multimethod export functionality
- Updated build dependencies to include required tokenizer and weight-conversion modules
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `examples/models/llama/export_llama_lib.py` | Core implementation of multimethod export logic, including validation, helper functions, and the main export function |
| `examples/models/qwen3/config/qwen3_multimethod.yaml` | Configuration file demonstrating multimethod export with LoRA and base methods using environment variable interpolation |
| `examples/models/llama/BUCK` | Added convert_weights.py to export library sources for proper dependency resolution |
| `examples/models/llama/runner/targets.bzl` | Added the regex_lookahead tokenizer dependency for enhanced tokenizer support |
| `.ci/scripts/test_lora_multimethod.sh` | Comprehensive test script validating both LoRA and base method execution with expected-output verification |
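The review only names `qwen3_multimethod.yaml` and its use of environment variable interpolation; the file's contents are not shown here. A hypothetical sketch, assuming omegaconf-style `${oc.env:...}` interpolation and purely illustrative field names:

```yaml
# Illustrative sketch only -- field names are hypothetical, not the actual
# qwen3_multimethod.yaml schema. Shows env-var interpolation for paths.
base:
  checkpoint: ${oc.env:QWEN3_CHECKPOINT}
  adapter_checkpoint: ${oc.env:QWEN3_LORA_ADAPTER}
backend:
  xnnpack:
    enabled: True
```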
```python
"Multimethod export only supports XNNPACK backend or portable ops"
"Please disable other backends (coreml, vulkan, qnn, mps, openvino)."
```

Missing period between sentences in the error message. It should read: "Multimethod export only supports XNNPACK backend or portable ops. Please disable other backends..."

Suggested change:

```python
"Multimethod export only supports XNNPACK backend or portable ops."
" Please disable other backends (coreml, vulkan, qnn, mps, openvino)."
```
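A minimal demonstration of why the suggestion works: adjacent string literals in Python are concatenated at compile time, so the period and the leading space must live inside the literals themselves.

```python
# Adjacent string literals are joined with no separator inserted between
# them, so punctuation and spacing must be part of the literals.
msg = (
    "Multimethod export only supports XNNPACK backend or portable ops."
    " Please disable other backends (coreml, vulkan, qnn, mps, openvino)."
)
print(msg)  # the two sentences are now separated by ". "
```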
```python
    return builder


def _get_xnnpack_partitioners(llm_config: LlmConfig) -> Optional[List]:
```

The return type annotation is too generic. The function returns either a list of partitioners or None, so the type should be `Optional[List[Partitioner]]` instead of `Optional[List]`. This would provide better type safety and match the usage pattern where partitioners are passed to `to_edge_transform_and_lower`.
```python
    method_to_program,
    partitioner=partitioners,
    compile_config=edge_config,
    constant_methods=first_builder.metadata,
```

The `generate_etrecord` configuration option is not being passed to the `to_edge_transform_and_lower` call. This means that ETRecord generation will not work with multimethod export even if `llm_config.debug.generate_etrecord` is set to `True`. Add `generate_etrecord=llm_config.debug.generate_etrecord,` to the function call.

Suggested change:

```python
    constant_methods=first_builder.metadata,
    generate_etrecord=llm_config.debug.generate_etrecord,
```
```shell
}

# Download LoRA adapter.
python -m pip install -q huggingface_hub
```

This script installs the huggingface_hub package directly from PyPI at runtime without any version pinning or integrity verification, which introduces a supply-chain risk: if the package (or the index it is fetched from) is compromised, arbitrary code could execute in the CI environment with access to repository data and any configured secrets. To reduce this risk, pin huggingface_hub to a known-good version (e.g., via a requirements/constraints file) and, where possible, enable hash-based verification or use a vetted internal package mirror instead of an unpinned direct install.
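One way the pinning could look, sketched below. The version number is an illustrative placeholder, not a vetted release; the install command is left commented out so the sketch only shows the shape of the fix.

```shell
# Write a constraints file so the resolved version is fixed ahead of time.
# "0.24.6" is a placeholder version, not a recommendation.
cat > hf-constraints.txt <<'EOF'
huggingface_hub==0.24.6
EOF

# In CI, install against the constraints file (optionally with
# --require-hashes and a hash-annotated requirements file for integrity):
# python -m pip install -q -c hf-constraints.txt huggingface_hub

grep -q 'huggingface_hub==' hf-constraints.txt && echo "pinned"
```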
Stack from ghstack (oldest at bottom):
TODO: add CI test.
Differential Revision: D92315602