Cortex-M quantizer: Use is_int8_activations helper in remaining checkers #17282
Conversation
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17282

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 2 Cancelled Jobs, 2 Unrelated Failures

As of commit aadfb88 with merge base 02fe862:
- NEW FAILURES - The following jobs have failed:
- CANCELLED JOBS - The following jobs were cancelled. Please retry:
- FLAKY - The following job failed but was likely due to flakiness present on trunk:
- BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
This PR refactors Cortex-M quantizer pattern checkers to consistently use the shared is_int8_activations helper when validating activation dtypes, reducing duplicated logic in quantization-config checks.
Changes:
- Replaced inline int8 dtype checks in `CortexMAddMulCheck.check_quantization_config` with `cls.is_int8_activations(...)`.
- Replaced inline int8 dtype checks in `CortexMLinearCheck.check_quantization_config` with `cls.is_int8_activations(...)` (sketched below).
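To illustrate the shape of the change, here is a minimal, hypothetical sketch. The names `CortexMAddMulCheck`, `check_quantization_config`, and `is_int8_activations` come from this PR, but the base class, the helper's signature, and the config/qspec stand-ins below are assumptions for illustration, not the actual ExecuTorch implementation.

```python
# Hypothetical sketch of the refactor, not the ExecuTorch sources: an inline
# activation-dtype comparison is replaced by a shared classmethod helper.
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class FakeQSpec:
    """Stand-in for an activation quantization spec (assumed attribute name)."""
    dtype: torch.dtype


@dataclass
class FakeQuantizationConfig:
    """Stand-in for a quantization config with input/output activation specs."""
    input_activation: Optional[FakeQSpec]
    output_activation: Optional[FakeQSpec]


class CortexMCheckBase:
    @classmethod
    def is_int8_activations(cls, *qspecs) -> bool:
        # Shared helper: every provided activation spec must target int8.
        return all(q is not None and q.dtype == torch.int8 for q in qspecs)


class CortexMAddMulCheck(CortexMCheckBase):
    @classmethod
    def check_quantization_config(cls, config: FakeQuantizationConfig) -> bool:
        # Before the refactor this method carried its own inline comparison,
        # e.g. `config.input_activation.dtype == torch.int8 and ...`;
        # after the refactor it delegates to the shared helper.
        return cls.is_int8_activations(config.input_activation, config.output_activation)


if __name__ == "__main__":
    cfg = FakeQuantizationConfig(
        input_activation=FakeQSpec(torch.int8),
        output_activation=FakeQSpec(torch.int8),
    )
    print(CortexMAddMulCheck.check_quantization_config(cfg))  # True
```

Centralizing the dtype check in one helper means any future change to what counts as a valid int8 activation (for example, handling a missing qspec) only has to be made in one place.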
No description provided.