
Metal backend: test modules #17076

Merged
manuelcandales merged 7 commits into main from gh/manuelcandales/151/head on Feb 5, 2026

Conversation

manuelcandales (Contributor) commented Jan 31, 2026

This pull request adds support for building and testing the Metal backend modules on macOS, including automation in the CI workflow and a dedicated test runner script. The main changes are a reusable shell script for building and running Metal backend tests, and a new GitHub Actions job that automates module testing for the Metal backend.

CI/CD and Testing Automation:

  • Added a new test-metal-backend-modules job to the .github/workflows/metal.yml workflow, which runs on a macOS M2 runner, sets up the environment, builds the Metal runtime, and runs Python unit tests for the Metal backend modules.

Tooling and Scripts:

  • Introduced a new script backends/apple/metal/tests/run_metal_test.sh that provides commands to build the Metal runtime, check if it is built, and run inference with given model files. This script is now used in the CI workflow to automate Metal backend testing.
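
(For illustration only: a minimal Python sketch of how a test could drive such a script via subprocess. The build and run subcommand names below are assumptions and may not match the script's actual interface.)

import subprocess
from pathlib import Path

TESTS_DIR = Path("backends/apple/metal/tests")
RUN_METAL_TEST_SCRIPT = TESTS_DIR / "run_metal_test.sh"

def build_metal_runtime() -> None:
    # Hypothetical "build" subcommand; the real script may name it differently.
    subprocess.run(["bash", str(RUN_METAL_TEST_SCRIPT), "build"], check=True)

def run_inference(pte_path: Path) -> None:
    # Hypothetical invocation that runs inference on an exported .pte model file.
    subprocess.run(["bash", str(RUN_METAL_TEST_SCRIPT), "run", str(pte_path)], check=True)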

Copilot AI review requested due to automatic review settings January 31, 2026 00:25
pytorch-bot bot commented Jan 31, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17076

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 2 Unrelated Failures

As of commit 646b4b3 with merge base 267a59d:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Jan 31, 2026. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
manuelcandales added a commit that referenced this pull request Jan 31, 2026
ghstack-source-id: c8aec0d
ghstack-comment-id: 3826704927
Pull-Request: #17076
github-actions bot commented:

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI left a comment


Pull request overview

This PR adds comprehensive test coverage for the Metal backend by introducing module-level export and runtime execution tests.

Changes:

  • Adds test_modules.py with 27 test modules covering operations like matrix multiplication, convolution, attention (SDPA), normalization, and composite blocks
  • Adds run_metal_test.sh build script to compile Metal runtime and run inference tests
  • Extends CI workflow with test-metal-modules job that builds Metal runtime and runs module tests

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 12 comments.

File descriptions:

  • backends/apple/metal/tests/test_modules.py: new comprehensive test suite with 27 model modules tested across float32 and bfloat16, including export validation and runtime output consistency checks
  • backends/apple/metal/tests/run_metal_test.sh: new bash script to build the Metal runtime with CMake and run inference with the executor_runner binary
  • .github/workflows/metal.yml: adds a new CI job to build the Metal runtime and run module tests on a macos-m2-stable runner
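
For illustration, here is a minimal sketch of the export-validation and output-consistency pattern described above, using only public torch.export APIs. The Metal-specific lowering to a .pte and the runtime execution step are omitted, and the module, shapes, and tolerances are illustrative assumptions rather than the PR's actual code:

import torch
from torch import nn

class LinearBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

def check_module(dtype: torch.dtype, atol: float, rtol: float) -> None:
    model = LinearBlock().eval().to(dtype)
    example_inputs = (torch.randn(2, 16, dtype=dtype),)

    # Export validation: the module must export cleanly.
    exported = torch.export.export(model, example_inputs)

    # Output consistency: exported graph vs. eager execution.
    eager_out = model(*example_inputs)
    exported_out = exported.module()(*example_inputs)
    torch.testing.assert_close(exported_out, eager_out, atol=atol, rtol=rtol)

# Assumed per-dtype tolerances; the PR's actual thresholds may differ.
check_module(torch.float32, atol=1e-6, rtol=1e-6)
check_module(torch.bfloat16, atol=1e-2, rtol=1e-2)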


manuelcandales added a commit that referenced this pull request Jan 31, 2026
ghstack-source-id: e853016
ghstack-comment-id: 3826704927
Pull-Request: #17076
Copilot AI review requested due to automatic review settings January 31, 2026 03:01
manuelcandales added a commit that referenced this pull request Jan 31, 2026
ghstack-source-id: 8ff1636
ghstack-comment-id: 3826704927
Pull-Request: #17076

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.



EXECUTOR_RUNNER = BUILD_DIR / "executor_runner"
RUN_METAL_TEST_SCRIPT = TESTS_DIR / "run_metal_test.sh"

# Test output directory - use current working directory in CI for reliable write access

Copilot AI Jan 31, 2026


The comment mentions 'aoti_debug_data' directory but doesn't explain why this specific directory name is used or what 'aoti' stands for (Ahead-Of-Time Inference). Consider adding a comment explaining that this directory name is used for Metal backend test outputs to maintain consistency with AOTI conventions.

Suggested change
-# Test output directory - use current working directory in CI for reliable write access
+# Test output directory - use current working directory in CI for reliable write access.
+# We use the 'aoti_debug_data' directory name here to keep Metal backend test outputs
+# consistent with Ahead-Of-Time Inference (AOTI) debug data conventions used elsewhere.

manuelcandales added a commit that referenced this pull request Jan 31, 2026
ghstack-source-id: c36fc89
ghstack-comment-id: 3826704927
Pull-Request: #17076
mergennachin (Contributor) left a comment


What are your thoughts on reusing Backend Test Harness?

https://github.com/pytorch/executorch/tree/main/backends/test/suite

test_modules.py looks almost like this:

https://github.com/pytorch/executorch/tree/main/backends/test/suite/operators

I think we should try to reuse as much as possible and, if possible, contribute back to the test harness.

cc @GregoryComer

manuelcandales (Contributor, Author):

> What are your thoughts on reusing Backend Test Harness?

Every backend has its own tests, independent of our more comprehensive testing frameworks (e.g. xnnpack, vulkan, qualcomm, arm).

The intent of frameworks like FACTO or the Backend Test Harness is to find unknown issues on mature operators/backends, where a certain level of coverage and correctness is expected.

On the other hand, the ability to add specific tests remains useful as we develop and want to verify that a PR intended to enable a certain feature does in fact provide the desired functionality.

It also sets a minimum bar that the backend/operator should pass, which will be enforced in CI.

mergennachin (Contributor):

Discussed offline with @manuelcandales

Agreed that we need both approaches (backend-specific unit testing and the backend test harness).

As discussed, let's remove unnecessary tests that aren't testing what you're developing right now and could be covered by the backend test harness in the future.

Copilot AI review requested due to automatic review settings February 2, 2026 21:27

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.



Comment on lines +47 to +56
CMAKE_ARGS="-DEXECUTORCH_BUILD_METAL=ON \
-DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
-DEXECUTORCH_BUILD_EXECUTOR_RUNNER=ON \
-DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
-DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \
-DAOTI_METAL=ON \
-DEXECUTORCH_LOG_LEVEL=Info \
-DCMAKE_BUILD_TYPE=Release"
Contributor:

why not use preset? cmake --workflow --preset llm-metal-stats

Contributor:

If this is not LLM-specific, I also strongly recommend adding a base preset metal-release and letting llm-metal-stats extend it.

echo "Running inference..."
echo " PTE: $pte_path"

"$EXECUTOR_RUNNER" --model_path "$pte_path"
Contributor:

It would be much better to add pybind and run the tests in a pure Python environment. That way you don't need this script.

manuelcandales (Author):

I am using this script in CI as well (for building and updating ao).
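
For reference, here is a rough sketch of the pure-Python flow suggested above, assuming the ExecuTorch pybindings are built with the Metal backend compiled in. The module path, method name, and file path below are assumptions and should be checked against the installed build:

import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

# Load a previously exported .pte and run it in-process,
# so no external executor_runner binary is needed.
module = _load_for_executorch("/path/to/model.pte")  # hypothetical path
inputs = (torch.randn(2, 16),)  # must match the model's input signature
outputs = module.forward(inputs)
print(outputs[0])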


Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



value = torch.as_strided(self.value, size=self.v_size, stride=self.v_stride)
attn_mask = None
if self.attn_mask_size:
    attn_mask = torch.zeros(self.attn_mask_size)

Copilot AI Feb 5, 2026


The attention mask is created with torch.zeros(self.attn_mask_size) which will use the default dtype (float32), even when the model is converted to bfloat16 via model.to(dtype). Since the mask is created dynamically in the forward method and not stored as a parameter, it won't be converted by model.to(). This could lead to dtype mismatches when passing the mask to scaled_dot_product_attention with query/key/value tensors in bfloat16. Consider either storing the mask as a parameter that gets converted with the model, or using the dtype of the query tensor to ensure consistency.

Suggested change
-    attn_mask = torch.zeros(self.attn_mask_size)
+    attn_mask = torch.zeros(
+        self.attn_mask_size,
+        dtype=query.dtype,
+        device=query.device,
+    )

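
To make the dtype concern above concrete, here is a standalone sketch (not from the PR) showing that tensors created inside forward keep the default dtype even after the module is converted with model.to(torch.bfloat16):

import torch
from torch import nn

class MaskedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(4, 4))  # converted by model.to(dtype)

    def forward(self, x):
        mask = torch.zeros(4, 4)  # created at call time: stays float32
        return x @ self.weight + mask

model = MaskedModel().to(torch.bfloat16)
x = torch.ones(4, 4, dtype=torch.bfloat16)
out = model(x)
# The parameter is bfloat16, but the mask is float32, so the result is silently
# promoted to float32, which is the kind of mismatch the comment above warns about.
print(model.weight.dtype, out.dtype)  # torch.bfloat16 torch.float32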
# Model parameters are initialized with their default dtype (typically float32) when the
# model class is instantiated. The parameters are then converted to the target dtype using
# model.to(dtype). For example:
# - nn.Parameter(torch.arange(20, dtype=torch.get_default_dtype()) creates float32 parameters

Copilot AI Feb 5, 2026


The comment on line 96 has a typo: "nn.Parameter(torch.arange(20, dtype=torch.get_default_dtype())" is missing a closing parenthesis. This is a documentation/comment issue that could confuse readers trying to understand the parameter initialization pattern.

Suggested change
-# - nn.Parameter(torch.arange(20, dtype=torch.get_default_dtype()) creates float32 parameters
+# - nn.Parameter(torch.arange(20, dtype=torch.get_default_dtype())) creates float32 parameters

manuelcandales merged commit 8e1a08c into main on Feb 5, 2026
192 of 196 checks passed
manuelcandales deleted the gh/manuelcandales/151/head branch on February 5, 2026 16:48
