Adds QAT ConvBN fuse pass to utils #17599

Open

JakeStevens wants to merge 3 commits into pytorch:main from JakeStevens:export-D93904683

Conversation

@JakeStevens
Contributor

Summary:
An earlier PR adds support for a pass that quantizes the bias resulting from QAT ConvBN fusion when the conv has no initial bias.

This PR adds it to the NXP calibrate_and_quantize method.

Differential Revision: D93904683
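
For orientation, here is a minimal sketch of where such a pass could slot into a PT2E-style calibrate-and-quantize flow. The function shape, parameter names, and the `bias_pass` argument are assumptions for illustration; the actual signature and internals of the NXP `calibrate_and_quantize` method are not shown on this page.

```python
import torch
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e

def calibrate_and_quantize(model, example_inputs, calibration_data, quantizer, bias_pass):
    """Hypothetical flow; `bias_pass` would be QuantizeFusedConvBnBiasAtenPass().

    QAT flows would use prepare_qat_pt2e plus training; this sketch shows the
    simpler calibration path only.
    """
    exported = torch.export.export(model, example_inputs).module()
    prepared = prepare_pt2e(exported, quantizer)   # attach observers
    for sample in calibration_data:                # run calibration data
        prepared(*sample)
    converted = convert_pt2e(prepared)             # insert quantize/dequantize nodes
    # New step from this PR: quantize the bias that QAT ConvBN fusion
    # introduced after observers were attached, so it no longer stays float.
    bias_pass(converted)
    return converted
```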

Summary:

When performing QAT with a conv layer (bias=False) followed by batch norm, the fusion process introduces a bias after observers are attached, so the bias remains unquantized. These passes find such biases, compute the correct scale from the input and weight dequantize nodes, and insert proper quantize/dequantize nodes for the bias, as sketched after the list of variants below.

Two pass variants are provided:
  - QuantizeFusedConvBnBiasPass (ExportPass): operates on edge dialect graphs after to_edge()
  - QuantizeFusedConvBnBiasAtenPass (PassBase): operates on aten dialect graphs, supporting both plain GraphModules (get_attr nodes) and ExportedPrograms (placeholder nodes)
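
Below is a minimal sketch of the graph rewrite on an aten-dialect GraphModule, assuming the standard conv bias rule (bias scale = input scale × weight scale, zero point 0, int32). The helper name `quantize_fused_convbn_bias` and the restriction to `aten.conv2d` with scalar (per-tensor) scales are simplifications for illustration; the real passes also handle the edge dialect, ExportedProgram placeholders, and other variants.

```python
import torch
import torch.ao.quantization.fx._decomposed  # noqa: F401  (registers quantized_decomposed ops)
from torch.fx import GraphModule, Node

Q = torch.ops.quantized_decomposed.quantize_per_tensor.default
DQ = torch.ops.quantized_decomposed.dequantize_per_tensor.default

def quantize_fused_convbn_bias(gm: GraphModule) -> GraphModule:
    for conv in gm.graph.nodes:
        if conv.target != torch.ops.aten.conv2d.default or len(conv.args) < 3:
            continue
        inp, weight, bias = conv.args[:3]
        if not all(isinstance(a, Node) for a in (inp, weight, bias)):
            continue
        # Act only when input and weight are fed by dequantize nodes but the
        # bias (introduced by ConvBN fusion after observation) is not.
        if inp.target is not DQ or weight.target is not DQ or bias.target is DQ:
            continue
        # Standard conv bias rule: s_bias = s_input * s_weight, zero point 0,
        # quantized to int32. Assumes scalar scales stored as constant args.
        bias_scale = inp.args[1] * weight.args[1]
        int32_min, int32_max = -(2**31), 2**31 - 1
        with gm.graph.inserting_before(conv):
            q = gm.graph.call_function(
                Q, (bias, bias_scale, 0, int32_min, int32_max, torch.int32)
            )
            dq = gm.graph.call_function(
                DQ, (q, bias_scale, 0, int32_min, int32_max, torch.int32)
            )
        conv.replace_input_with(bias, dq)
    gm.graph.lint()
    gm.recompile()
    return gm
```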

Differential Revision: D92733079

Summary:

Make the NXP test_batch_norm_fusion tests compatible with the BUCK build system. The tflite import in executors.py is made optional, since tensorflow/tflite_runtime are not available in the BUCK environment. The enabled tests do not rely on tflite-backed functionality.
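
A sketch of the optional-import pattern described here; the exact guard in executors.py may differ in detail, and `require_tflite` is an illustrative helper name.

```python
# Make tflite optional so the module imports cleanly in environments
# (e.g. BUCK) where neither tflite_runtime nor tensorflow is installed.
try:
    import tflite_runtime.interpreter as tflite
except ImportError:
    try:
        from tensorflow import lite as tflite  # fall back to full TensorFlow
    except ImportError:
        tflite = None  # tflite-dependent executors become unavailable

def require_tflite():
    if tflite is None:
        raise RuntimeError(
            "tflite_runtime / tensorflow not installed; "
            "tflite-based execution is disabled in this environment."
        )
```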

Differential Revision: D93880277

Summary:
An earlier PR adds support for a pass that quantizes the bias resulting from QAT ConvBN fusion when the conv has no initial bias.

This PR adds it to the NXP calibrate_and_quantize method.

Differential Revision: D93904683
@pytorch-bot

pytorch-bot bot commented Feb 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17599

Note: Links to docs will display an error until the docs builds have been completed.

❌ 12 New Failures, 1 Unrelated Failure

As of commit 4eb68eb with merge base 89341d7:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Feb 20, 2026
@meta-codesync
Contributor

meta-codesync bot commented Feb 20, 2026

@JakeStevens has exported this pull request. If you are a Meta employee, you can view the originating Diff in D93904683.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


Labels

CLA Signed, fb-exported, meta-exported
