
Update excluded modules for Qwen3.5 dense PTQ #1284

Open

amukkara wants to merge 1 commit into NVIDIA:main from amukkara:qwen3.5-fix

Conversation


@amukkara amukkara commented Apr 17, 2026

What does this PR do?

Type of change: Bug fix

For Qwen3.5 dense models, the in_proj modules in linear attention must be left unquantized.
Example in Qwen3.5-27B-FP8: https://huggingface.co/Qwen/Qwen3.5-27B-FP8/blob/main/config.json#L148
This PR updates _default_disabled_quantizer_cfg so that all Qwen3.5 dense models are quantized with the same exclusion pattern.
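The change itself is a two-line addition of deny-all pattern entries. The sketch below is illustrative, not the actual modelopt source: it assumes the `{"enable": False}` override style used by modelopt quantizer configs, and uses the stdlib `fnmatch` as a stand-in for the library's real pattern matching.

```python
from fnmatch import fnmatch

# Illustrative fragment (assumed shape, mirrors the patterns named in the PR):
# wildcard patterns mapped to quantizer attribute overrides; {"enable": False}
# leaves any matching quantizer disabled.
_default_disabled_quantizer_cfg = {
    "*linear_attn.in_proj_a*": {"enable": False},
    "*linear_attn.in_proj_b*": {"enable": False},
}

def is_excluded(quantizer_name: str) -> bool:
    """Hypothetical helper: True if any deny-all pattern matches the name."""
    return any(
        fnmatch(quantizer_name, pattern) and not cfg["enable"]
        for pattern, cfg in _default_disabled_quantizer_cfg.items()
    )

print(is_excluded("model.layers.0.linear_attn.in_proj_a.weight_quantizer"))  # True
print(is_excluded("model.layers.0.self_attn.q_proj.weight_quantizer"))       # False
```

With these entries in place, every Qwen3.5 dense checkpoint gets the same exclusion behavior without per-model overrides.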

Usage

bash examples/llm_ptq/scripts/huggingface_example.sh --model Qwen/Qwen3.5-4B --quant fp8 --tasks quant

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅
  • Did you write any new necessary tests?: ❌
  • Did you update Changelog?: ❌

Additional Information

Summary by CodeRabbit

  • Bug Fixes
    • Updated quantization configuration to exclude specific linear attention projection layers from quantization, improving model accuracy.


copy-pr-bot bot commented Apr 17, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

coderabbitai bot commented Apr 17, 2026

📝 Walkthrough


Two new configuration entries were added to the default disabled quantizer configuration to prevent quantization of linear attention projection layer variants. These entries target quantizer names matching specific patterns for in_proj_a and in_proj_b sub-layers.

Changes

  • Quantization Configuration (modelopt/torch/quantization/config.py): Added two deny-all entries to _default_disabled_quantizer_cfg to disable quantization for *linear_attn.in_proj_a* and *linear_attn.in_proj_b* quantizer patterns.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Update excluded modules for Qwen3.5 dense PTQ' accurately summarizes the main change: updating the default quantizer configuration to exclude specific linear attention modules for Qwen3.5 dense models.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.
  • Security Anti-Patterns: ✅ Passed. The pull request contains only configuration changes to disable quantization for specific attention layers; no security anti-patterns detected.


Signed-off-by: Anurag Mukkara <134339030+amukkara@users.noreply.github.com>
@amukkara amukkara marked this pull request as ready for review April 17, 2026 00:26
@amukkara amukkara requested a review from a team as a code owner April 17, 2026 00:26
@amukkara amukkara requested a review from meenchen April 17, 2026 00:26

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
modelopt/torch/quantization/config.py (1)

231-232: Add a focused regression test for these new exclusion patterns.

Lines 231-232 update the global default exclusions; please add a test that verifies that quantizers matching *linear_attn.in_proj_a* and *linear_attn.in_proj_b* are disabled after the config is applied. This helps lock in the intended Qwen3.5 PTQ behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/quantization/config.py` around lines 231 - 232, Add a unit
test that verifies the new exclusion patterns "*linear_attn.in_proj_a*" and
"*linear_attn.in_proj_b*" actually disable matching quantizers: import the
exclusion patterns from modelopt.torch.quantization.config (e.g.,
DEFAULT_EXCLUSIONS or the global exclusions variable), create mock quantizer
names like "encoder.linear_attn.in_proj_a.weight" and
"decoder.linear_attn.in_proj_b.bias" and then use the module's
exclusion-matching helper (e.g., matches_exclusion, is_excluded, or the function
that decides quantizer enablement) to assert those names are considered
excluded/disabled after applying the config; fail the test if any of those
quantizers remain enabled.
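A regression test along the lines the review suggests could be sketched as follows. The actual matching helper in modelopt.torch.quantization.config is not named in this thread, so the stdlib fnmatch stands in for it here; the test function names and sample quantizer names are assumptions for illustration.

```python
from fnmatch import fnmatch

# The two exclusion patterns added by this PR.
NEW_PATTERNS = ["*linear_attn.in_proj_a*", "*linear_attn.in_proj_b*"]

def matches_any(name: str) -> bool:
    # Stand-in for modelopt's real exclusion-matching logic (assumed).
    return any(fnmatch(name, pattern) for pattern in NEW_PATTERNS)

def test_linear_attn_in_proj_is_excluded():
    for name in (
        "encoder.linear_attn.in_proj_a.weight_quantizer",
        "decoder.linear_attn.in_proj_b.input_quantizer",
    ):
        assert matches_any(name), f"{name} should be excluded from quantization"

def test_unrelated_modules_stay_quantized():
    assert not matches_any("layers.0.self_attn.q_proj.weight_quantizer")

test_linear_attn_in_proj_is_excluded()
test_unrelated_modules_stay_quantized()
```

Run under pytest, the two functions would be collected automatically; the direct calls at the bottom just make the sketch self-checking as a plain script.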

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 3344bc77-0606-4684-a620-e45bc3886169

📥 Commits

Reviewing files that changed from the base of the PR and between 04fcf24 and f532a9b.

📒 Files selected for processing (1)
  • modelopt/torch/quantization/config.py

