Conversation

@chichun-charlie-liu (Contributor) commented on Sep 12, 2025

SUMMARY:
Create an example for Granite4 FP8 quantization, mainly to handle the two "Linear-like" layers in the MoE block, which llm-compressor had trouble identifying and quantizing.
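
For reference, a minimal sketch of what the FP8 recipe side of this looks like with llm-compressor, assuming a hypothetical model ID, output path, and router-name pattern (the actual example additionally swaps the MoE expert modules into a Linear-compatible form first so that they match the "Linear" targets; see granite4_example.py):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "ibm-granite/granite-4.0-tiny-preview"  # hypothetical model ID

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic quantization of all Linear layers; the MoE router (name pattern
# assumed here) and lm_head stay in high precision.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*router"],
)

oneshot(model=model, recipe=recipe)

SAVE_DIR = "granite4-FP8-dynamic"  # hypothetical output path
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```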

TEST PLAN:

  1. The example runs with no errors.
  2. The resulting checkpoint was also tested with vllm; see granite4_example.py's docstring for details (a minimal load check is sketched after this list).
  3. New code was formatted as suggested by `make quality`.
  4. Only 3 new files were added; no other files were changed or impacted.
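
As a rough illustration of the vllm check in item 2 (the checkpoint path and prompt are placeholders, not the exact commands from the docstring; per the PR, vllm >= 0.10.1.1 is required to load the FP8 checkpoint):

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint produced by the example; the path is hypothetical.
llm = LLM(model="granite4-FP8-dynamic")

params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["Question: What is 12 * 7? Answer:"], params)
print(outputs[0].outputs[0].text)
```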


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist bot left a comment

Summary of Changes

Hello @chichun-charlie-liu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces an example demonstrating FP8 quantization for Granite4 models, specifically addressing the challenge of quantizing the "Linear-like" GraniteMoeHybridParallelExperts layers within the Mixture-of-Experts (MoE) block. It provides a novel approach by temporarily converting these 3D-weighted expert layers into a 2D nn.Linear compatible format for llm-compressor quantization, then reverting them to their original 3D structure for model saving. This enables efficient FP8 quantization of the input and output linear layers while maintaining the router layer in high precision, and includes performance validation with vllm.
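
A rough sketch of that temporary conversion, assuming the experts store their weight as (num_experts, out_features, in_features) and matching modules by class name; the PR's actual implementation wraps this in a dedicated GraniteMoeHybridParallelExpertsLinear class rather than using this standalone helper:

```python
import torch

def swap_experts_for_quantization(model: torch.nn.Module) -> dict[str, torch.Size]:
    """Replace each 3D expert module with a 2D nn.Linear view of its weight so
    that llm-compressor can match and quantize it like an ordinary Linear."""
    original_shapes = {}
    for name, module in list(model.named_modules()):
        if module.__class__.__name__ == "GraniteMoeHybridParallelExperts":
            num_experts, out_features, in_features = module.weight.shape
            linear = torch.nn.Linear(
                in_features, num_experts * out_features,
                bias=False, dtype=module.weight.dtype,
            )
            linear.weight.data = module.weight.data.reshape(-1, in_features)
            # Swap the module on its parent; record the shape for the later revert.
            parent_name, _, child_name = name.rpartition(".")
            parent = model.get_submodule(parent_name) if parent_name else model
            setattr(parent, child_name, linear)
            original_shapes[name] = module.weight.shape
    return original_shapes
```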

Highlights

  • FP8 Quantization Example for Granite4: Adds a new example demonstrating how to apply FP8 quantization to Granite4 models, specifically targeting the Mixture-of-Experts (MoE) block's 'Linear-like' layers.
  • Custom Linear Layer for MoE Experts: Introduces GraniteMoeHybridParallelExpertsLinear, a custom torch.nn.Linear subclass, to enable quantization of GraniteMoeHybridParallelExperts modules, which originally store weights in a 3D format that llm-compressor cannot handle directly (a minimal sketch of this idea follows this list).
  • 3D to 2D Weight Reshaping Workaround: Implements a temporary reshaping mechanism where 3D expert weights are converted to 2D for quantization by llm-compressor, and then reverted back to their original 3D structure before saving the quantized model.
  • Selective Quantization of MoE Layers: The example demonstrates how to quantize the input_linear and output_linear layers within the MoE block while intentionally skipping the router layer to maintain its precision.
  • Performance Validation and Compatibility Notes: Includes gsm8k results for the FP8-quantized model under vllm (version >= 0.10.1.1 required) and notes that the FP8 checkpoint cannot currently be loaded with hf.
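
A minimal sketch of the subclass idea mentioned above; the class name used here, the constructor signature, the attribute names, and the default dtype are assumptions rather than the example's exact code:

```python
import torch

class ParallelExpertsLinear(torch.nn.Linear):
    """Hypothetical stand-in for GraniteMoeHybridParallelExpertsLinear: holds the
    experts' 3D weight as a flattened 2D nn.Linear weight during quantization."""

    def __init__(self, num_experts: int, input_size: int, output_size: int,
                 dtype: torch.dtype = torch.bfloat16):
        super().__init__(input_size, num_experts * output_size, bias=False, dtype=dtype)
        self.num_experts = num_experts
        self.input_size = input_size
        self.output_size = output_size

    @classmethod
    def from_3d_expert(cls, experts: torch.nn.Module) -> "ParallelExpertsLinear":
        """Copy a (num_experts, output_size, input_size) expert weight into 2D form."""
        num_experts, output_size, input_size = experts.weight.shape
        mod = cls(num_experts, input_size, output_size, dtype=experts.weight.dtype)
        mod.weight.data = experts.weight.data.reshape(-1, input_size).clone()
        return mod

    def to_3d_weight(self) -> torch.Tensor:
        """Reshape back to the original 3D layout before saving the checkpoint."""
        return self.weight.data.reshape(self.num_experts, self.output_size, self.input_size)
```
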
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces an example for FP8 quantization of Granite-4 models, which is a valuable addition. The approach of handling the MoE layers that are not standard nn.Linear modules by creating a temporary wrapper class, GraniteMoeHybridParallelExpertsLinear, is a clever and effective solution. The example script is well-documented and clearly demonstrates the process. The new modeling utility is also well-implemented. I have one suggestion to improve the robustness of the new module by avoiding a hardcoded data type, making it more generally applicable.
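
The inline suggestion itself is not visible in this view; as a rough sketch of the kind of change being described (all names here are assumptions), the 2D view could inherit its dtype from the wrapped module instead of hardcoding one:

```python
import torch

def make_linear_like(experts: torch.nn.Module) -> torch.nn.Linear:
    """Build the 2D Linear view using the source module's own dtype
    rather than assuming a fixed type such as torch.bfloat16."""
    num_experts, out_features, in_features = experts.weight.shape
    return torch.nn.Linear(
        in_features,
        num_experts * out_features,
        bias=False,
        dtype=experts.weight.dtype,  # inherit dtype from the wrapped module
    )
```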

@dsikka (Collaborator) left a comment

Thank you for your contribution!
Some initial comments.

@brian-dellabetta (Collaborator) left a comment

Thanks for this really nice contribution! Cool to see that the 3D Linear-like weights can be made compatible with llm-compressor and converted back for inference, with a little massaging. One suggestion for the README; otherwise this looks good to me.

@dsikka added the ready label (When a PR is ready for review) on Sep 15, 2025
@dsikka (Collaborator) left a comment

One comment; otherwise, LGTM.

Thank you!

@dsikka merged commit 84575f2 into vllm-project:main on Sep 16, 2025 (7 of 8 checks passed).