Add FP8 quantization example for Granite4 #1814
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes
Hello @chichun-charlie-liu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces an example demonstrating FP8 quantization for Granite4 models, specifically addressing the challenge of quantizing the "Linear-like" `GraniteMoeHybridParallelExperts` layers within the Mixture-of-Experts (MoE) block. It provides a novel approach by temporarily converting these 3D-weighted expert layers into a 2D `nn.Linear`-compatible format for `llm-compressor` quantization, then reverting them to their original 3D structure for model saving. This enables efficient FP8 quantization of the input and output linear layers while keeping the router layer in high precision, and includes performance validation with `vllm`.
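To make the approach concrete, here is a minimal, hypothetical sketch of how a 3D expert weight can be presented to quantization tooling as a 2D `nn.Linear` and restored afterwards; the class and method names below are simplified assumptions and will differ from the actual `GraniteMoeHybridParallelExpertsLinear` implementation in this PR.

```python
import torch
import torch.nn as nn


class ParallelExpertsAsLinear(nn.Linear):
    """Illustrative stand-in for the PR's wrapper class: it exposes a
    (num_experts, output_size, input_size) expert weight as a single 2D
    (num_experts * output_size, input_size) Linear weight, so quantization
    tooling that only recognizes nn.Linear can process it."""

    def __init__(self, num_experts: int, input_size: int, output_size: int):
        # All experts are stacked along the output dimension -> 2D weight.
        super().__init__(input_size, num_experts * output_size, bias=False)
        self.num_experts = num_experts
        self.input_size = input_size
        self.output_size = output_size

    @classmethod
    def from_3d_experts(cls, experts_weight: torch.Tensor) -> "ParallelExpertsAsLinear":
        # experts_weight is assumed to be (num_experts, output_size, input_size).
        num_experts, output_size, input_size = experts_weight.shape
        module = cls(num_experts, input_size, output_size)
        module.weight.data = experts_weight.detach().reshape(
            num_experts * output_size, input_size
        )
        return module

    def to_3d_weight(self) -> torch.Tensor:
        # Restore the original 3D layout before saving the checkpoint.
        return self.weight.data.reshape(
            self.num_experts, self.output_size, self.input_size
        )
```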
Highlights
- FP8 Quantization Example for Granite4: Adds a new example demonstrating how to apply FP8 quantization to Granite4 models, specifically targeting the Mixture-of-Experts (MoE) block's "Linear-like" layers.
- Custom Linear Layer for MoE Experts: Introduces `GraniteMoeHybridParallelExpertsLinear`, a custom `torch.nn.Linear` subclass, to facilitate quantization of `GraniteMoeHybridParallelExperts` modules, which originally store weights in a 3D format incompatible with `llm-compressor`.
- 3D to 2D Weight Reshaping Workaround: Implements a temporary reshaping mechanism where 3D expert weights are converted to 2D for quantization by `llm-compressor`, then reverted back to their original 3D structure before the quantized model is saved.
- Selective Quantization of MoE Layers: The example demonstrates how to quantize the `input_linear` and `output_linear` layers within the MoE block while intentionally skipping the `router` layer to maintain its precision (see the recipe sketch after this list).
- Performance Validation and Compatibility Notes: Includes test results showing the performance of the FP8-quantized model on the `gsm8k` task using `vllm` (requires version >= 0.10.1.1) and notes that the FP8 checkpoint cannot currently be loaded with `hf`.
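As a rough illustration of the selective quantization described above, an FP8 recipe along the following lines can skip the router while quantizing the expert projections. The model ID, ignore patterns, and save path are placeholders and assumptions, not the literal contents of the example script.

```python
from transformers import AutoModelForCausalLM

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "ibm-granite/granite-4.0-tiny-preview"  # placeholder model id

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
# ... swap GraniteMoeHybridParallelExperts modules for the 2D Linear wrapper here ...

# FP8 dynamic quantization of all Linear layers, keeping the MoE router
# (and lm_head) in high precision; the ignore patterns are illustrative.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*router"],
)

oneshot(model=model, recipe=recipe)

# ... revert the expert weights to their original 3D layout here ...
model.save_pretrained("granite4-fp8-dynamic", save_compressed=True)
```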
Code Review
This pull request introduces an example for FP8 quantization of Granite-4 models, which is a valuable addition. The approach of handling the non-standard, `nn.Linear`-like MoE layers by creating a temporary wrapper class, `GraniteMoeHybridParallelExpertsLinear`, is a clever and effective solution. The example script is well-documented and clearly demonstrates the process. The new modeling utility is also well-implemented. I have one suggestion to improve the robustness of the new module by avoiding a hardcoded data type, making it more generally applicable.
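For context on the data-type point, one way to avoid a hardcoded dtype is to inherit it from the source expert weight; the helper below is a hypothetical illustration rather than the code in this PR.

```python
import torch
import torch.nn as nn


def make_2d_linear_like(experts_weight: torch.Tensor) -> nn.Linear:
    """Build a 2D Linear whose dtype and device follow the original expert
    weight instead of hardcoding a specific dtype such as torch.bfloat16."""
    num_experts, output_size, input_size = experts_weight.shape
    linear = nn.Linear(
        input_size,
        num_experts * output_size,
        bias=False,
        dtype=experts_weight.dtype,    # inherit dtype from the source module
        device=experts_weight.device,  # keep the weight on its original device
    )
    with torch.no_grad():
        linear.weight.copy_(
            experts_weight.reshape(num_experts * output_size, input_size)
        )
    return linear
```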
Thank you for your contribution!
Some initial comments.
Thanks for this really nice contribution! Cool to see that the 3D linear-like weights can be made compatible within llm-compressor and back for inference, with a little massaging. One suggestion for the README, otherwise this looks good to me
One comment; otherwise, LGTM.
Thank you!
SUMMARY:
Create an example for Granite4 FP8 quantization, mainly to handle the two "Linear-like" layers in the MoE block, which llm-compressor had trouble identifying and quantizing.
TEST PLAN:
- See `granite4_example.py`'s docstring.
- `make quality`
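As a supplement to the test plan, a quick smoke test of the saved FP8 checkpoint with vLLM (the highlights note a >= 0.10.1.1 requirement) might look roughly like the following; the checkpoint path and prompt are placeholders.

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint produced by the example; path is a placeholder.
llm = LLM(model="granite4-fp8-dynamic")

outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(temperature=0.0, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```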