[Kernel] Masked act_mul and fp8-quant Kernels for Batched MoE #19721
Marking this as a draft -- these kernels are not a priority at the moment, given that a masked fused act-mul-quant kernel already exists in https://github.com/vllm-project/vllm/tree/ll_deepgemm_opt. We can revive this when needed.
Purpose
Optimization to reduce unnecessary compute.
For the batched MoEs we allocate tensors of shape `[num_experts, max_tokens_per_expert, hidden_size]`. On `main` we process all `num_experts x max_tokens_per_expert x hidden_size` elements, but not all `max_tokens_per_expert` rows are valid, so some of this work can be skipped. To that end, this PR adds batched (masked) versions of the `silu_mul` and per-token fp8 quant kernels, sketched below.
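A minimal PyTorch reference sketch of the masked semantics, assuming a per-expert valid-token count; the names `masked_silu_mul_ref`, `masked_per_token_fp8_quant_ref`, and the `expert_num_tokens` argument are illustrative, not the actual kernel signatures in this PR:

```python
import torch


def masked_silu_mul_ref(x: torch.Tensor, expert_num_tokens: torch.Tensor) -> torch.Tensor:
    """x: [num_experts, max_tokens_per_expert, 2 * hidden_size] -> [E, T, hidden_size]."""
    E, T, two_h = x.shape
    h = two_h // 2
    out = torch.zeros((E, T, h), dtype=x.dtype, device=x.device)
    for e in range(E):
        n = int(expert_num_tokens[e])  # rows beyond n are padding and are skipped
        gate, up = x[e, :n, :h], x[e, :n, h:]
        out[e, :n] = torch.nn.functional.silu(gate) * up
    return out


def masked_per_token_fp8_quant_ref(x: torch.Tensor, expert_num_tokens: torch.Tensor):
    """x: [num_experts, max_tokens_per_expert, hidden_size] -> (fp8 values, per-token scales)."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    E, T, H = x.shape
    x_q = torch.zeros((E, T, H), dtype=torch.float8_e4m3fn, device=x.device)
    scales = torch.ones((E, T, 1), dtype=torch.float32, device=x.device)
    for e in range(E):
        n = int(expert_num_tokens[e])  # only quantize the valid rows for this expert
        amax = x[e, :n].abs().amax(dim=-1, keepdim=True).clamp(min=1e-12).float()
        scales[e, :n] = amax / finfo.max
        x_q[e, :n] = (x[e, :n].float() / scales[e, :n]).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_q, scales
```

The proposed kernels implement this behavior in fused GPU code; the point of the masking is that work for the padded rows (`expert_num_tokens[e] <= row < max_tokens_per_expert`) is skipped entirely rather than computed and discarded.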
Test Plan
- Unit tests
- Local E2E testing

Commands:
- DeepSeek V2 Lite:
- Qwen FP8:
Test Result
- DeepSeek V2 Lite
- Qwen FP8
Note:
The lm_eval results become a bit finicky when I use large `num_concurrent` values. This also happens on `main`, so I have set it to 1 here to produce reasonably consistent output.

(Optional) Documentation Update