[V1] Perf optimization for layers reusing shared KV cache #19719
base: main
Conversation
Signed-off-by: Yong Hoon Shin <[email protected]>
My concern is whether this optimization is too model-specific. It works for models where the first k layers have KV cache. Does it work for models where every m layers share the same KV cache, like Hunyuan?
It only works for the case where the first k layers have KV cache, as you said. For general KV sharing cases, it should also apply to the last N layers that reuse the KV cache (i.e. there are no other layers afterwards that have their own KV cache). So I agree it will not apply to a majority of models, but then I'm not sure if there is a better way to implement this kind of functionality.
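As a rough illustration of the constraint being discussed, here is a minimal sketch (not code from this PR; the Layer class and the kv_sharing_target_layer_name attribute are assumptions made only for illustration) of checking whether the KV-sharing layers form a trailing suffix of the model:

```python
from typing import Optional


class Layer:
    def __init__(self, name: str,
                 kv_sharing_target_layer_name: Optional[str] = None):
        self.name = name
        # Name of the earlier layer whose KV cache this layer reuses, if any.
        self.kv_sharing_target_layer_name = kv_sharing_target_layer_name


def skip_prefill_applicable(layers: list[Layer]) -> bool:
    """True if all KV-sharing layers form a contiguous suffix of the model."""
    seen_sharing_layer = False
    for layer in layers:
        if layer.kv_sharing_target_layer_name is not None:
            seen_sharing_layer = True
        elif seen_sharing_layer:
            # A layer with its own KV cache appears after a KV-sharing layer
            # (e.g. Hunyuan-style interleaved sharing), so prefill cannot be
            # skipped for the sharing layers.
            return False
    return True


# First k layers own a KV cache, last N layers reuse it -> applicable.
assert skip_prefill_applicable(
    [Layer("0"), Layer("1"), Layer("2", "0"), Layer("3", "1")])
# Interleaved (Hunyuan-like) sharing -> not applicable.
assert not skip_prefill_applicable(
    [Layer("0"), Layer("1", "0"), Layer("2"), Layer("3", "2")])
```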
I took a quick pass on this PR.
I'm also curious about your plan to support piecewise CUDA graph. We need CUDA graphs for num_total_tokens in the first few layers, and for num_decode_tokens in the following layers.
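A toy illustration of the two batch sizes the reviewer is referring to, using the numbers from the PR description's example of two requests with 4 prompt tokens each (no actual CUDA graph capture is shown):

```python
num_scheduled_tokens = [4, 4]                   # prompt tokens per request
num_total_tokens = sum(num_scheduled_tokens)    # first k layers see 8 query tokens
num_decode_tokens = len(num_scheduled_tokens)   # KV-sharing layers see 2 query tokens
assert (num_total_tokens, num_decode_tokens) == (8, 2)
```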
@@ -128,6 +128,7 @@
     VLLM_TOOL_PARSE_REGEX_TIMEOUT_SECONDS: int = 1
     VLLM_SLEEP_WHEN_IDLE: bool = False
     VLLM_MQ_MAX_CHUNK_BYTES_MB: int = 16
+    VLLM_V1_KV_SHARING_SKIP_PREFILL: bool = False
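For context, a hedged sketch of how a flag like this is typically consumed via the vllm.envs module; the exact call site used by this PR may differ:

```python
import vllm.envs as envs

if envs.VLLM_V1_KV_SHARING_SKIP_PREFILL:
    # Only build the decode-only attention metadata when the flag is set;
    # otherwise the existing code path is left untouched.
    ...
```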
I would prefer to add it as a CLI arg.
@@ -602,6 +620,11 @@ def forward(
            # Profiling run.
            return output

            if (self.kv_sharing_target_layer_name is not None
This branch is not true for Hunyuan-style KV sharing.
This pull request has merge conflicts that must be resolved before it can be merged.
I would really like to keep the build signature of the metadata builders as simple as possible, so that hopefully we can create some nice unit-testing infrastructure in the future. Do we really need to add decode_only_common_attn_metadata to the build call signature? Can we make the KV-sharing layers a different KVSpec and have separate build calls at this level:
vllm/vllm/v1/worker/gpu_model_runner.py
Lines 691 to 709 in 257ab95
for kv_cache_group_id, kv_cache_group_spec in enumerate(
        self.kv_cache_config.kv_cache_groups):
    # Prepare for cascade attention if enabled & beneficial.
    common_prefix_len = 0
    builder = self.attn_metadata_builders[kv_cache_group_id]
    if self.cascade_attn_enabled:
        common_prefix_len = self._compute_cascade_attn_prefix_len(
            num_scheduled_tokens,
            scheduler_output.
            num_common_prefix_blocks[kv_cache_group_id],
            kv_cache_group_spec.kv_cache_spec,
            builder,
        )
    attn_metadata_i = (builder.build(
        common_prefix_len=common_prefix_len,
        common_attn_metadata=common_attn_metadata,
    ))
We should probably be doing this for local attention too, but that was added before we had the hybrid KV cache (which enabled different build calls for different layer groups). We should migrate local attention to a scheme like this as well.
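For illustration, a rough sketch of what that suggestion could look like in the loop quoted above. The is_kv_sharing_decode_only marker and the make_decode_only_metadata() helper are hypothetical names, not code from this PR:

```python
for kv_cache_group_id, kv_cache_group_spec in enumerate(
        self.kv_cache_config.kv_cache_groups):
    builder = self.attn_metadata_builders[kv_cache_group_id]
    if getattr(kv_cache_group_spec, "is_kv_sharing_decode_only", False):
        # Narrow the batch to the last token of each request; query_start_loc,
        # max_query_len, etc. would be recomputed inside the helper.
        group_common_attn_metadata = make_decode_only_metadata(
            common_attn_metadata)
    else:
        group_common_attn_metadata = common_attn_metadata
    attn_metadata_i = builder.build(
        common_prefix_len=common_prefix_len,
        common_attn_metadata=group_common_attn_metadata,
    )
```

This would keep the build signature unchanged and confine the decode-only handling to the model runner, which is the abstraction the reviewer is asking about.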
    self,
    common_prefix_len: int,
    common_attn_metadata: CommonAttentionMetadata,
    decode_only_common_attn_metadata: Optional[
Is there a reason we need to pass decode_only_common_attn_metadata as a separate arg? Is there a reason we can't just use a different build call at the GPU model runner level? i.e. here-ish:
vllm/vllm/v1/worker/gpu_model_runner.py
Lines 691 to 709 in 257ab95
for kv_cache_group_id, kv_cache_group_spec in enumerate(
        self.kv_cache_config.kv_cache_groups):
    # Prepare for cascade attention if enabled & beneficial.
    common_prefix_len = 0
    builder = self.attn_metadata_builders[kv_cache_group_id]
    if self.cascade_attn_enabled:
        common_prefix_len = self._compute_cascade_attn_prefix_len(
            num_scheduled_tokens,
            scheduler_output.
            num_common_prefix_blocks[kv_cache_group_id],
            kv_cache_group_spec.kv_cache_spec,
            builder,
        )
    attn_metadata_i = (builder.build(
        common_prefix_len=common_prefix_len,
        common_attn_metadata=common_attn_metadata,
    ))
Yeah, I initially had a separate build() call at the model runner level, but I needed to set this as a property of the attention metadata for all the different backends, and they don't share a common schema. So I thought I could pass the info and let each backend decide what to do with it.
But I do agree that your approach is a better abstraction; will follow up on that.
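A minimal, self-contained sketch of the approach described in this reply; all class and field names below are stand-ins rather than vLLM's actual schema. Each backend's build() optionally receives the decode-only common metadata and decides how (or whether) to fold it into its own metadata class:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CommonMetadata:  # stand-in for vLLM's CommonAttentionMetadata
    query_start_loc: list[int]
    max_query_len: int


@dataclass
class BackendMetadata:  # stand-in for a backend-specific metadata class
    query_start_loc: list[int]
    max_query_len: int
    # Populated only when the KV-sharing skip-prefill path is active.
    decode_only: Optional["BackendMetadata"] = None


def build(common: CommonMetadata,
          decode_only_common: Optional[CommonMetadata] = None) -> BackendMetadata:
    metadata = BackendMetadata(common.query_start_loc, common.max_query_len)
    if decode_only_common is not None:
        # This backend chooses to carry a decode-only view alongside the
        # full-batch view; another backend could ignore the argument entirely.
        metadata.decode_only = BackendMetadata(
            decode_only_common.query_start_loc,
            decode_only_common.max_query_len)
    return metadata


full = CommonMetadata(query_start_loc=[0, 4, 8], max_query_len=4)
decode = CommonMetadata(query_start_loc=[0, 1, 2], max_query_len=1)
print(build(full, decode))
```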
Purpose
When KV cache sharing is used, there is a performance optimization we can do for the last N layers that perform attention using K and V from an earlier layer's shared KV cache, whereby we skip prefill for those N layers. As an example, if we have request 0 and request 1 with 4 prompt tokens each, then we might have tokens batched as such:
[0, 1, 2, 3, 4, 5, 6, 7]  (tokens 0-3 from request 0, tokens 4-7 from request 1)
Then we just need to do forward on those N layers with the last token for each request, [3, 7], as these are the only positions where valid logits are required to sample output tokens from. To do this, changes are required to both the model code and the attention metadata (query_start_loc, max_query_len, etc. will be different); see the sketch after the Test section below.

Known limitations:
Test
Unit tests show that output is roughly equivalent with and without this optimization (exact numerics will differ, as the batched mm op yields slightly different results depending on batch size).
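For concreteness, a small sketch (not the PR's implementation) of how the decode-only metadata in the Purpose example above can be derived: with two requests of 4 prompt tokens each, query_start_loc is [0, 4, 8], the last token of each request sits at flattened indices [3, 7], and the decode-only view has query_start_loc [0, 1, 2] with max_query_len 1.

```python
def decode_only_metadata(
        query_start_loc: list[int]) -> tuple[list[int], list[int], int]:
    """Return (last_token_indices, decode_query_start_loc, decode_max_query_len)."""
    num_reqs = len(query_start_loc) - 1
    # Index of the final prompt token of each request in the flattened batch.
    last_token_indices = [query_start_loc[i + 1] - 1 for i in range(num_reqs)]
    # Each request contributes exactly one query token in the KV-sharing layers.
    decode_query_start_loc = list(range(num_reqs + 1))
    decode_max_query_len = 1
    return last_token_indices, decode_query_start_loc, decode_max_query_len


assert decode_only_metadata([0, 4, 8]) == ([3, 7], [0, 1, 2], 1)
```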