Tensor-parallel SSM #333
base: concatenated_dim
Conversation
@@ -284,10 +284,15 @@ def test_load_pretrained(
@pytest.mark.model_testing_group(ModelTestingGroup.convert)
def test_huggingface_model(model_testing_config, get_convert_path):
In my Vision+Hybrid-ssm PR, I updated the SSM conversion to copy the modeling files to the export directory: https://github.com/ServiceNow/Fast-LLM/pull/332/files#diff-58be369d99e6722a68e734002686ae4afcfd423261e4d3d3b9d6aa552a6f2a14R729-R784
But that PR is far from being merged...
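(For reference, a rough sketch of what "copying the modeling files to the export directory" amounts to; the file patterns and the function name are illustrative and not taken from the linked PR.)

```python
# Illustrative sketch only: copy the custom modeling/configuration files next to
# the exported checkpoint so the model can be loaded with trust_remote_code.
# File patterns and the function name are assumptions, not from the linked PR.
import pathlib
import shutil


def copy_modeling_files(source_dir: pathlib.Path, export_dir: pathlib.Path) -> None:
    for pattern in ("modeling_*.py", "configuration_*.py"):
        for path in source_dir.glob(pattern):
            shutil.copy(path, export_dir / path.name)
```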
I managed to add them here too, but the external models don't seem to be working:
FAILED tests/models/test_checkpoint.py::test_huggingface_model[hybrid_mamba2]@dependency_group_2 - AttributeError: 'DynamicCache' object has no attribute 'has_previous_state'
FAILED tests/models/test_checkpoint.py::test_huggingface_model[hybrid_discrete_mamba2]@dependency_group_3 - AttributeError: 'NoneType' object has no attribute 'ssm_states'
@@ -27,23 +27,7 @@
except (ImportError, RuntimeError):
    _causal_conv1d_available = False


def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
Why not keep this function?
From what I'm seeing there is absolutely no benefit over calling `repeat_interleave` directly. I tried to figure out why it's there, got two hypotheses:
- `expand` is preferable over `repeat` because of the absence of copy. That's pointless because the copy is still done in `reshape` below. And there is a second copy on each usage (`contiguous`), so the function actually makes things slower...
- `repeat_interleave` may involve cuda synchronization because it supports tensor inputs. But that's not supposed to happen, and the explicit `output_size` ensures it.
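(For reference, a minimal sketch of the two variants being compared; the `(batch, n_kv_heads, seq_len, head_dim)` layout is an assumption, not taken from the file.)

```python
import torch


def repeat_kv_expand(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Repeat each KV head n_rep times via expand + reshape (the pattern in question)."""
    batch, n_kv_heads, seq_len, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    # expand is copy-free, but the reshape below still materializes a copy
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, n_kv_heads, n_rep, seq_len, head_dim)
    return hidden_states.reshape(batch, n_kv_heads * n_rep, seq_len, head_dim)


def repeat_kv_interleave(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Same result; the explicit output_size avoids the potential device sync."""
    n_kv_heads = hidden_states.size(1)
    return torch.repeat_interleave(hidden_states, n_rep, dim=1, output_size=n_kv_heads * n_rep)


x = torch.randn(2, 4, 16, 32)
assert torch.equal(repeat_kv_expand(x, 3), repeat_kv_interleave(x, 3))
```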
else:
    head_dim = state
tensor_space.add_tensor_dim(head_groups := TensorDim(SSMDimNames.head_groups, num_head_groups, tensor))
These var names are somewhat confusing. Wouldn't this be clearer?
- `num_head_groups` --> `num_heads_per_group` -- this is the number of heads in each group (e.g. `div(self.d_xb, self.state_size)`, where the head dim. is `self.state_size`)
- TensorDim `head_groups` --> `heads_per_group`
- `group_heads` --> `head_groups` -- this is the number of groups (like in GQA), so the number of groups with `num_head_groups` heads in each group
The names are supposed to be correct, i.e. `head_groups` and `num_head_groups` refer to the different head groups and the number of such groups, and `group_heads` refers to the heads inside a group. I might have inverted them by mistake though, I'll double-check. (They should be right here.)
I'm not a huge fan of the term `group_heads` though, so I'm open to renaming (`heads_per_group`? `heads_in_group`?). What do you think?
I see, makes sense, so the current names are correct. I guess for `group_heads` it would make sense to call it `heads_per_group`.
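(To make the agreed naming concrete, a small sketch with made-up sizes; the `d_inner // d_xb` relation for the heads-per-group count is an assumption based on the GQA analogy above.)

```python
# Illustrative numbers only; the relations follow the discussion above.
d_inner = 4096     # hypothetical inner dimension
d_xb = 1024        # hypothetical x/B projection dimension (shared per group, as in GQA)
state_size = 128   # SSM state size, used as the head dimension here

num_head_groups = d_xb // state_size   # number of head groups                          -> 8
heads_per_group = d_inner // d_xb      # heads inside each group (currently group_heads) -> 4

assert num_head_groups * heads_per_group * state_size == d_inner
```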
bias=config.add_bias_linear,
weight_init_method=init_kaiming_(self._config.d_inner),
sequence_parallel=self._sequence_parallel,
# TODO: lr_scale?
`lr_scale=lr_scale`?
Just making sure it's not intentionally absent. Is it OK to add?
Same question for the normalization layer?