[QEff Finetune]: Adding steps about how to fine tune on any custom dataset. #381
Conversation
Signed-off-by: Swati Allabadi <[email protected]>
Force-pushed from 8485344 to e757ab8
Signed-off-by: Swati Allabadi <[email protected]>
Good work in listing down the detailed steps for custom dataset, Swati! Please check on the comments. :)
docs/source/finetune.md
Outdated
To run fine tuning for any user specific dataset, prepare the dataset using the following steps:
1) Create a directory named 'dataset' inside efficient-transformers.
double space between "a" and "directory"
Removed.
def tokenize():
Add a comment such as "Implement tokenization and prepare inputs for the training."
Added.
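For reference, a minimal sketch of what such a tokenize() helper could look like, assuming a Hugging Face tokenizer and a 'text' column; the names and fields are illustrative, not the repo's actual code:

```python
def tokenize(sample, tokenizer, context_length):
    # Implement tokenization and prepare inputs for the training.
    inputs = tokenizer(
        sample["text"],
        max_length=context_length,
        truncation=True,
        padding="max_length",
    )
    # For causal-LM fine tuning, labels typically mirror input_ids.
    inputs["labels"] = inputs["input_ids"].copy()
    return inputs
```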
# load dataset
# based on split, retrieve only the specific portion of the dataset (train or eval) either here or at the last
Add one more comment, such as "Define a prompt template".
It's already there (# define prompt).
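A small, hedged sketch of the split handling those comments describe (the dataset name is only an example): either select the split while loading, or load everything and pick the portion at the end.

```python
from datasets import load_dataset

split = "train"  # passed into get_custom_dataset() by the fine-tuning script
# select the train/eval portion while loading ...
dataset = load_dataset("samsum", split=split)
# ... or load the full dataset and retrieve the split at the last step:
# dataset = load_dataset("samsum")[split]
```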
def apply_prompt_template():
Add a comment such as "Convert the raw input into the format as per the template defined earlier."
Added.
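A minimal sketch of such an apply_prompt_template(); the template and column names are assumptions for illustration only:

```python
def apply_prompt_template(sample):
    # Convert the raw input into the format defined by the prompt template.
    prompt = f"Summarize this dialog:\n{sample['dialogue']}\n---\nSummary:\n"
    return {"prompt": prompt, "summary": sample["summary"]}
```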
docs/source/finetune.md
Outdated
5) Inside get_custom_dataset(), dataset needs to prepared for fine tuning. So, the user needs to apply prompt and tokenize the dataset accordingly. Please refer the below template on how to define get_custom_dataset().
6) For examples, please refer python files present in efficient-transformers/QEfficient/finetune/dataset. In case of Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for custom_dataset class, pass the appropriate value for train_split and test_split according to the dataset keys corresponding to train and test data points.
I think this is no longer needed after PR#289. We can directly pass --train_split and --test_split from the CLI.
Yes, correct! Thanks for pointing this out. After PR 289, the user can do this either way. Updated step #7 accordingly.
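For illustration, a hedged sketch of what the custom_dataset entry in dataset_config.py is described to contain; the exact field names and defaults in the repo may differ, and after PR 289 the same values can instead be overridden from the CLI via --train_split / --test_split:

```python
from dataclasses import dataclass

@dataclass
class custom_dataset:
    dataset: str = "custom_dataset"
    file: str = "dataset/custom_dataset.py"
    train_split: str = "train"  # dataset key holding the training data points
    test_split: str = "test"    # dataset key holding the evaluation data points
```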
Signed-off-by: Swati Allabadi <[email protected]>
Force-pushed from f46f537 to 05a385a
@@ -28,7 +28,7 @@ class train_config:
    use_fp16: bool = True
    use_autocast: bool = True
    val_batch_size: int = 1
-   dataset = "samsum_dataset"
+   dataset = "alpaca_dataset"
Good that you have added this change in this gerrit.
Looks good to me.
docs/source/finetune.md
Outdated
To run fine tuning for any user specific dataset, prepare the dataset using the following steps:
1) Create a directory named 'dataset' inside efficient-transformers.
Add the location: "at the root of the repo."
Done.
docs/source/finetune.md
Outdated
2) Inside this directory, create a file named 'custom_dataset.py'. This is different than the custom_dataset.py present at efficient-transformers/QEfficient/finetune/dataset.
3) Inside the newly created efficient-transformers/dataset/custom_dataset.py, define a function named 'get_custom_dataset'.
4) get_custom_dataset() should have following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through Qefficient/cloud/finetune.py with the name get_preprocessed_dataset.
QEfficient not Qefficient
Done.
docs/source/finetune.md
Outdated
4) get_custom_dataset() should have following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through Qefficient/cloud/finetune.py with the name get_preprocessed_dataset.
5) Inside get_custom_dataset(), dataset needs to prepared for fine tuning. So, the user needs to apply prompt and tokenize the dataset accordingly. Please refer the below template on how to define get_custom_dataset().
6) For examples, please refer python files present in efficient-transformers/QEfficient/finetune/dataset. In case of Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
Since the default dataset has changed, we should mention alpaca here.
The steps I have mentioned match the format of samsum_dataset.py; they don't match alpaca_dataset.py. Hence, I didn't change it.
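To make the referenced template concrete, here is a hedged, samsum-style sketch of a complete get_custom_dataset() tying together the load/prompt/tokenize pieces discussed above; the dataset name, column names, and label-masking choice are assumptions, not the repo's actual code:

```python
from datasets import load_dataset

def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
    # load dataset; based on split, retrieve only the train or eval portion
    dataset = load_dataset("samsum", split=split)

    # define prompt
    prompt = "Summarize this dialog:\n{dialogue}\n---\nSummary:\n"

    def apply_prompt_template(sample):
        # Convert the raw input into the format defined by the prompt template.
        return {
            "prompt": prompt.format(dialogue=sample["dialogue"]),
            "summary": sample["summary"],
        }

    dataset = dataset.map(apply_prompt_template, remove_columns=list(dataset.features))

    def tokenize(sample):
        # Implement tokenization and prepare inputs for the training.
        prompt_ids = tokenizer.encode(tokenizer.bos_token + sample["prompt"], add_special_tokens=False)
        summary_ids = tokenizer.encode(sample["summary"] + tokenizer.eos_token, add_special_tokens=False)
        return {
            "input_ids": prompt_ids + summary_ids,
            "attention_mask": [1] * (len(prompt_ids) + len(summary_ids)),
            # Compute the loss only on the summary tokens.
            "labels": [-100] * len(prompt_ids) + summary_ids,
        }

    return dataset.map(tokenize, remove_columns=list(dataset.features))
```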
Too verbose. Make these simple, pointed steps.
Done. Added the detailed points in Confluence and made them crisp in the PR.
Signed-off-by: Swati Allabadi <[email protected]>
docs/source/finetune.md
Outdated
5) Inside get_custom_dataset(), dataset needs to prepared for fine tuning. So, the user needs to apply prompt and tokenize the dataset accordingly. Please refer the below template on how to define get_custom_dataset().
6) For examples, please refer python files present in efficient-transformers/QEfficient/finetune/dataset. In case of Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for custom_dataset class, pass the appropriate value for train_split and test_split according to the dataset keys corresponding to train and test data points. As an alternative, these values can be passed as command line arguemnets as well with the finetune command. For example "--train_split train".
Add hyperlinks to the relative paths annotated in the steps below
Done.
## Fine-Tuning on custom dataset
You should include details on how we use gradient accumulation, how the dataset is shuffled, and how activation checkpointing is enabled, in separate sections.
In the custom dataset section, add a point that if any user wants to use these, they can refer to the xyz section.
Added the section explaining how to use gradient accumulation and gradient checkpointing.
For single SoC, we are not doing any shuffling of data. By default, it is False.
For DDP with sorting, shuffling is set to False.
For DDP without sorting, shuffling was set to True. Made it False to sync it with the single-SoC run and to be able to use the 'resume fine-tuning from between' feature. This was not caught earlier because, by default, we run DDP with sorting only.
Hence, I didn't add any information about shuffling in the doc.
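For readers of this thread, a hedged sketch of the two techniques mentioned (not the repo's actual training loop): gradient accumulation over several micro-batches, plus activation (gradient) checkpointing enabled through the standard Hugging Face model helper. model, optimizer, and train_loader are assumed to be set up elsewhere.

```python
def train_with_grad_accumulation(model, optimizer, train_loader, accum_steps=4):
    # Activation checkpointing: trade extra compute for lower activation memory.
    model.gradient_checkpointing_enable()
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(train_loader):
        # Scale the loss so the accumulated gradient matches one large-batch step.
        loss = model(**batch).loss / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```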
Force-pushed from 43aab17 to 155bb77
Signed-off-by: Swati Allabadi <[email protected]>
Force-pushed from ea0f63a to 1be1053
Signed-off-by: Swati Allabadi <[email protected]>
Force-pushed from 1be1053 to 25a2ac5