0.43.3: enabling LLama 405b with 8xH/A100 + 260MB RAM #1298
Titus-von-Koeller announced in Announcements
Improvements:
- `Params4bit.__new__` was improved post PR #970 (Initial FSDP Support for QLoRA Finetuning). It now supports models exported with a non-default `quant_storage`, such as this NF4 model with BF16 storage.

This discussion was created from the release 0.43.3: enabling LLama 405b with 8xH/A100 + 260MB RAM.
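As a minimal sketch of how a non-default `quant_storage` is typically selected when loading such a model with 🤗 Transformers, the `bnb_4bit_quant_storage` option of `BitsAndBytesConfig` can be set to a dtype other than the default `uint8`. The model id below is a placeholder, not the specific checkpoint referenced above:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization with 4-bit weights packed into bfloat16 storage
# containers instead of the default uint8 (useful e.g. for FSDP sharding).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_quant_storage=torch.bfloat16,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder model id; substitute the NF4/BF16-storage checkpoint you use.
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-nf4-bf16-model",
    quantization_config=bnb_config,
)
```

When a checkpoint was exported with BF16 storage, loading it without a matching `quant_storage` setting is what the improved `Params4bit.__new__` path handles.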