Replies: 3 comments 4 replies
-
Made a slight edit to my original comment to eliminate ambiguity and confusion: "H100GB" should be "H100 80GB".
0 replies
-
And what exactly happens when you try to launch this on an average PC config, let's say a 3060 Ti with 16 GB of RAM?
2 replies
-
Thanks for sharing! Is it possible to make a small modification so this runs on 2 nodes with 4 A100 80GB GPUs on each node?
2 replies (see the configuration sketch below)
-
Run grok-1 with less than 420 GB of VRAM
See: llama.cpp / grok-1 support
@ibab_ml on X
What are some of the working setups? (A minimal loading sketch follows after the links below.)
llama.cpp:
- Mac: ggml-org/llama.cpp#6204 (comment)
- AMD: ggml-org/llama.cpp#6204 (comment)
This repo:
- Intel + Nvidia: #168 (comment)
- AMD: https://github.com/xai-org/grok-1/issues/130#issuecomment-2005770022
Other / Container / Cloud:
- https://github.com/xai-org/grok-1/issues/6#issuecomment-2007301554
See also:
- https://github.com/xai-org/grok-1/issues/42
- https://github.com/xai-org/grok-1/issues/130#issuecomment-2004399998
- https://github.com/xai-org/grok-1/issues/172#issuecomment-2005593309