Is it possible to implement distributed CPU inference? #4227
hyperbolic-c announced in Ideas
Just as distributed training and inference can be implemented across multiple nodes with multiple GPUs, is it possible to implement inference across multiple nodes with multiple CPUs?
Distributed CPU training already exists (for example, MindSpore's distributed CPU training). Is there any solution for distributed LLM inference on CPUs? Thanks!
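
As a point of reference, here is a minimal sketch (not from this thread) of one way CPU-only distributed inference can be set up with `torch.distributed` and the gloo backend, which works without GPUs. The model, batch sizes, and the script name `cpu_infer.py` are hypothetical placeholders; a real LLM would replace the toy `nn.Sequential`, and tensor/pipeline parallelism would need a framework on top (e.g. a serving engine with a CPU backend), but the launch and collective pattern is the same.

```python
# Hypothetical sketch: data-parallel inference on CPUs across nodes using
# torch.distributed with the gloo backend (CPU-only collectives).
# Launch with, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=4 \
#            --rdzv_backend=c10d --rdzv_endpoint=<host>:29500 cpu_infer.py
import torch
import torch.distributed as dist
import torch.nn as nn


def main():
    # torchrun supplies RANK / WORLD_SIZE / MASTER_ADDR environment variables;
    # gloo is the CPU-capable collective backend.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Fixed seed so every rank builds identical weights and the same
    # global batch (stand-in for loading shared model weights from disk).
    torch.manual_seed(0)

    # Placeholder model; in practice this would be the LLM (or a shard of it).
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
    model.eval()

    # Each rank processes its own slice of the global batch (data parallelism).
    per_rank = 8
    global_batch = torch.randn(per_rank * world_size, 16)
    local_inputs = global_batch[rank * per_rank:(rank + 1) * per_rank]

    with torch.no_grad():
        local_outputs = model(local_inputs)

    # Gather per-rank results on every rank; rank 0 reports them.
    gathered = [torch.empty_like(local_outputs) for _ in range(world_size)]
    dist.all_gather(gathered, local_outputs)
    if rank == 0:
        print(torch.cat(gathered).shape)  # (per_rank * world_size, 4)

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

This only covers the data-parallel case (each node holds a full copy of the model); splitting a single large model across CPU nodes would additionally require tensor or pipeline parallelism support in the inference engine.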