
GPU support #356

@mhrmsn

Description


I recently looked into this, and after a discussion with @AdrianSosic and @Scienfitz, here are my observations:

I think there are two convenient ways to support GPUs: either by letting the user call torch.set_default_device("cuda"), or by adding a configuration variable within the package, something like baybe.options.device = "cuda". I have only tested the first one. The latter is a bit more complex, but it would potentially allow different parts of the overall BayBE workflow to use different devices (this may be useful if you are, e.g., generating embeddings from a torch-based neural network for use in BayBE).
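For reference, a minimal sketch of the first approach (assuming PyTorch >= 2.0 and a CUDA-capable machine; the baybe.options.device setting in the comment is purely hypothetical and does not exist yet):

```python
import torch

# Approach 1: set torch's global default device before running the BayBE workflow.
# Every tensor created afterwards without an explicit device ends up on the GPU,
# provided the library code does not pin devices explicitly.
if torch.cuda.is_available():
    torch.set_default_device("cuda")

# Approach 2 (hypothetical, not implemented): a package-level option such as
#   baybe.options.device = "cuda"
# which could allow different parts of the workflow to target different devices.
```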

When experimenting with torch.set_default_device("cuda"), I noticed that the tensor devices are not set consistently in BayBE. For either solution, I think these points would need to be addressed:
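To illustrate the kind of inconsistency this causes, here is a minimal sketch (plain torch, not BayBE code) of how a tensor that implicitly lands on the CPU clashes with the global default device:

```python
import numpy as np
import torch

torch.set_default_device("cuda")

a = torch.rand(3)                 # created on the default device ("cuda")
b = torch.from_numpy(np.ones(3))  # tensors built from NumPy arrays always live on the CPU
a + b                             # raises a RuntimeError about mixing cuda and cpu tensors
```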
