Description
Having recently looked into this, and after discussion with @AdrianSosic and @Scienfitz, here are my observations:
I think there are two convenient ways to support GPUs: either allow the user to call `torch.set_default_device("cuda")`, or add a configuration variable within the package, something like `baybe.options.device = "cuda"`. I have only tested the first one. I think the latter is a bit more complex, but it would potentially allow different parts of the overall BayBE workflow to use different devices (this may be useful if you are, e.g., generating embeddings from a torch-based neural network for use in BayBE). A minimal sketch of the first approach follows below.
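For illustration, here is a minimal sketch of the global-default approach; the `baybe.options.device` attribute in the comment is hypothetical and not an existing BayBE API:

```python
import torch

# Route all tensor factory calls that are made without an explicit
# `device` argument to the GPU (requires a CUDA-enabled torch build).
if torch.cuda.is_available():
    torch.set_default_device("cuda")

x = torch.ones(3)
print(x.device)  # cuda:0 if a GPU is available, otherwise cpu

# Hypothetical package-level alternative (not an existing BayBE API):
# import baybe
# baybe.options.device = "cuda"
# BayBE-internal code would then pass `device=baybe.options.device`
# explicitly, which would allow different parts of the workflow to
# use different devices.
```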
When experimenting with `torch.set_default_device("cuda")`, I noticed that tensor devices are not set consistently in BayBE. For either solution, I think the following points would need to be addressed:
- `torch.from_numpy()` calls that are used to construct botorch inputs share memory with their input, so they ignore the default device; the same goes for `torch.frombuffer()` (not used in BayBE). See also: https://pytorch.org/docs/stable/generated/torch.set_default_device.html
- Recommendations are often converted using a construct like `pd.DataFrame(points, ...)`, where `points` is a tensor from botorch. This will fail if the tensor is not on the CPU (both pitfalls are sketched below).
- Botorch's handling of constraints might not be consistent with the torch device. I opened an issue to ask about this: Consistent use of torch device in HitAndRunPolytopeSampler meta-pytorch/botorch#2500
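To make the first two points concrete, here is a small sketch of both pitfalls; it assumes a CUDA-enabled torch build and uses made-up data rather than actual BayBE/botorch objects:

```python
import numpy as np
import pandas as pd
import torch

torch.set_default_device("cuda")  # assumes a CUDA-enabled torch build

# Pitfall 1: torch.from_numpy() shares memory with the numpy array, so it
# always yields a CPU tensor, regardless of the default device.
arr = np.zeros(3)
t = torch.from_numpy(arr)
print(t.device)  # cpu, not cuda:0

# A copying constructor respects the default device instead:
t_gpu = torch.tensor(arr)
print(t_gpu.device)  # cuda:0

# Pitfall 2: pandas cannot consume CUDA tensors directly.
points = torch.rand(4, 2)  # placed on cuda:0 via the default device
# pd.DataFrame(points) raises a TypeError, because numpy conversion of a
# CUDA tensor is not supported. An explicit CPU round-trip works:
df = pd.DataFrame(points.cpu().numpy())
```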