New acquisition functions #203
AdrianSosic left a comment:
Hi @Scienfitz, thanks for this tricky PR. Here is already the first chunk of comments so that you have something to work on while I review the rest.
     raise NoMCAcquisitionFunctionError(
         f"The '{self.__class__.__name__}' only works with Monte Carlo "
-        f"acquisition functions."
+        f"acquisition functions for batch sizes > 1."
I think it actually makes sense to remove the try-catch and instead raise this error in a guard clause at the top under the appropriate condition (i.e. batch_size > 1 and non-MC function). While developing, I've already run into weird situations several times where a different AttributeError was thrown, and this got me completely off track. And I don't really see a reason why we should wait for the optimization to fail... Right?
Yes, this attribute error catch here can have several reasons, and I've also encountered it for some of the (not included) new acqf functions that complain about something other than our error here. It definitely needs to be fixed.
Having said that, the batch size change you suggest would not be enough for that.
What do you mean by the batch size change? My suggestion was simply to have an if-condition that checks for batch_size > 1 AND not is_mc ...
Yes, I could do that, but it won't solve any of the other issues that could cause different attribute errors.
Yes, and we can just explicitly link to the original function and then add stuff if that is fine with our linters
I propose the following; it's part of this PR via amending the last commit:

    """Generate recommendations from a discrete search space.

    Note: This method overwrites
    :func:`baybe.recommender.pure.base.PureRecommender._recommend_discrete`
    keeping the same characteristics except the ones mentioned here.

    Raises:
        NoMCAcquisitionFunctionError: If a non-Monte Carlo acquisition function is
            used with a batch size > 1.
    """

Keep in mind this won't be shown in the docs anyway because it's a private function.
Please let me know if that can go ahead.
I also notice inconsistent behavior here: in the same file we have _recommend_hybrid, which also overwrites the base class implementation, but it has its own completely copy-pasted docstring with an amended Raises section.
And the same problems were also part of the continuous and hybrid error raising, so I changed it there too.
It seems our previous workflow was to copy the entire docstring and amend it, which I did for those as well.
Ok, I'm not a happy guy but I think there is simply no "good" approach, so I'll simply look away 🙈
AdrianSosic left a comment:
Overall, looking great 👍🏼 Here are the remaining comments.
Co-authored-by: AdrianSosic <[email protected]>
Some progress towards refactoring the acqf interface and enabling more advanced acqfs.

Done:
- debotorchize

Not Done Yet:
- AdapterModel (does not make sense while Refactor surrogates and decorators #209 is open)

Issues:
- Some tests with custom surrogates fail, see separate thread. Resolved since we are not using the botorch factory anymore.
- For NEI I get `botorch.exceptions.errors.UnsupportedError: Only SingleTaskGP models with known observation noise are currently supported for fantasy-based NEI & LogNE`. Also, LogPI is available in botorch, but it is not imported to the top-level acqf package in botorch, so I ignored it here. I included a reverted commit pair so it can be checked out quickly to reproduce.