Conversation

@Hrovatin (Collaborator) commented Oct 22, 2025

Add two new TL benchmarks:

  • Inverted Hartmann
  • Hartmann shifted in one dimension

@kalama-ai (Collaborator) left a comment


Thanks, @Hrovatin, for implementing the new benchmarks. Apart from some cleanup, I am fine with the implementation.

@Hrovatin Hrovatin requested a review from kalama-ai October 22, 2025 14:58
@AVHopp AVHopp requested a review from Copilot October 23, 2025 14:33
Copilot AI (Contributor) left a comment

Pull Request Overview

This PR adds two new transfer learning benchmark variants for the Hartmann function: an inverted version and a shifted version. These benchmarks enable testing transfer learning scenarios where the source and target functions differ in systematic ways.

  • Introduces a utility function for creating shifted Hartmann functions with configurable dimension-wise shifts
  • Adds two new benchmark functions: hartmann_tl_inv_3_20_15 (inverted) and hartmann_tl_shift_3_20_15 (first dimension shifted by 0.2)
  • Refactors the original benchmark to share common logic through a new _compose_hartmann_tl_3_20_15 helper function

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

  • benchmarks/domains/hartmann/utils.py: new utility module providing the get_shifted_hartmann function for creating dimension-shifted Hartmann functions
  • benchmarks/domains/hartmann/convergence_tl.py: refactors the existing benchmark and adds two new benchmark variants (inverted and shifted) using a shared composition function
  • benchmarks/domains/__init__.py: exports the two new benchmark functions
  • .github/workflows/benchmark.yml: updates the benchmark lists to include the two new benchmarks in the CI workflow


Hrovatin and others added 2 commits October 24, 2025 08:29
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Copilot <[email protected]>
Updated the docstring to clarify that the callable now shifts both dimensions and bounds for the Hartmann function.
Comment on lines 116 to 119
def _compose_hartmann_tl_3_20_15(
settings: ConvergenceBenchmarkSettings,
functions: callable,
) -> pd.DataFrame:
Collaborator

While I appreciate the idea and effort to unite the different Hartmann functions under a shared "umbrella", I think the implementation here can be improved a bit. As mentioned in the other comment, I am not a big fan of this internally defined functions argument, as it makes the code hard to read. Rather than this callable, I would prefer two dedicated keyword arguments, source and target. This would probably avoid the need for the internal function and make very clear which function is which.

Collaborator

And the code already shows that this would be better since you actually do exactly that extraction manually anyway :)

Collaborator Author

The issue is that to construct the source and target, we need some parameters from the _compose function, so they cannot be constructed in advance. These are the bounds and dimensions, which are defined globally in _compose.

Collaborator Author

I implemented it now as separate functions for source and target to get a "clear separation of source and target functions". However, ruff does not like lambdas, so I left them as internal functions.



def get_shifted_hartmann(
shifts: list[float], bounds: np.ndarray | None = None, **kwargs
Collaborator

Couldn't you hand over the Hartmann callable directly? You would then be able to access bounds and dim directly. It would also make clear that you will shift the Hartmann function that is handed over as an argument, i.e., when I give it a 6-dim Hartmann, I will get a shifted 6-dim Hartmann.

Collaborator Author

The issue is that the new bounds must also be used in the Hartmann callable, as otherwise the shifted inputs would be out of bounds.
I could reset the bounds param in the Hartmann callable that was previously created and passed as input, but I am not sure if we actually want that.

@AVHopp (Collaborator) left a comment

Like the way this is going! Have some more ideas for further simplification

Comment on lines +170 to +172
if target is None:
target = Hartmann
target_function = target(dim=source_function.dim, bounds=source_function._bounds)
Collaborator

Why don't you directly define the target also within the bodies of the corresponding functions? As you always call the _compose function with target=None anyway, I do not see any reason why you shouldn't simply handle the target the same as the source.

@Hrovatin (Collaborator, Author) commented Nov 4, 2025

It reduces the amount of code in the bodies of the individual functions by having a default (which I think is reasonable here, since it is literally just the standard Hartmann). But if strongly preferred, I can get rid of the default.

base_func: Hartmann = field(init=False)
"""The underlying Hartmann function instance."""

def __attrs_post_init__(self) -> None:
Collaborator

Wouldn't it be easier to simply have a constructor HartmannShifted.from_shift(original_function: callable, shifts: list[float])? That would avoid handling all of those attributes in the __attrs_post_init__, you could easily access the bounds and dim from the original_function, and the usability would be clearer.

Collaborator Author

Why would it make any difference if I change __attrs_post_init__ to from_shift? Wouldn't that just rename the same code block? And it is easier to use in the current way, as it is consistent with the interface of the botorch benchmarking functions (e.g. Hartmann), so all code can be used with both equally.

Re accessing bounds and dim: I don't see how returning a normal Hartmann would work, since we need to do the shift within the call. So we need a different class.

@Hrovatin Hrovatin requested a review from AVHopp November 6, 2025 13:57
DataFrame containing benchmark results.
"""

def source(dim: int, bounds: np.ndarray) -> callable:
Collaborator

I think for typing one should use Callable, not callable.

from benchmarks.domains.hartmann.utils import HartmannShifted


def hartmann_tl_3_20_15(settings: ConvergenceBenchmarkSettings) -> pd.DataFrame:
@Scienfitz (Collaborator) commented Nov 6, 2025

Can you just make one hartmann_tl function that accepts flags for shift / invert? You can fully encapsulate _compose_hartmann_tl_3_20_15 in it as well and save its unnecessary documentation, which is 4x the amount of code.

You can then create the three respective functions by applying partial:

  • hartmann_tl_3_20_15=partial(hartmann_tl, shift=False, invert=False)
  • hartmann_tl_shift_3_20_15=partial(hartmann_tl, shift=True, invert=False)
  • hartmann_tl_inv_3_20_15=partial(hartmann_tl, shift=False, invert=True)

It will save a lot of boilerplate code and documentation
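The wiring suggested above can be sketched as follows; hartmann_tl and its body are hypothetical stand-ins, since the real function would build and run the benchmark scenario:

```python
from functools import partial


def hartmann_tl(settings, *, shift: bool = False, invert: bool = False) -> dict:
    # The real function would construct and run the TL benchmark; here we
    # just return the resolved configuration to show the partial wiring.
    return {"settings": settings, "shift": shift, "invert": invert}


# The three named benchmarks become one-liners:
hartmann_tl_3_20_15 = partial(hartmann_tl, shift=False, invert=False)
hartmann_tl_shift_3_20_15 = partial(hartmann_tl, shift=True, invert=False)
hartmann_tl_inv_3_20_15 = partial(hartmann_tl, shift=False, invert=True)
```

One thing worth checking with this approach: functools.partial objects do not carry a __name__ attribute, so if any benchmark registration or reporting code relies on the function's name, a thin def wrapper (or functools.wraps) may be needed instead.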

and 0 for dimensions 0 and 2.
"""

kwargs_dict: dict = field(factory=dict)
@Scienfitz (Collaborator) commented Nov 6, 2025

If you allow base_func to be initialized, then you can completely remove kwargs_dict, because all the info needed to create a shifted copy is included in base_func. You would then create this class by passing an instance of the "normal" Hartmann function that is supposed to be shifted, which seems more natural to me than the interface created here.

Another, and probably cleaner, option would be to create a class and derive it from Hartmann (if Hartmann is a class). All you have to add is a shift attribute and modify the internal attributes. For instance, you could apply the shift to the bounds before you call super().__init__, etc.
But you don't have to recode the interface in any way.

Collaborator

I've just had a brief look and also think you're making life hard for yourself here. The inheritance approach suggested by @Scienfitz seems like a good idea, though I would then not use attrs for the construction (since we're in torch world).
