Add support for running microbenchmarks using a manifest json #4912
This PR is part of a larger piece of work around running targeted performance tests, such as comparing multiple runtime versions side by side. Currently, the jobs/tests we run in performance tests are defined via command line arguments. However, those arguments can be difficult to construct, and some combinations of jobs cannot be run in a single invocation, only across separate ones.
With this PR, you can pass in a manifest.json argument which defines the set of jobs to run and per-test-case overrides such as fixed operation counts.
As an example, the following json can be used to validate dotnet/perf-autofiling-issues#60871:
With this, the operation counts for each test case are fixed, giving a much fairer comparison. This is not currently possible in BDN, as there is no way to apply per-test-case overrides.
The schema for defining jobs in the manifest is identical to the schema BDN itself uses for defining jobs, so any combination of jobs can be constructed with full flexibility.
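To give a rough sense of the shape, here is a hypothetical sketch of such a manifest. The property names and nesting (`jobs`, `benchmarks`, `invocationCount`, and so on) are illustrative assumptions rather than the actual schema; the job settings simply mirror BDN's own Job characteristics (Environment, Run) as described above.

```json
{
  "jobs": [
    {
      "id": "baseline",
      "environment": { "runtime": "net8.0" },
      "run": { "launchCount": 1, "warmupCount": 5, "iterationCount": 15 }
    },
    {
      "id": "candidate",
      "environment": { "runtime": "net9.0" },
      "run": { "launchCount": 1, "warmupCount": 5, "iterationCount": 15 }
    }
  ],
  "benchmarks": [
    {
      "filter": "System.Collections.Tests.*",
      "invocationCount": 1000,
      "unrollFactor": 1
    }
  ]
}
```

In a sketch like this, a fixed per-benchmark invocation count is what keeps the operation counts identical across the jobs being compared.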
I also changed our argument parsing logic to use the same library that BenchmarkDotNet itself uses, so that when you run `--help` on our executable, you can see both our custom arguments and BDN's arguments in the same help output.

I have a separate PR #4911 which works well alongside this one for creating multiple core_root payloads from the build artifacts, so that one can build a script that automatically validates performance changes detected by the auto-filer.
This PR needs a bit more testing on other configurations, as I have been focusing on the corerun scenario, but I am opening it now for early feedback.