fix: go and python benchmarks for hatchet #3
Hey folks,
This PR makes some essential updates to the Hatchet benchmark so that it more accurately reflects Hatchet's performance.
The crux of the issue: while the Temporal benchmarks spawn each fibonacci call as an individual Activity, the Hatchet benchmarks are declared as a predefined DAG. For 400 iterations, this produces a 400-step DAG that cannot be parallelized (the results across 10 workers seem to reflect this bottleneck).
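To make the shape of the problem concrete, here is a minimal, self-contained sketch of a chained DAG. It is plain asyncio, not Hatchet or Temporal SDK code, and the names (`fib_step`, `chained_dag`) and the 10 ms per-step delay are illustrative assumptions. Because every step consumes the output of the previous one, only one step is ever runnable at a time, so adding workers cannot help:

```python
# Illustrative only: a predefined N-step chain where step i depends on step i-1.
# Not Hatchet or Temporal SDK code; the event loop stands in for the workers.
import asyncio
import time


async def fib_step(a: int, b: int) -> int:
    await asyncio.sleep(0.01)  # stand-in for per-step scheduling overhead + work
    return a + b


async def chained_dag(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):  # each step must wait for the previous one to finish
        a, b = b, await fib_step(a, b)
    return a


start = time.perf_counter()
asyncio.run(chained_dag(400))
# Wall time is roughly 400 * 0.01s no matter how many workers are available.
print(f"chained DAG: {time.perf_counter() - start:.2f}s")
```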
This PR updates the Go and Python implementations to use a parent-child task pattern, where the parent task is analogous to the Temporal Workflow and the child tasks are analogous to Temporal Activities.
Based on my local testing, this brings the Hatchet benchmarks fully in line with Temporal's performance. This is also the approach Hatchet recommends for this kind of dynamic orchestration once a workflow grows beyond a few steps, so it's a much fairer benchmark.
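For contrast, here is the same workload reshaped into the parent-child pattern, again as a hedged sketch rather than actual Hatchet SDK code (`fib_parent`, `fib_child`, and the per-child delay are illustrative assumptions). The parent only orchestrates, like a Temporal Workflow, and each child is an independent unit of work that any idle worker can pick up:

```python
# Illustrative only: a parent task fans out independent child tasks.
# Not Hatchet or Temporal SDK code; the event loop stands in for the workers.
import asyncio
import time


async def fib_child(n: int) -> int:
    await asyncio.sleep(0.01)  # stand-in for per-task scheduling overhead
    a, b = 0, 1
    for _ in range(n):         # the actual fibonacci work
        a, b = b, a + b
    return a


async def fib_parent(iterations: int) -> list[int]:
    # The parent only spawns and awaits children; since the children do not
    # depend on each other, they can run concurrently across all workers.
    return await asyncio.gather(*(fib_child(i) for i in range(iterations)))


start = time.perf_counter()
asyncio.run(fib_parent(400))
print(f"parent/child: {time.perf_counter() - start:.2f}s")  # children overlap
```

With the same 400 units of work, the chained sketch serializes while the fan-out sketch finishes in roughly the time of the slowest child plus orchestration overhead, which is why the predefined 400-step DAG was penalizing Hatchet.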
Other notes
These benchmarks were run against Hatchet v0.71.0.
Local Benchmarks
I don't expect these to be the final results, but they are meant to illustrate the discrepancy, particularly for Go. I've also included commands for easy repro:
We'd appreciate it if you could update the benchmarks to reflect this fairer comparison. Let me know if there's anything else needed for this PR.