Use parallel reduce threading primitive in covariance algorithm #3126


Draft — wants to merge 23 commits into base: main
Conversation

Vika-F
Contributor

@Vika-F Vika-F commented Mar 17, 2025

Description

Adds a new daal::Reducer interface class that defines the API that has to be implemented by an algorithm to enable the use of reduction primitives based on tbb::parallel_reduce.

Two new threading primitives were added:

  • threader_reduce implements parallel reduction using dynamic work balancing;
  • static_threader_reduce implements parallel reduction using static work balancing.

The dense covariance algorithm in oneDAL was modified to use the new static_threader_reduce primitive instead of the previous combination of static_threader_for and a single-threaded reduction.

The tls_data_t structure, previously used as thread-local storage for partial results in the covariance algorithm, was replaced with the CovarianceReducer class, which implements the new daal::Reducer interface to perform parallel reduction.

This PR depends on #3159 because CovarianceReducer uses TArrayScalableCalloc to store the partial results, and the performance of TArrayScalableCalloc is currently suboptimal due to unaligned memory stores (up to 45% of the total covariance compute time). PR #3159 fixes that issue.


PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission for that.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least summary table with measured data, if performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

Review comment on this diff hunk:

    {
        return;
    }
    tls_data_t<algorithmFPType, cpu> result = tbb::parallel_reduce(
Contributor:

Wouldn't it be better to make a NUMA-aware version of this function that would first reduce within nodes and then globally?

Contributor Author:

@david-cortes-intel Thank you for looking into this.

I would prefer to go step-by-step with NUMA.
If a non-NUMA version is enough to improve performance, I would be more likely to add it first.
If not, then I'll probably add NUMA awareness to the primitives.

I should also say that this version of the code is just an initial quick-and-dirty commit. The code will change a lot; at minimum it should pass the testing, and all the tbb:: calls should be moved into the threading layer.

@Vika-F Vika-F added the perf Performance optimization label Apr 2, 2025
Labels: enhancement, perf (Performance optimization)