Benchmarking cupy based connected component labelling and bumping numpy to v2 #220
base: main
Conversation
Interestingly, I just found out that cupy does not "yet" support numpy v2 (numpy/numpy#26191). For now, numpy is downgraded if the user requests gpu.

Hey, I agree, very cool stuff. This can be solved like here:

Did add that to the toml in this commit. But I will keep that in mind before the final push. It does! Thanks @Hendrik-code and @neuronflow. I'll work on this and make a commit soon.

Should it also close #216?
It should! If, by the time this is integrated, cupy still does not support numpy v2, I think a warning/notice should be added somewhere saying that using "gpu acceleration" will bump you down to numpy v1. What do you think?
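For illustration, a minimal sketch of what such a notice could look like; the helper name and message below are hypothetical, not code from this PR:

```python
import warnings

import numpy as np


def warn_if_numpy_downgraded() -> None:
    # Hypothetical helper: tell users that requesting gpu support
    # pins numpy below v2 until cupy gains numpy v2 compatibility.
    if np.__version__.startswith("1."):
        warnings.warn(
            "GPU acceleration via CuPy currently requires numpy < 2, "
            "so numpy was kept at a v1 release.",
            UserWarning,
        )
```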
This is still a draft, correct @aymuos15?

Yes. Will work on this once the part stuff is merged.

This is my first pass for the cupy shift. Instead of changing the entire repo, I think starting with just the connected component step should be a sufficient boost. Regarding the CI errors, may I suggest using `python -m poetry install --all-extras` instead of `python -m poetry install` in tests.yml for the GitHub workflows?

yes, sounds good. Go for it! :)

Awesome! Thanks a lot :D Since this is a dotfile change, do you mind pushing a PR for it? I think not altering the .gitignore or dotfiles in this PR would be better?

@aymuos15 yes, please go for the dotfile PR
Pull Request Overview
This PR adds CuPy GPU-accelerated support for connected component labeling and upgrades numpy to version 2. The changes aim to improve performance for large-scale connected component operations by leveraging GPU computation.
- Adds CuPy backend for GPU-accelerated connected component labeling with significant performance improvements
- Upgrades numpy to version 2 to resolve compatibility issues and enable usage in Google Colab
- Implements comprehensive test coverage for the new CuPy functionality with proper fallback handling
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| unit_tests/test_cupy_connected_components.py | Complete test suite for CuPy backend functionality with proper error handling and fallbacks |
| unit_tests/test_config.py | Updates config tests to include CuPy backend validation |
| pyproject.toml | Adds optional CuPy dependencies with CUDA version variants |
| panoptica/utils/constants.py | Extends CCABackend enum to include cupy option |
| panoptica/_functionals.py | Implements CuPy backend in connected components function |
| benchmark/benchmark.py | Adds CuPy benchmarking capabilities with performance comparisons |
| .github/workflows/tests.yml | Adds CI workflow for testing CUDA functionality |
```python
import cupy as cp
from cupyx.scipy.ndimage import label as cp_label
```
Copilot AI (Aug 30, 2025):
These imports should be moved outside the try block to avoid repeated import overhead. Consider moving these imports to the module level with a try-except block to check CuPy availability once.
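A minimal sketch of the module-level pattern Copilot is suggesting; the `CUPY_AVAILABLE` flag name matches the one used elsewhere in this PR, the rest is illustrative:

```python
# Check CuPy availability once at import time instead of inside each call.
try:
    import cupy as cp
    from cupyx.scipy.ndimage import label as cp_label

    CUPY_AVAILABLE = True
except ImportError:
    cp = None
    cp_label = None
    CUPY_AVAILABLE = False
```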
```python
        float: Time taken to label the mask in seconds, or None if CuPy is not available.
    """
    if not CUPY_AVAILABLE:
        return None

    # Transfer data to GPU
    mask_gpu = cp.asarray(mask)
```
Copilot AI (Aug 30, 2025):
GPU memory allocation should include error handling for out-of-memory conditions, especially for large arrays. Consider wrapping this in a try-except block to catch CUDA memory errors.
Suggested change:

```diff
-        float: Time taken to label the mask in seconds, or None if CuPy is not available.
+        float: Time taken to label the mask in seconds, or None if CuPy is not available or out of memory.
     """
     if not CUPY_AVAILABLE:
         return None
     # Transfer data to GPU
-    mask_gpu = cp.asarray(mask)
+    try:
+        mask_gpu = cp.asarray(mask)
+    except cp.cuda.memory.OutOfMemoryError:
+        print("CuPy OutOfMemoryError: Unable to allocate GPU memory for mask. Skipping GPU benchmark.")
+        return None
```
```python
del mask_gpu
cp.get_default_memory_pool().free_all_blocks()
```
Copilot AI (Aug 30, 2025):
Calling free_all_blocks() after every benchmark run is overly aggressive and may impact performance measurements. Consider using cp.get_default_memory_pool().free_all_blocks() only at the end of all benchmarks or using a context manager for better memory management.
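A hedged sketch of the context-manager alternative Copilot mentions; the `gpu_memory_scope` name is made up for illustration:

```python
import contextlib

import cupy as cp


@contextlib.contextmanager
def gpu_memory_scope():
    # Free pooled GPU memory once when the scope exits, rather than
    # after every individual benchmark run.
    try:
        yield
    finally:
        cp.get_default_memory_pool().free_all_blocks()
```

The benchmark loop could then run inside a single `with gpu_memory_scope():` block so cleanup happens once at the end.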
Hmm, what's going on with our tests here? :)
```yaml
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
```
Should this include 3.13, @aymuos15?
@Hendrik-code that totally works! I will look into the part-breaking aspect of that today. @neuronflow I will make that change post Hendrik's merge then. Thank you for pointing that out!

@aymuos15 probably the same poetry issue here.

Now that #228 is merged, we merge this, right? (sorry, this got confusing for me) @aymuos15 @neuronflow

I would suggest just keeping this for the end. Best not to mix the cupy stuff into the actual repo before the voronoi stuff is done?

I also got confused. Probably we should work with proper tickets/issues and relationships in the future to avoid such a situation? :)

Agreed. I think there were too many things happening at once, which unfortunately were dependent on each other, and the main branch on my fork was polluted from previous work, which kinda messed things up. Will definitely keep this in mind going forward.

Benchmarking cupy based component labelling
In `benchmark/benchmark.py`, I have now added a cupy option to the mix. I think this is the first step to understanding why this would be useful. I think the results are very interesting, and the overall repo would benefit from just shifting this step to the gpu. The initial discussion was in #209. Quick tally:

If this is agreed upon, I can try having a stab at the best way to integrate this all over the repo.
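For reference, a hedged sketch of how the gpu timing could look; the `time_cupy_label` helper below is illustrative, not the actual `benchmark/benchmark.py` code. The explicit synchronization matters because cupy launches kernels asynchronously:

```python
import time

import cupy as cp
from cupyx.scipy.ndimage import label as cp_label


def time_cupy_label(mask) -> float:
    mask_gpu = cp.asarray(mask)        # host-to-device transfer
    cp.cuda.Stream.null.synchronize()  # keep the transfer out of the timing
    start = time.perf_counter()
    cp_label(mask_gpu)
    cp.cuda.Stream.null.synchronize()  # wait for the labeling kernel to finish
    return time.perf_counter() - start
```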
The only downside is that, in my own experience, cupy can sometimes throw install errors. Again, it is not mandatory to have this; it can automatically fall back to the cpu.
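As a rough illustration of that fallback, a minimal sketch; the dispatch below is an assumption, not panoptica's actual implementation:

```python
import numpy as np
from scipy.ndimage import label as scipy_label


def connected_components(mask: np.ndarray):
    # Use the gpu path when cupy imports cleanly; otherwise fall back
    # to scipy on the cpu.
    try:
        import cupy as cp
        from cupyx.scipy.ndimage import label as cp_label
    except ImportError:
        return scipy_label(mask)
    labeled_gpu, num_features = cp_label(cp.asarray(mask))
    return cp.asnumpy(labeled_gpu), num_features
```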
Bumping numpy to v2
This PR also bumps numpy to the latest version. It is based on the discussion in #216 and solves #211.

Running the entire test suite throws no errors whatsoever.

I think this is important because people are currently unable to use panoptica on Colab, which I think is very rate-limiting for adoption.