Move checks from nonzero kernel to operator #1991
Pull Request Overview
This PR optimizes the nonzero operation by moving early validation checks from the kernel implementation to the operator level, improving performance for edge cases. The changes also standardize naming conventions and XPU-specific constants.
- Move early return for empty tensors from kernel to operator level
- Replace PyTorch MAX_DIMS with XPU-specific XPU_MAX_TENSORINFO_DIMS constant
- Rename tensor parameter to out for consistency with operation schema
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
File | Description
---|---
src/ATen/native/xpu/Nonzero.cpp | Adds early return for empty tensors and updates the dimension limit constant
src/ATen/native/xpu/sycl/NonzeroKernel.cpp | Removes the redundant empty tensor check and renames variables for schema consistency
Comments suppressed due to low confidence (1)
src/ATen/native/xpu/sycl/NonzeroKernel.cpp:1
- The `range` tensor is declared but never used. The `range_begin` pointer is set to `nullptr` on line 99, so the `range` tensor allocation should be removed since it is not needed.
Rename `MAX_DIMS` to `XPU_MAX_TENSORINFO_DIMS` (the XPU equivalent), and `tensor` to `out` to match the op schema.