Implementation of torch-to-linalg lowering of AtenOuterOp #4099
Conversation
- Defined the op in Linear.cpp

TODO:
- Testing, and perhaps add some test(s) inside torch-mlir?
Hi, this is my first time contributing to the project - if you have any feedback or suggestions, I would really appreciate it.
Thanks for picking this up. There isn't any reason to include quantization logic for this op, since it doesn't have any qdq fusion implemented. It would also be a bit better to implement this directly as a linalg.generic. Also, please do add e2e tests somewhere in the e2e test suite.
- Rewrote ConvertAtenOuterOp without unsqueezing
- Replaced linalg::MatmulOp with linalg::GenericOp for building the result of the op
- Added error messages
- Added a test case in the e2e tests, placed in matmul.py
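For context on what the linalg::GenericOp lowering computes: the result is an [m, n] tensor with out[i, j] = lhs[i] * rhs[j]. A minimal NumPy sketch of those semantics (a reference only, not the actual torch-mlir code):

```python
import numpy as np

def outer_reference(lhs, rhs):
    """Mirrors the linalg.generic body: out[i, j] = lhs[i] * rhs[j],
    written into a zero-initialized [m, n] result tensor."""
    m, n = lhs.shape[0], rhs.shape[0]
    out = np.zeros((m, n), dtype=np.result_type(lhs, rhs))
    for i in range(m):
        for j in range(n):
            out[i, j] = lhs[i] * rhs[j]
    return out

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])
print(outer_reference(a, b))  # same result as np.outer(a, b)
```

This is exactly what torch.outer / np.outer produce, which is why the e2e test in matmul.py can check against the eager result.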
After a change to the init tensor for the generic, I think this looks good!
Thanks for the changes.
Also, be sure to run pre-commit.
I changed it to the init tensor and ran pre-commit - everything looks good on my end.
Hi @amemov, can you please take a look at the CI failure?
Hi @vivekkhandelwal1, I skimmed it briefly before, and I didn't see any failures specifically related to this change. I will take a better look at it today, but so far I'm not really sure what exactly I need to modify or add here.
Hi @amemov, some test(s) is/are crashing for the fx_importer config. Most probably, it will be the one that you have added. In order to find out which test is crashing, you need to run the tests serially. You may use the following command:
The above command will run all the tests one by one, and the last test run will be the one that's crashing. Then, you can figure out the fix for that.
@vivekkhandelwal1 I resolved it by changing the casting and the dimensions of the operands. On my machine, the AtenOuter test now passes.
Location loc = op->getLoc();
Value lhs = adaptor.getSelf();
Value rhs = op->getOperand(1);
Maybe this needs to be in terms of the adaptor.
The e2e test should work with dynamic dims, too. So I'd actually like it if you added a test for each case.
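On the dynamic-dims point: the lowering has to produce correct results regardless of whether the operand sizes are statically known. A plain NumPy stand-in that exercises the outer-product semantics across several runtime-chosen sizes (this is not the torch-mlir e2e test-suite API, where dynamic dims are typically annotated as -1; the helper name is hypothetical):

```python
import numpy as np

def check_outer(m, n):
    # Sizes chosen at call time stand in for "dynamic" dims: the same
    # computation must hold whether m and n are compile-time known or not.
    lhs = np.random.rand(m)
    rhs = np.random.rand(n)
    # Broadcasted form of out[i, j] = lhs[i] * rhs[j]
    got = lhs[:, None] * rhs[None, :]
    assert got.shape == (m, n)
    assert np.allclose(got, np.outer(lhs, rhs)), (m, n)

for m, n in [(3, 5), (1, 7), (64, 2)]:
    check_outer(m, n)
print("ok")
```

A real e2e test for the dynamic case would declare the module's input shapes as dynamic and compare the compiled result against torch.outer on concrete inputs.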
Type newResultType = getTypeConverter()->convertType(op.getType());

// Create a zero-initialized tensor with shape [lhsDim, rhsDim]
Value initTensor = createInitTensor(
This should just be rewriter.create<tensor::EmptyOp>(...)
An attempt to resolve #4093
Initial implementation:
TODO: