[Feat] Add swap like grammar in tuple assignment #1185
Conversation
👋 Hi! Thank you for contributing to the TileLang project. We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work! 🚀
Walkthrough

Adds two-phase binding to support multi-target unpacking in the AST, remaps underscore placeholders in the builder for readability, refactors dtype construction delegation, and adds a test validating tuple-swap kernels on CUDA tensors.
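The underscore remapping mentioned in the walkthrough can be sketched in plain Python as a counter-based renamer (hypothetical class and method names for illustration, not the actual builder code):

```python
import itertools

class UnderscoreRemapper:
    """Give each "_" placeholder a unique, readable temporary name."""

    def __init__(self):
        self._counter = itertools.count()

    def remap(self, name):
        # Only the anonymous placeholder is rewritten; real names pass through.
        if name == "_":
            return f"_tmp_{next(self._counter)}"
        return name

r = UnderscoreRemapper()
print(r.remap("_"))  # _tmp_0
print(r.remap("_"))  # _tmp_1
print(r.remap("a"))  # a
```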
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test as Test / User
    participant AST as AST Compiler
    participant Builder as Builder
    participant TIR as Generated TIR
    Test->>AST: Parse assignment `a, b = b, a`
    rect rgb(240, 248, 255)
        Note over AST: Phase 1 — bind RHS to temporaries
        AST->>Builder: bind("_", value_b)
        Builder->>Builder: remap "_" → "_tmp_0"
        Builder->>TIR: __tb.bind("_tmp_0", value_b)
        AST->>Builder: bind("_", value_a)
        Builder->>Builder: remap "_" → "_tmp_1"
        Builder->>TIR: __tb.bind("_tmp_1", value_a)
    end
    rect rgb(240, 255, 240)
        Note over AST: Phase 2 — bind targets to temporaries
        AST->>Builder: bind("a", _tmp_0)
        Builder->>TIR: __tb.bind("a", _tmp_0)
        AST->>Builder: bind("b", _tmp_1)
        Builder->>TIR: __tb.bind("b", _tmp_1)
    end
    TIR->>Test: Executed swap kernel (results returned)
```
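In plain Python terms, the two phases in the diagram behave like the following sketch (the `env` dict and function name are illustrative stand-ins for the builder's binding environment, not the actual API):

```python
def two_phase_swap(env):
    # Phase 1: bind each RHS value to a fresh temporary
    # before any target is touched.
    _tmp_0 = env["b"]   # mirrors __tb.bind("_tmp_0", value_b)
    _tmp_1 = env["a"]   # mirrors __tb.bind("_tmp_1", value_a)
    # Phase 2: bind the targets to the temporaries.
    env["a"] = _tmp_0   # mirrors __tb.bind("a", _tmp_0)
    env["b"] = _tmp_1   # mirrors __tb.bind("b", _tmp_1)
    return env

print(two_phase_swap({"a": 1.0, "b": 2.0}))  # {'a': 2.0, 'b': 1.0}
```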
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~45 minutes
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- testing/python/language/test_tilelang_language_frontend_v2.py (1 hunks)
- tilelang/language/v2/ast.py (3 hunks)
- tilelang/language/v2/builder.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
testing/python/language/test_tilelang_language_frontend_v2.py (2)
- tilelang/jit/__init__.py (3)
  - jit (275-276), jit (280-291), jit (294-361)
- tilelang/language/v2/builder.py (2)
  - prim_func (136-140), prim_func (573-666)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Quick Lint
```python
data = torch.tensor([1.0, 2.0], dtype=torch.float32).cuda()
k_swap_var(data)
ref = torch.tensor([2.0, 1.0], dtype=torch.float32).cuda()
torch.testing.assert_close(data, ref)

k_swap_idx = swap_idx()
data = torch.tensor([1.0, 2.0], dtype=torch.float32).cuda()
k_swap_idx(data)
ref = torch.tensor([2.0, 1.0], dtype=torch.float32).cuda()
torch.testing.assert_close(data, ref)
```
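For reference, the observable effect the test checks for is a plain element swap; a CPU-only sketch using a Python list in place of the CUDA tensor:

```python
def swap_first_two(buf):
    # Same observable effect as the swap kernels on the device buffer.
    buf[0], buf[1] = buf[1], buf[0]
    return buf

print(swap_first_two([1.0, 2.0]))  # [2.0, 1.0]
```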
Skip when CUDA is unavailable
Line 294 calls .cuda() without checking torch.cuda.is_available(). On CPU-only machines (typical CI runners), this raises AssertionError: Torch not compiled with CUDA, causing the suite to fail even though the swap logic itself is valid. Please guard the test so it skips when CUDA isn’t present.
```diff
@@
 def test_swap_logic():
+    if not torch.cuda.is_available():
+        import pytest
+        pytest.skip("CUDA is required for test_swap_logic")
@@
     k_swap_var = swap_var()
     data = torch.tensor([1.0, 2.0], dtype=torch.float32).cuda()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In testing/python/language/test_tilelang_language_frontend_v2.py around lines
294 to 303, the test unconditionally calls .cuda() which fails on machines
without CUDA; guard the CUDA-dependent assertions by checking
torch.cuda.is_available() and skip the CUDA-specific section when it's False
(e.g., call pytest.skip("CUDA unavailable") or add a pytest.mark.skipif
decorator), or alternatively run the same tensor ops on CPU when CUDA is not
present so the swap logic is still validated without requiring GPU.
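The decorator-based guard the prompt suggests can be sketched with the stdlib `unittest.skipUnless` (equivalent in spirit to `pytest.mark.skipif`; the test body here is a placeholder, and `torch` is treated as an optional dependency):

```python
import unittest

try:
    import torch  # optional dependency; may be absent on CI runners
    HAS_CUDA = torch.cuda.is_available()
except ImportError:
    HAS_CUDA = False

class TestSwapLogic(unittest.TestCase):
    @unittest.skipUnless(HAS_CUDA, "CUDA unavailable")
    def test_swap_logic(self):
        # CUDA-dependent kernel checks would go here; skipped without a GPU.
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSwapLogic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True (the test either passes or is skipped)
```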
This PR introduces swap-like grammar:

When entering:

tilelang generates:

Each step is required for the correctness of the binding. For example, if we ignore step 2 and generate the following code:

it is identical to the code below, which is wrong.
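To see concretely why skipping the temporary-binding step is wrong, a plain-Python sketch of the naive lowering (the `env` dict stands in for the binding environment):

```python
def naive_swap(env):
    # Without temporaries, the second assignment reads the
    # already-overwritten "a" instead of its original value.
    env["a"] = env["b"]
    env["b"] = env["a"]  # bug: env["a"] already holds the old "b" here
    return env

print(naive_swap({"a": 1, "b": 2}))  # {'a': 2, 'b': 2} -- both end up as old b
```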
Summary by CodeRabbit
Release Notes
Refactor
Tests