[Profiler] Induce Inductor Import before Profiling (pytorch#155243)
Fixes pytorch#151829
Summary:
Currently, inductor performs a lazy init that causes certain aten ops to run during a profiling run. This clutters the function events, especially for smaller traces. One earlier attempt was to simply remove the inductor import from the profiler entirely, but the import happens somewhere downstream anyway and the events still flood the profile.
To fix this, we induce the inductor import during prepare trace whenever inductor is present. This way, regardless of how the workload imports inductor, the actual init completes before tracing starts, resulting in a more accurate trace.
Test Plan:
Added a test. Also ran N7316820 manually: the first run went from producing many extraneous events to the output below, where the only difference from subsequent runs is the Runtime Triggered Module Loading entries, which are CUPTI overhead events:
```
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
                                             aten::mul_         1.40%     340.638us        99.92%      24.390ms      24.390ms       1.535us       100.00%       4.605us        4.605us             1
                                       cudaLaunchKernel         0.60%     146.533us        98.52%      24.049ms      24.049ms       0.000us         0.00%       3.070us        3.070us             1
                       Runtime Triggered Module Loading         6.14%       1.500ms         6.14%       1.500ms       1.500ms       1.535us       100.00%       1.535us        1.535us             1
                       Runtime Triggered Module Loading        91.78%      22.403ms        91.78%      22.403ms      22.403ms       1.535us       100.00%       1.535us        1.535us             1
void at::native::vectorized_elementwise_kernel<4, at...         0.00%       0.000us         0.00%       0.000us       0.000us       1.535us       100.00%       1.535us        1.535us             1
                                  cudaDeviceSynchronize         0.08%      20.031us         0.08%      20.031us      20.031us       0.000us         0.00%       0.000us        0.000us             1
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
```
```
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
                                             aten::mul_        82.81%     484.396us        94.26%     551.378us     551.378us       1.440us       100.00%       1.440us        1.440us             1
                                       cudaLaunchKernel        11.45%      66.982us        11.45%      66.982us      66.982us       0.000us         0.00%       0.000us        0.000us             1
void at::native::vectorized_elementwise_kernel<4, at...         0.00%       0.000us         0.00%       0.000us       0.000us       1.440us       100.00%       1.440us        1.440us             1
                                  cudaDeviceSynchronize         5.74%      33.581us         5.74%      33.581us      33.581us       0.000us         0.00%       0.000us        0.000us             1
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  -------------  ------------
```
Rollback Plan:
Differential Revision: D76056511
Pull Request resolved: pytorch#155243
Approved by: https://github.com/ngimel