Feat: tensorfy via rank-width #346
Conversation
Hi kuyanov, this looks great. More intelligent support for tensor contraction has been on the list essentially forever. Just a couple of things, since I've been trying to be a bit better at ensuring code quality before things are merged in: there are a couple of methods you introduce that don't have docstrings, and certain parameters don't have a mypy type set.
Also, not sure how much sense it makes for this feature, but for people to actually find and use it, it is useful to point it out somewhere. For instance, adding a notebook in the demos folder demonstrating cases where certain strategies perform better than others, or even just a note in the demos/AllFeatures notebook that there are other strategies for contracting, with a link to the appropriate place in the documentation.
No problem, I will add docstrings to all functions. Galois is not much of a problem, though I have been thinking of getting rid of it for a long time due to performance issues. I will modify rank_factorise(...) so that it can return the inverted factors instead, which is more efficient than calling .inverse() on them afterwards.
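For reference, here is a minimal sketch of what a rank factorisation over GF(2) looks like in plain NumPy rather than galois. The name rank_factorise_gf2 and its signature are hypothetical illustrations, not the PR's actual code, and the proposed extra output (left/right inverses of the factors in place of later .inverse() calls) is omitted here.

```python
import numpy as np

def rank_factorise_gf2(M):
    """Rank factorisation over GF(2): return (C, R) such that
    M = (C @ R) mod 2, with C of shape (m, r) and R of shape (r, n),
    where r is the GF(2) rank of M."""
    E = (np.asarray(M, dtype=np.uint8) % 2).copy()
    m, n = E.shape
    pivots = []
    row = 0
    for col in range(n):
        if row == m:
            break
        # Find a pivot entry in this column at or below the current row.
        candidates = np.nonzero(E[row:, col])[0]
        if candidates.size == 0:
            continue
        piv = row + candidates[0]
        E[[row, piv]] = E[[piv, row]]            # move the pivot row into place
        # Clear this column in every other row (addition mod 2 is XOR).
        others = np.nonzero(E[:, col])[0]
        others = others[others != row]
        E[others] ^= E[row]
        pivots.append(col)
        row += 1
    r = len(pivots)
    # Pivot columns of M form a basis of its column space; the reduced rows
    # give the coordinates of every column of M in that basis.
    C = np.asarray(M, dtype=np.uint8)[:, pivots] % 2
    R = E[:r, :]
    return C, R

# Quick self-check on a random binary matrix.
M = np.random.randint(0, 2, size=(6, 8), dtype=np.uint8)
C, R = rank_factorise_gf2(M)
assert np.array_equal((C @ R) % 2, M)
```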
I see the BenchmarkQuimb notebook already contains benchmarks of tensorfy against Quimb. I guess it'd be a good place to add the comparison?
Yes, that sounds like a good place. Maybe we can then also rename it to something like BenchmarkTensorContraction.
BenchmarkTensorContraction is finally complete - it is based on the former BenchmarkQuimb with classes |
Overall, the rank-width methods turn out to be more reliable: the naive approach has a better constant factor in some cases, but it is sometimes exponentially slower.
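As a rough illustration of the kind of comparison the notebook makes, one could time the two strategies on the same diagram. This is only a sketch assuming this PR's strategy kwarg; time_strategy is a hypothetical helper and the circuit parameters are arbitrary.

```python
import time
import pyzx as zx

def time_strategy(g, strategy):
    """Return the wall-clock time of one tensor contraction of g."""
    start = time.perf_counter()
    zx.tensorfy(g.copy(), strategy=strategy)   # 'strategy' kwarg is introduced by this PR
    return time.perf_counter() - start

g = zx.generate.cliffordT(6, 40)               # random 6-qubit Clifford+T ZX-diagram
for strategy in ('naive', 'rw-auto'):
    print(f"{strategy}: {time_strategy(g, strategy):.3f}s")
```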
Efficient implementation of tensorfy() based on my dissertation "A Rank-Width-Based Approach to Quantum Circuit Simulation via ZX-Calculus". Added the following components:
Now tensorfy takes an additional kwarg 'strategy' (defaulting to 'naive'); the new routine is accessible with tensorfy(g, strategy='rw-auto').
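A minimal usage sketch, assuming the API described above; the strategy names 'naive' and 'rw-auto' are taken from this PR, and the assertion only expresses the expectation that both strategies compute the same tensor.

```python
import numpy as np
import pyzx as zx

g = zx.generate.cliffordT(4, 20)                       # random Clifford+T ZX-diagram

t_naive = zx.tensorfy(g.copy(), strategy='naive')      # existing contraction order
t_rw    = zx.tensorfy(g.copy(), strategy='rw-auto')    # rank-width-guided contraction (this PR)

# Both strategies should compute the same linear map, up to floating-point error.
assert np.allclose(t_naive, t_rw)
```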