The zkevm folder has received a rapid influx of new tests over the last few weeks. As we parallelized work across many people, we intentionally kept PRs from overlapping to avoid conflicts.
This means there are a few tasks we should revisit once the dust settles. I'll leave a list here; please feel free to add more so we can address them later.
- Review test names and how they're organized in files
- Many tests share the usual `code_prefix + iter_loop + code_suffix` construction. See if we can extract this logic so that most of them reuse a single generic builder (a sketch follows this list).
- Revisit any `TODO` comments and check whether they still apply. If they do, at a minimum create a separate issue to keep track of them.
- Figure out a way to verify that the benchmark tests behave as expected: most (all?) use a while loop, so we should verify either at the test level OR at the runtime level (with a post-state check) that the test behaves as expected. Note that the runtime level implicitly means adding some kind of verification logic to the test, which will cost gas and will also add to the counted zk cycles, thus impacting the outcome. It would be very nice to have some kind of assertion in the test itself (a post-state sketch follows this list; for context, also see attempts like 3e2cf75#r2127226177) to verify that:
  - Loop runs three or more times
  - Loop is balanced (no stack under/overflows)
- Check whether the suite contains duplicate tests, i.e. tests that run the same code, or a small variation of it with the same goal, in the same setting (nice to have; a duplicate-detection sketch follows this list).
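
For the prefix/loop/suffix extraction, here is a minimal sketch of what a shared builder could look like. Everything below is hypothetical: `LoopingCode` and `build_looping_code` are illustrative names, not an existing API in this repo, and a real version would also emit the jump scaffolding around the loop body.

```python
# Hypothetical sketch only: `LoopingCode` / `build_looping_code` are
# illustrative names, not part of the existing test tooling.
from dataclasses import dataclass


@dataclass
class LoopingCode:
    """The three segments that most benchmark tests currently inline."""

    prefix: bytes  # setup code, executed once before the loop
    body: bytes    # the measured workload, repeated each iteration
    suffix: bytes  # teardown code, executed once after the loop


def build_looping_code(parts: LoopingCode) -> bytes:
    """Compose runtime code from the common prefix/loop/suffix pattern.

    A real helper would also emit the JUMPDEST/JUMPI scaffolding that
    turns `body` into a while loop; this sketch only marks the single
    composition point that each test duplicates today.
    """
    return parts.prefix + parts.body + parts.suffix
```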
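
For the runtime-level check, one option is to have the loop body increment a counter that the code suffix writes to storage, then assert on it in the post-state. A minimal sketch of the assertion side, where `COUNTER_SLOT` and `check_iteration_count` are assumed, hypothetical names:

```python
# Hypothetical sketch: assumes the loop body increments a counter that
# the code suffix SSTOREs into COUNTER_SLOT before halting.
COUNTER_SLOT = 0


def check_iteration_count(storage: dict[int, int], minimum: int = 3) -> None:
    """Post-state assertion that the benchmark loop ran at least
    `minimum` times.

    The counter bookkeeping itself costs gas and adds zk cycles, which
    is exactly the trade-off flagged in the list above.
    """
    iterations = storage.get(COUNTER_SLOT, 0)
    assert iterations >= minimum, (
        f"benchmark loop ran {iterations} time(s), expected >= {minimum}"
    )
```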
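
For the duplicate-test check, exact duplicates could be surfaced mechanically by grouping tests on a hash of their code; small variations with the same goal would still need manual review. A rough sketch (`find_duplicates` is an assumed name):

```python
# Hypothetical sketch: only catches byte-identical code; variations
# with the same goal still need eyeballing.
import hashlib
from collections import defaultdict


def find_duplicates(tests: dict[str, bytes]) -> list[list[str]]:
    """Group test names whose code hashes to the same digest."""
    groups: defaultdict[str, list[str]] = defaultdict(list)
    for name, code in tests.items():
        groups[hashlib.sha256(code).hexdigest()].append(name)
    return [names for names in groups.values() if len(names) > 1]
```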