Update benchmarking.md with paper content #21
I just noticed that by putting in the …
Do you think actually mentioning the whole send and receive thing in there is helpful? You would not include networking in a benchmark like that, right? (I would do networking benchmarks separately.) I think it is fine to mention the whole …
I like the send/receive because that’s what you want to measure, right? How much time until there’s a string available to send, and how much time between receiving that string and getting the (say, Boolean) result. If the test case is “create a signature, then verify it”, that’s not actually a useful scenario, because you already know the verification result. No need to check. I also object to the framing that lazy computation is an issue that needs to be worked around by “forcing” computation.
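Roughly what I have in mind, as a sketch (`scheme`, `converter`, and friends are placeholders, not the actual API):

```java
// Hypothetical harness: `scheme`, `Signature`, and `converter` are stand-ins.
// Span 1: how long until there is a string available to send?
long t0 = System.nanoTime();
Signature sig = scheme.sign(message, signingKey);
String wire = converter.serialize(sig.getRepresentation()); // must be fully computed to be serializable
long timeUntilSendable = System.nanoTime() - t0;

// ... `wire` crosses the network ...

// Span 2: how long between receiving that string and having the Boolean result?
long t1 = System.nanoTime();
Signature received = scheme.restoreSignature(converter.deserialize(wire));
boolean result = scheme.verify(message, received, verificationKey); // plain boolean: nothing left lazy
long timeUntilResult = System.nanoTime() - t1;
```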
My benchmark example is not meant to measure verification as part of a larger application, though, just purely the verification time (like you might see in the paper for the signature scheme). In that context, lazy computation actually is an issue we need to work around, and trying to justify the mitigations by talking about sending values around seems inappropriate. If I were to take more of a “benchmark the signature as if it were part of some larger application” angle, I would find that more fitting. Does that mean you would prefer the whole page to talk more from that perspective?
But the “mitigation” is also serializing and deserializing, no? I think all I want is for that not to sound unnatural, but to weave it into a sensible story.
That sounds like we are kind of sweeping the issues under the rug, no? What if the user just wants to measure purely the verification algorithm and not the whole sign->serialize->deserialize->verify process? I think it comes down to the fact that we need to inform them in a way that guarantees they won’t fall for the lazy evaluation issues. The clearest way to do that is to state them explicitly, instead of crafting a nice story in which the actual issues might end up hidden. I can of course write something like “these are the issues, but if you benchmark like this, you won’t run into them”, but then I still have to explain what happens if they don’t benchmark like that, i.e. “you still have to do the whole serialize->deserialize thing in any case”. That just makes the whole doc unnecessarily complicated, in my opinion.
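For example, someone measuring purely the verification algorithm would still have to prepare the inputs via a serialize->deserialize round trip, so that no lazily cached values from signing leak into the timed region (sketch with the same placeholder names as above):

```java
// Prepare inputs OUTSIDE the timed region; the serialization round trip
// guarantees the signature carries no lazily evaluated state from sign().
String wire = converter.serialize(scheme.sign(message, signingKey).getRepresentation());
Signature sig = scheme.restoreSignature(converter.deserialize(wire));

long t0 = System.nanoTime();
boolean ok = scheme.verify(message, sig, verificationKey); // the Boolean result forces all deferred work
long pureVerifyNanos = System.nanoTime() - t0;
```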
Which part is timed is quite relevant, depending on what the user wants to measure and how they want to present it. Measuring separately gives you more information in case you have, for example, multiple parties verifying each signature; then measuring signing + verifying as one thing is not very useful. We don’t know what the user wants to measure and why.
Ok, we had a discussion where we clarified what we want for the “Problem” section, and I have integrated that feedback. Now it just remains to add some information on the new exp alg setter methods for …
I’d suggest the following: don’t tell users to force evaluation via `computeSync()`. Instead, tell them to do a complete `x.getRepresentation(); send(); x = fromRepresentation()` cycle, simulating a real application.
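A sketch of that cycle, with `send()`/`receive()` and the converter as placeholders for the application’s transport (the group/element names are illustrative too):

```java
// Instead of forcing evaluation via x.computeSync(), push the value through
// a full serialization round trip, the way a real application would.
GroupElement x = group.getUniformlyRandomElement().pow(exponent); // some lazily evaluated result
Representation repr = x.getRepresentation(); // producing the representation forces the computation

send(converter.serialize(repr)); // placeholder: hand the bytes to the network
String wire = receive();         // placeholder: the other party receives them

x = group.restoreElement(converter.deserialize(wire)); // the "x = fromRepresentation()" step
```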