Online ceremony mode PoC #6
Conversation
```go
// notifyEnd notifies the contributor that their turn is over.
func (c *contributor) notifyEnd() {
	c.endNotificationChan <- struct{}{}
}
```
I'm not sure if we want to wait here on the message over the channel being delivered. If not, we can initialize the channel with buffer size 1 instead of the default 0.
Personally, I might even prefer to notify everyone that the notify stream has ended simply by closing the channel. Maybe the notify method could put messages on a channel which is then handled in WaitForTurn to pass messages to the client, and closing that channel would signal the end of the stream. WDYT?
This doesn't wait. Quite the opposite: notifyEnd() releases the thread that's blocked on that channel. The now-unblocked thread then advances, essentially returning from WaitForTurn and closing the channel. This happens for the contributor who has just received an ack/nack message on their contribution.
We don't want to close the channel for everybody else, because at the very same moment, when notifyEnd is called, everybody else in the queue gets a "you're now nth in the queue" message. Somebody gets 0 and they start contributing.
Makes sense now? BTW I'm not a fan of this mechanism, but sadly I couldn't come up with anything better.
I think I wrote it too quickly. :P What I meant was:
- For each client we create a `contributor` on the server side. The thread handling that client's gRPC call is then blocked in the `WaitForTurn` method. This `contributor` object holds the `stream` object used to send messages back to the client.
- My thinking was about not sharing that `stream` object with `contributor`. `contributor` could have just a notifications channel where all notifications are sent, and closing that channel would notify the original thread handling that client connection that it is done.
You could even then create something like a `ContributionCoordinator`, totally decoupled from the gRPC part, to handle the coordination.
Force-pushed from c262b46 to d833b8e.
Force-pushed from d833b8e to 82e12f9.
To make room for the upcoming online mode, create the offline package and move all code there. Make the main file minimal, with all command definitions also moved to the offline package. Move integration tests to the package, since they will have nothing to do with the upcoming online mode. Add an `offline mode` category label to commands, so they're grouped together when the help is printed. This commit just shuffles files around, without adding or removing any functionality. Signed-off-by: Wojciech Zmuda <[email protected]>
offline: missing command categories
Instead of having hardcoded names of test files to be used in tests, create temporary files. This will avoid polluting the project directory. Remove the R1CS serializing function because it was only used in that test and it's not needed anymore. Signed-off-by: Wojciech Zmuda <[email protected]>
MPC objects can get big for big circuits. Pass them to functions by pointer instead of by copy. Signed-off-by: Wojciech Zmuda <[email protected]>
The current architecture assumes that whoever needs randomness will just accept it as an argument. It's the caller's responsibility to make sure the random value is cached for subsequent uses. In tests it's enough to just define any byte array and reuse it. Signed-off-by: Wojciech Zmuda <[email protected]>
Provide implementation of client and server for online contributions. The server waits for contributions from the clients and orchestrates the ceremony. Clients connect to the server, contribute and disconnect. Signed-off-by: Wojciech Zmuda <[email protected]>
Force-pushed from de5ef5b to ed6b43d.
This is a proof of concept
I opened this PR to gather feedback. It is not a finished solution, so I don't want to merge it after this review. This PR exists to gather opinions about this part of the feature. Eventually I would like to open a proper PR with the full online ceremony feature.
It is pointless to merge this PR before the storage is implemented, but on the other hand I don't want to throw a single gigantic PR at you. Hence I decided to go with a review of this half-baked thing while I'm still working on it.
What is implemented
What is NOT implemented
Storage
The existing PoC, despite providing online mode, does not have online storage. This is the main reason it is a PoC instead of a feature. While the ceremony happens over the network, both client and server must be on the same machine, because they send local file paths instead of URLs. This will of course change when I implement the S3-based storage.
Another problem with storage is that this PoC is based on the assumption that presigned S3 URLs will be sent. At this point I think this is not true. Most probably the server will interact with S3 via the API and clients will stream contributions to the server. This is because exposing presigned URLs is potentially dangerous: given a GET URL, a malicious actor can download the file in a loop, generating costs.
Better tests
I'd like to make it unit-testable. For now only integration tests are provided, and they test only the happy path (except the queues, which are nicely tested IMO). We need tests for scenarios such as a client disconnecting without providing a contribution (to make sure others don't wait indefinitely).
Better architecture
I had an attempt at this and decided it's better to provide a readable-but-not-really-maintainable PoC than to come up with a dozen abstractions. I think I need to start working on the storage first.