
Conversation

@wzmuda (Contributor) commented on Jul 27, 2025

This is a proof of concept

I opened this PR to gather feedback. It is not a finished solution, so I don't want it merged after this review; it exists to collect opinions on this part of the feature. Over time I would like to open a proper PR with the full online ceremony feature.

It is pointless to merge this PR before the storage is implemented, but on the other hand I don't want to throw a single gigantic PR at you. Hence I decided to go with a review of this half-baked thing while I'm still working on it.

What is implemented

  • gRPC protocol definition
  • ceremony server
  • ceremony service with handlers of the gRPC protocol
  • ceremony client
  • integration test of the online ceremony (happy scenario)

What is NOT implemented

Storage

The existing PoC, despite providing an online mode, does not have online storage. This is the main reason it is a PoC rather than a finished feature. While the ceremony happens over the network, the client and server must be on the same machine, because they exchange local file paths instead of URLs. This will change, of course, once I implement the S3-based storage.

Another problem with storage is that this PoC is based on the assumption that presigned S3 URLs will be sent. At this point I think that assumption is wrong. Most probably the server will interact with S3 via its API, and clients will stream contributions to the server. Exposing presigned URLs is potentially dangerous: given a GET URL, a malicious actor can download the file in a loop, generating costs.
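
To illustrate the streaming direction, here is a minimal sketch in Go, under the assumption that only the server talks to S3; all identifiers (Chunk, UploadAck, contributionStream, uploadContribution) are hypothetical, not the PR's actual API:

package ceremony

import "io"

// Chunk is a hypothetical streamed message carrying part of a contribution.
type Chunk struct{ Data []byte }

// UploadAck is a hypothetical server acknowledgement of the upload.
type UploadAck struct{ OK bool }

// contributionStream stands in for a generated client-streaming gRPC stub.
type contributionStream interface {
    Send(*Chunk) error
    CloseAndRecv() (*UploadAck, error)
}

// uploadContribution streams a contribution to the server in fixed-size
// chunks; the server, not the client, talks to S3, so no presigned URL
// ever leaves the server.
func uploadContribution(stream contributionStream, r io.Reader) (*UploadAck, error) {
    buf := make([]byte, 64*1024) // chunk size is an arbitrary choice
    for {
        n, err := r.Read(buf)
        if n > 0 {
            if sendErr := stream.Send(&Chunk{Data: buf[:n]}); sendErr != nil {
                return nil, sendErr
            }
        }
        if err == io.EOF {
            return stream.CloseAndRecv()
        }
        if err != nil {
            return nil, err
        }
    }
}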

Better tests

I'd like to make it unit-testable. For now only integration tests are provided, and they cover only the happy path (except for the queues, which are nicely tested IMO). We need tests validating e.g. a client disconnecting without providing a contribution, to make sure others don't wait forever.
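
For illustration, a minimal sketch (hypothetical names, not the PR's code) of how the server side could handle such a disconnect, assuming each contributor's handler has access to the client's context and the queue:

package ceremony

import "context"

// Hypothetical types, for illustration only.
type contribution struct{}

type contributor struct{ contributionChan chan contribution }

type queue struct{}

func (q *queue) accept(c *contributor, x contribution) error { return nil }
func (q *queue) remove(c *contributor)                       {}
func (q *queue) promoteNext()                                {}

// awaitContribution never waits forever on a vanished client: context
// cancellation removes them from the queue and promotes the next one.
// A test could cancel the context and assert that the next client proceeds.
func awaitContribution(ctx context.Context, c *contributor, q *queue) error {
    select {
    case contrib := <-c.contributionChan:
        return q.accept(c, contrib)
    case <-ctx.Done():
        q.remove(c)     // free the slot held by the disconnected client
        q.promoteNext() // unblock whoever is next in line
        return ctx.Err()
    }
}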

Better architecture

I made an attempt at this and decided it's better to provide a readable-but-not-really-maintainable PoC than to come up with a dozen abstractions. I think I need to start working on the storage first.


// notifyEnd notifies the contributor that their turn is over.
func (c *contributor) notifyEnd() {
    c.endNotificationChan <- struct{}{}
}
Collaborator:

I'm not sure we want to block here until the message over the channel is received. If we don't, we can initialize the channel with buffer size 1 instead of the default 0.
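
Something like this (sketch):

// Buffered: notifyEnd's send completes even if the receiver isn't ready yet.
c.endNotificationChan = make(chan struct{}, 1)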

Collaborator:

Personally, I might even prefer to notify everyone that the notify stream has ended just by closing the channel. Maybe even the notify method could put messages into a channel which is then handled in WaitForTurn to pass messages to the client, and closing that channel would signal the end of the stream. WDYT?

Contributor Author (@wzmuda):

This doesn't wait. Quite the opposite: notifyEnd() releases the thread that's blocked on that channel. The now-unblocked thread then advances, essentially returning from WaitForTurn and closing the channel. This happens for the contributor who has just received an ack/nack message on their contribution.

We don't want to close the channel for everybody else, because at the very moment notifyEnd is called, everybody else in the queue gets a "you're now nth in the queue" message. Somebody gets 0 and they start contributing.

Does that make sense now? BTW I'm not a fan of this mechanism, but sadly I couldn't come up with anything better.
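
For illustration, roughly what happens today (simplified sketch, not the exact code):

// waitForTurn blocks the handler goroutine until notifyEnd() sends;
// the handler then returns, which ends the stream for this contributor only.
func (c *contributor) waitForTurn() {
    <-c.endNotificationChan // released by notifyEnd() after the ack/nack
}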

Collaborator:

I think I wrote it too quickly. :P What I meant was:

  • For each client we create a contributor on the server side. The thread handling that client's gRPC call then blocks in the WaitForTurn method. This contributor object holds the stream object used to send messages back to the client.
  • My thinking was to not share that stream object with contributor. contributor could have just a notifications channel where all notifications are sent, and closing that channel would notify the original thread handling that client connection that it is done.

Collaborator:

Something like that (code screenshot omitted): WaitForTurn forwarding messages from the contributor's channel. And contributor with a single notifications channel (code screenshot omitted).
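
A sketch of that idea in Go (hypothetical names, not the PR's code):

package ceremony

// queuePosition is the "you're now nth in the queue" message.
type queuePosition int

// stream stands in for the generated gRPC server stream.
type stream interface{ Send(queuePosition) error }

// contributor holds only a notifications channel, not the stream.
type contributor struct {
    notifications chan queuePosition
}

// notify enqueues a position update for the handler to forward.
func (c *contributor) notify(pos queuePosition) { c.notifications <- pos }

// notifyEnd ends the notify stream by closing the channel.
func (c *contributor) notifyEnd() { close(c.notifications) }

// waitForTurn runs in the goroutine handling the client's gRPC call and
// is the only place that touches the stream; it exits when the channel
// is closed.
func waitForTurn(c *contributor, s stream) error {
    for pos := range c.notifications {
        if err := s.Send(pos); err != nil {
            return err
        }
    }
    return nil // channel closed: end of the notify stream
}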

Collaborator:

You could then even create something like a ContributionCoordinator, totally decoupled from the gRPC part, to handle coordination.
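
A sketch of what that interface could look like (hypothetical):

// ContributionCoordinator knows nothing about gRPC; the service layer
// adapts streams and contexts to these calls.
type ContributionCoordinator interface {
    Join() *contributor    // enqueue a new contributor
    Finish(c *contributor) // contribution accepted, or client gone
}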

@wzmuda force-pushed the wz/pre-online-refactorings branch 2 times, most recently from c262b46 to d833b8e, on July 29, 2025
@wzmuda force-pushed the wz/pre-online-refactorings branch from d833b8e to 82e12f9 on July 30, 2025
Base automatically changed from wz/pre-online-refactorings to main on August 1, 2025
wzmuda added 5 commits August 7, 2025 23:03
To make room for the upcoming online mode, create the offline package.
Move all code there. Make the main file minimal, with all command
definitions also moved to the offline package. Move integration tests to
the package, since they will have nothing to do with the upcoming online
mode.

Add `offline mode` category label to commands, so they're grouped
together when the help is printed.

This commit just shuffles files around, without adding or removing any
functionality.

Signed-off-by: Wojciech Zmuda <[email protected]>

offline: missing command categories
Instead of hardcoding the names of test files, create temporary files.
This avoids polluting the project directory.

Remove the R1CS serializing function because it was only used in that
test and it's not needed anymore.

Signed-off-by: Wojciech Zmuda <[email protected]>
MPC objects can get big for large circuits. Pass them to functions by
pointer instead of by copy.

Signed-off-by: Wojciech Zmuda <[email protected]>
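
For illustration (hypothetical type, not the PR's code), the difference this commit is after:

// phase stands in for a large MPC object.
type phase struct{ params [1 << 20]byte } // ~1 MiB of data

// verify receives an 8-byte pointer instead of copying the whole object.
func verify(p *phase) error { return nil }
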
The current architecture assumes that whoever needs randomness
will just accept it as an argument. It's the caller's responsibility to
make sure the random value is cached for subsequent uses.

In tests it's enough to just define any byte array and reuse it.

Signed-off-by: Wojciech Zmuda <[email protected]>
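
A sketch of the pattern (hypothetical names, not the PR's code):

// phase stands in for an MPC ceremony object.
type phase struct{}

// contribute takes randomness as an explicit argument instead of
// generating it internally; the caller caches it for subsequent uses.
func contribute(prev *phase, randomness []byte) (*phase, error) {
    // ...mix randomness into prev, producing the next phase...
    return &phase{}, nil
}

// In tests, a fixed byte slice defined once can be reused everywhere.
var testRandomness = []byte{0x01, 0x02, 0x03, 0x04}
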
Provide an implementation of the client and server for online contributions.
The server waits for contributions from the clients and orchestrates
the ceremony. Clients connect to the server, contribute and disconnect.

Signed-off-by: Wojciech Zmuda <[email protected]>
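
A sketch of that flow (hypothetical stub interface, not the generated gRPC client):

// ceremonyClient stands in for the generated gRPC client.
type ceremonyClient interface {
    WaitForTurn() error           // block until the server grants a turn
    Contribute(data []byte) error // upload the contribution
    Close() error                 // disconnect
}

// runClient waits for its turn, contributes, and disconnects.
func runClient(c ceremonyClient, contribution []byte) error {
    defer c.Close()
    if err := c.WaitForTurn(); err != nil {
        return err
    }
    return c.Contribute(contribution)
}
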
@wzmuda force-pushed the wz/online-ceremony branch from de5ef5b to ed6b43d on August 10, 2025
@wzmuda closed this on Aug 10, 2025