feat(code): limit the number of values in a sync response based on upper limit #1171
Conversation
```rust
const MAX_BYTES_PER_ADDRESS: usize = 32;
const MAX_BYTES_PER_SIGNATURE: usize = 100;
```
I would declare these constants close to the declarations of the address and signature types. Also, each app will have its own definition of address and signature.
Not sure I fully follow your comment: Are you suggesting contacting the app to receive the sizes of addresses and signatures? If not, where exactly would you declare those constants?
We could add them as associated constants on the `Address` and `SigningScheme` traits.
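For illustration, a minimal sketch of what such associated constants could look like; the `Address` and `SigningScheme` traits here are simplified stand-ins, not the actual malachite definitions:

```rust
// Sketch only: simplified stand-ins for the real traits.
pub trait Address {
    /// Upper bound on the encoded size of an address, in bytes.
    const MAX_SIZE: usize;
}

pub trait SigningScheme {
    /// Upper bound on the encoded size of a signature, in bytes.
    const MAX_SIGNATURE_SIZE: usize;
}

/// Example: an Ed25519-like scheme with fixed-size signatures.
struct Ed25519;

impl SigningScheme for Ed25519 {
    const MAX_SIGNATURE_SIZE: usize = 64;
}
```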
But that's pretty brittle as well, and could break things if the upper bound is not tight enough, e.g. if some application uses a signature scheme with variable-size signatures and defines `Signature::MAX_SIZE` to a large value just in case.
The best way would probably be to encode the commit certificate (or even the full sync response) using the (Network)Codec and use the actual computed size. That is not great for performance, so we should probably either measure the impact upfront, or go with it and only optimize if we see too big a perf impact. What do you think?
> The best way would probably be to encode the commit certificate (or even the full sync response) using the (Network)Codec and use the actual computed size.

I agree this is the right way to go about it. I'll rewrite the PR to take this into account.
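A minimal sketch of that approach, assuming a hypothetical `Codec` trait with an `encode` method that returns the serialized bytes; the trait and helper names are illustrative, not the actual malachite API:

```rust
/// Sketch only: a simplified codec trait standing in for the real one.
pub trait Codec<T> {
    type Error;
    fn encode(&self, value: &T) -> Result<Vec<u8>, Self::Error>;
}

/// Collect encoded values until their total size would exceed `max_bytes`.
fn take_within_limit<C, V>(codec: &C, values: &[V], max_bytes: usize) -> Vec<Vec<u8>>
where
    C: Codec<V>,
{
    let mut out = Vec::new();
    let mut total = 0;

    for value in values {
        // Encode the value to learn its actual size on the wire.
        let Ok(bytes) = codec.encode(value) else { break };

        if total + bytes.len() > max_bytes {
            break;
        }

        total += bytes.len();
        out.push(bytes);
    }

    out
}
```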
```rust
if stored_peer_id != &peer_id {
    warn!(
        %request_id, peer.actual = %peer_id, peer.expected = %stored_peer_id,
        "Received response from different peer than expected"
    );
}
```
We should probably use this PR to abort processing the response in the case where we get a response from a different peer than the one we expected.
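A minimal sketch of that abort, extending the check shown above; the early `return Ok(())` and the handler it sits in are assumptions, not the actual implementation:

```rust
// Sketch only: abort instead of continuing to process the response.
if stored_peer_id != &peer_id {
    warn!(
        %request_id, peer.actual = %peer_id, peer.expected = %stored_peer_id,
        "Received response from different peer than expected"
    );

    // Drop the response here rather than processing it further.
    return Ok(());
}
```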
It would be good to add an integration test that exercises the new code path.
Thank you very much for the updates and your comments @romac.
I'll do that, and will also look into the network codec to get the actual size, as you suggest. Just a note that I'll probably only be able to look into this tomorrow afternoon and early next week (I doubt I can finish it by tomorrow).
No problem, thank you!

Superseded by #1184
Closes: #1164
We introduce a check so that a peer only sends values of at most `cfg.sync.max_response_size` in total. The retrieval of the right number of values is performed when handling the `Effect::GetDecidedValues` effect in the engine, and not in `sync`, in order to avoid sending a huge number of values to `sync` and then having `sync` do the retrieval.

We manually tested these changes by running the tests from the `value_sync.rs` file and looking at the logs. During testing, we identified an issue where a syncing peer that got a partial response (i.e., only some of the values) back would immediately re-request the exact same range, because the partial values are not yet processed by the consensus engine, flooding the network. We fixed this issue in this PR.
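For illustration, a minimal sketch of one way the limit could be derived from per-value upper bounds, reusing the constants from the diff above; `num_validators` and the helper names are hypothetical, not the actual implementation:

```rust
const MAX_BYTES_PER_ADDRESS: usize = 32;
const MAX_BYTES_PER_SIGNATURE: usize = 100;

/// Rough upper bound on the encoded size of one decided value, assuming one
/// commit signature (address + signature) per validator in its certificate.
fn max_bytes_per_value(num_validators: usize) -> usize {
    num_validators * (MAX_BYTES_PER_ADDRESS + MAX_BYTES_PER_SIGNATURE)
}

/// How many values fit into a response of at most `max_response_size` bytes.
fn max_values_in_response(max_response_size: usize, num_validators: usize) -> usize {
    max_response_size / max_bytes_per_value(num_validators).max(1)
}
```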
PR author checklist

For all contributors
- Update `RELEASE_NOTES.md` if the change warrants it
- Update `BREAKING_CHANGES.md` if the change warrants it

For external contributors