Fix race condition on downlink attempt event registration #7681
Summary
References:
Changes
Logging would still be useful to have in place, but I realised it's not really doing what it is supposed to. Sometimes a SIGBUS or even a SIGSEGV is triggered, which cannot be caught by Go's recover mechanism.
To get more insight into the problem, logging every event name published by GS would help (only the events that marshal data). That might increase the log volume significantly, because the events that marshal data include uplinks and downlinks too.
There is already a clone created:
However, this clone is published as an event using registerScheduleDownlinkAttempt and marshalled later in the events subscriber, while at the same time it is modified in the conn.ScheduleDown method:
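To make the pattern concrete, below is a minimal, hypothetical sketch of the race (the types and event name are illustrative, not the actual Gateway Server code): the event keeps a reference to the message, a subscriber marshals it in its own goroutine, and the caller keeps mutating the same message while scheduling the downlink.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type downlinkMessage struct {
	RawPayload []byte `json:"raw_payload"`
	Priority   string `json:"priority"`
}

type event struct {
	name string
	data any // stored as a reference, not as marshalled bytes
}

func main() {
	msg := &downlinkMessage{RawPayload: []byte{0x01, 0x02}, Priority: "NORMAL"}

	events := make(chan event, 1)
	done := make(chan struct{})

	// Subscriber: marshals the event data later, in its own goroutine.
	go func() {
		defer close(done)
		for evt := range events {
			b, _ := json.Marshal(evt.data) // concurrent read of msg
			fmt.Println(evt.name, string(b))
		}
	}()

	// Publisher: registers the attempt, then keeps modifying the same
	// message while scheduling the downlink -> data race on msg.
	events <- event{name: "downlink.attempt", data: msg}
	msg.RawPayload = append(msg.RawPayload, 0xFF) // concurrent write
	msg.Priority = "HIGH"

	close(events)
	<-done
}
```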
Testing
I don't have a way to trigger this race condition.
Results
N/A.
Regressions
None.
Notes for Reviewers
This is caused by a race condition triggered by publishing events. The issue was initially discussed here:
There are more issues related to this one (some closed, some still ongoing):
I believe the root cause of the problem is that the event system does not marshal the data immediately when the event is created or published. Instead, the data is stored as a reference in the event struct (event.data) and is later marshalled by the events subscriber. The subscriber runs in a different goroutine and is not synchronised in any way with the publisher, which might still be modifying the referenced data.
I don't know whether marshalling in the subscribers was a conscious decision. The only reason I can think of is moving the marshalling workload of the events out of the hot path of message processing.
The proper fix I believe would be to move the event marshalling into the publisher and send the already marshalled data to the subscriber. This change might take some work (I haven’t yet gone through the code to see what this implies) and will affect the whole codebase because the event system is shared by other components too.
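As a rough sketch of that direction (the names and channel-based plumbing here are assumptions, not the actual events API), the publisher would serialise the payload up front so subscribers only ever see an immutable byte slice:

```go
package events

import "encoding/json"

// marshalledEvent carries already-serialised data, so subscribers never
// touch the live object that produced it.
type marshalledEvent struct {
	Name string
	Data []byte
}

// publish marshals on the publisher's goroutine, before anything else can
// mutate the payload, and hands subscribers an immutable byte slice.
func publish(events chan<- marshalledEvent, name string, payload any) error {
	b, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	events <- marshalledEvent{Name: name, Data: b}
	return nil
}
```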
The quick fix is to just clone all the events that marshal data, but that might increase resource usage.
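A minimal sketch of that quick fix, assuming the event data is a protobuf message (proto.Clone is the standard protobuf API; the event type and helper name are illustrative):

```go
package events

import "google.golang.org/protobuf/proto"

type event struct {
	name string
	data any
}

// publishCloned deep-copies the message before publishing, so the subscriber
// marshals a snapshot that the scheduling code can no longer modify.
func publishCloned(events chan<- event, name string, msg proto.Message) {
	events <- event{name: name, data: proto.Clone(msg)}
}
```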
Checklist
- Backwards compatibility is kept according to the commitments in README.md for the chosen target branch.
- Significant changes are mentioned in CHANGELOG.md.
- Commit messages follow CONTRIBUTING.md, and there are no fixup commits left.