base: 8.6.x
mem fix - reset more updated protobuf objects #7089
Conversation
Force-pushed from 3fd65c3 to 48f60e7.
hjoliver left a comment:
LGTM
Note: the coverage thing is just because I wanted to be more explicit about which types were being reset (so the other side of an
@dwsutherland, do we think this issue would also affect the cylc-uiserver (which presumably has the same long-lived message problem)? If so, does this fix also cover the UIS side of things?
This fix is essentially serialising/deserialising the entire data store periodically. The main concern here is that this ends up being a high CPU hit. From a quick profiling run with the following config, this change increased the CPU hit of the
I just went with fixing a massive memory leak as the priority, but good that you checked that.
Yes, as mentioned:
No, I moved away from serialising/deserialising (I initially did it that way out of paranoia, but changed to something less intensive in that same #6727 ticket). Also, this isn't periodic: it applies to these selected types (i.e. workflow, task/family definitions, and task/family proxies; jobs didn't appear to be too bad) and on every delta.
Good to see it's not a massive hit; I would trade 0.01s to avoid a 500% increase in memory usage (for some workflows).
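
The reset approach described above might look roughly like the following minimal sketch. It assumes cylc-flow's generated `PbTaskProxy` message type from `data_messages_pb2`; the `reset_node` helper and the surrounding store logic are hypothetical, not the actual implementation. Only `CopyFrom`/`MergeFrom` are real protobuf API:

```python
# Minimal sketch of the reset pattern (not the actual cylc-flow code).
from cylc.flow.data_messages_pb2 import PbTaskProxy

def reset_node(node: PbTaskProxy) -> PbTaskProxy:
    """Rebuild an updated node in a fresh message object.

    A long-lived message that absorbs many MergeFrom() deltas can retain
    more memory than its current contents need; copying the contents
    into a new object lets the bloated original be garbage collected.
    """
    fresh = PbTaskProxy()
    fresh.CopyFrom(node)  # deep-copies all set fields into the new message
    return fresh

# On every delta, for the selected types only (workflow, task/family
# definitions and proxies), merge then swap in the rebuilt copy:
#
#     store[node_id].MergeFrom(delta)
#     store[node_id] = reset_node(store[node_id])
```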
closes #7078
From the example in the associated issue:

[Memory plot: before (queue size 5, sleep 5, n=1). Drops indicate a new cycle point; this memory problem presents with a huge number of tasks between prunings.]

[Memory plot: after (queue size 10, sleep 5, n=1).]
This fix essentially extends #6727 to some short-lived objects that receive a barrage of deltas over their lifetime.
Workflows that balloon out to GBs for this reason should now be back in the ~500 MB realm.
This should also reduce the corresponding UIS memory footprint (as the UIS replicates the delta application).
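
As a rough, self-contained illustration of the pattern (making no assumptions about cylc internals), the standard protobuf `Struct` well-known type can stand in for a data-store node that receives a barrage of deltas; the field names and loop counts here are arbitrary:

```python
# Stand-in demo of merge churn followed by a rebuild; uses only the
# standard protobuf well-known Struct type.
from google.protobuf.struct_pb2 import Struct

node = Struct()
for i in range(200_000):
    delta = Struct()
    delta.update({f"key{i % 100}": "x" * 64})  # churn a map field
    node.MergeFrom(delta)                      # in-place delta application

# the fix's idea: rebuild the updated message in a fresh object so any
# memory retained by the old one can be released
fresh = Struct()
fresh.CopyFrom(node)
node = fresh
```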
Check List
- I have read `CONTRIBUTING.md` and added my name as a Code Contributor.
- Applied any dependency changes to both `setup.cfg` (and `conda-environment.yml` if present).
- If this is a bug fix, PR should be raised against the relevant `?.?.x` branch.