What is the underlying problem you're trying to solve?
The decision log plugin uploads multiple logged events as a gzipped JSON array, limited by the configured upload_size_limit_bytes (default 32KB). Currently the upload can be triggered in two ways: periodically or manually. The problem is that there is no option to upload as soon as a complete gzipped JSON array is ready. This would enable a steadier stream of events being read from the buffer without having to tune the periodic timer.
The periodic trigger timing is controlled by the options min_delay_seconds and max_delay_seconds, which give the user a controllable sliding window of time during which events are uploaded. This helps keep the backend receiving the events from being overloaded. It is possible to set min_delay_seconds to 0 to approximate constant uploads, but there would still be a delay.
This idea was suggested by @mjungsbluth, thank you!
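For reference, the relevant reporting options look roughly like this in the plugin configuration (a sketch with illustrative values, not a complete config):

```yaml
decision_logs:
  reporting:
    min_delay_seconds: 5            # lower bound of the periodic upload window
    max_delay_seconds: 10           # upper bound of the periodic upload window
    upload_size_limit_bytes: 32768  # gzip-compressed payload limit per upload
```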
Describe the ideal solution
Add a new trigger option that changes the upload behavior so that uploads occur as soon as upload_size_limit_bytes is reached OR when the periodic trigger fires. The additional periodic trigger would only be there to prevent events from getting stuck waiting to hit the upload limit.
The trigger option could be named decision_logs.reporting.trigger=upload_size_limit (see the config sketch after the list below).
Other name ideas:
immediate
stream
async_push
???
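A sketch of how the new option might look in the configuration, assuming the upload_size_limit name is chosen (the trigger value is hypothetical; max_delay_seconds acts as the periodic fallback):

```yaml
decision_logs:
  reporting:
    trigger: upload_size_limit      # hypothetical new trigger value
    upload_size_limit_bytes: 32768  # upload as soon as this limit is reached
    max_delay_seconds: 600          # fallback so events never get stuck
```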
Another optimization would be to have the upload occur in a separate routine so that the next payload chunk can immediately begin populating.
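A minimal Go sketch of that combination: hand off a chunk as soon as it hits the size limit or the periodic timer fires, and perform the upload in a separate goroutine so the next chunk can start filling right away. All names (eventBuffer, Push, handOffLocked, etc.) are illustrative, not OPA's actual API:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
	"time"
)

type eventBuffer struct {
	mu        sync.Mutex
	chunk     bytes.Buffer
	sizeLimit int
	uploads   chan []byte // completed chunks handed off to the uploader goroutine
}

func newEventBuffer(sizeLimit int, maxDelay time.Duration) *eventBuffer {
	b := &eventBuffer{sizeLimit: sizeLimit, uploads: make(chan []byte, 4)}
	go b.uploader()              // uploads run off the hot path
	go b.periodicFlush(maxDelay) // fallback so events never get stuck
	return b
}

// Push appends an event; once the chunk reaches the size limit it is handed
// to the uploader immediately and a fresh chunk begins populating.
func (b *eventBuffer) Push(event []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.chunk.Write(event)
	if b.chunk.Len() >= b.sizeLimit {
		b.handOffLocked()
	}
}

func (b *eventBuffer) periodicFlush(d time.Duration) {
	for range time.Tick(d) {
		b.mu.Lock()
		b.handOffLocked()
		b.mu.Unlock()
	}
}

// handOffLocked moves the current chunk onto the upload channel. The caller
// must hold b.mu; a full channel applies backpressure to producers.
func (b *eventBuffer) handOffLocked() {
	if b.chunk.Len() == 0 {
		return
	}
	out := make([]byte, b.chunk.Len())
	copy(out, b.chunk.Bytes())
	b.chunk.Reset()
	b.uploads <- out
}

func (b *eventBuffer) uploader() {
	for chunk := range b.uploads {
		// Stand-in for gzipping and POSTing the chunk to the backend.
		fmt.Printf("uploading %d bytes\n", len(chunk))
	}
}

func main() {
	b := newEventBuffer(64, time.Second)
	for i := 0; i < 10; i++ {
		b.Push([]byte(`{"decision":"allow"}`))
	}
	time.Sleep(2 * time.Second) // give the periodic fallback a chance to flush
}
```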
Describe a "Good Enough" solution
Ideally this new trigger option would be supported by both the current buffer implementation and the one being introduced in this PR. However, supporting it only for the new "event" buffer implementation would be a good enough solution until there is demand to add it to the current implementation as well.
Additional Context
#7446