Use Systemd-provided cgroup IO limits #125
Conversation
Signed-off-by: Brendan Hide <[email protected]>
Limits tested and working for btrfs-scrub also
I have tested and confirmed that this also works for btrfs-defrag. I'm not sure whether it applies to btrfs-trim, so the insertion of the IO limits configs should perhaps skip the trim service specifically. @kdave I'd appreciate any comments on this PR, especially regarding testing and what else should be done to have this ready to merge.
I think the difficult part here is how to do the configuration. Systemd needs the raw device paths; it may be cumbersome for the user to extract them, and the devices could change over time (via device add/remove). Ideally there's a helper that lists the devices of a given mount point before running the command.

Next, where to store the configuration for each filesystem? Generated unit files with IO limits make more sense, as sysconfig does not seem suitable for that, other than a global on/off switch for whether to apply the limits if configured.

For the disk classes I think this needs some user interaction. The class can be guessed from sysfs, but it should still be confirmed as correct, since there's more than just HDD/SSD/NVMe. There could be a helper tool to gather the information and create the timer config overrides.

Regarding cron, I'm not sure if it is still in use; in the beginning it was meant to be temporary, as systemd was not everywhere.
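A device-listing helper along these lines could be sketched in shell. This is a hypothetical illustration, not part of the PR: it assumes util-linux (`findmnt`) and btrfs-progs are available, and the mount-point argument is illustrative.

```shell
#!/bin/sh
# Hypothetical helper: print the block devices backing a btrfs mount point,
# so generated unit files can reference the current raw device paths.
mnt="${1:?usage: $0 MOUNTPOINT}"

# Resolve the source device of the mount, then list every device in the
# filesystem -- a btrfs filesystem may span several disks.
src=$(findmnt -n -o SOURCE --target "$mnt") || exit 1
btrfs filesystem show "$src" | awk '/ path /{print $NF}'
```

Running such a helper immediately before each timer invocation would keep the limits pointed at the filesystem's current member devices.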
I've been putting some thought into this for a while, but I don't really have a good answer that I'm sure of yet. Below are my current "good enough for now" thoughts:

You are right that the ideal config is not catered for due to complexity. My initial thought to make that complexity intuitive would be to put the config into a […]

Perhaps the refresh timer could also specify boot time, as that is also a well-known time when disk paths are changed/refreshed.

The above could work well after the systemd rules are overridden; but then, as you suggested, any disks dynamically added or removed will have the wrong limits applied until the refresh is executed. I'd consider this an acceptable caveat as long as it is documented.

Please let me know if you like this path or if you have suggestions. :-)
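For the generated per-filesystem overrides discussed above, a drop-in could use the IO directives documented in systemd.resource-control(5). A minimal sketch follows; the drop-in path, device paths, and limit values are all examples, not something this PR prescribes:

```ini
# Hypothetical drop-in, e.g.
# /etc/systemd/system/btrfs-scrub@-.service.d/50-io-limits.conf
# Device paths and limit values are illustrative only.
[Service]
IOAccounting=yes
IOReadIOPSMax=/dev/dm-0 60
IOWriteIOPSMax=/dev/dm-0 60
IOReadBandwidthMax=/dev/dm-0 10M
IOWriteBandwidthMax=/dev/dm-0 10M
```

A refresh helper would regenerate one such drop-in per filesystem whenever the member devices change.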
Maintenance tasks may starve other, more urgent workloads of system IO resources. This PR enables cgroup IO resource limits when using systemd timers.
Notes:
(systemd 255.10-3.fc40)
To view these cgroup limits in action outside of a regular service, you can wrap a command with `systemd-run`. For example, the following is a balance with `-musage=30` on a two-disk filesystem on /dev/dm-0 and /dev/dm-1, with IOPS limits of 60 and bandwidth of 10 MB/s.

Further to the above example, running `lsblk` shows the corresponding device IDs `252:0` and `252:128` for `dm-0` and `dm-1` respectively.
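The command itself appears to have been lost in extraction. An invocation matching the description might look like the following sketch; the mount point `/mnt` is an assumption, and the property names come from systemd.resource-control(5):

```shell
# Hypothetical reconstruction: run a balance in a transient scope with
# per-device IOPS and bandwidth caps matching the description above.
systemd-run --scope \
    -p "IOReadIOPSMax=/dev/dm-0 60"  -p "IOWriteIOPSMax=/dev/dm-0 60" \
    -p "IOReadIOPSMax=/dev/dm-1 60"  -p "IOWriteIOPSMax=/dev/dm-1 60" \
    -p "IOReadBandwidthMax=/dev/dm-0 10M" -p "IOWriteBandwidthMax=/dev/dm-0 10M" \
    -p "IOReadBandwidthMax=/dev/dm-1 10M" -p "IOWriteBandwidthMax=/dev/dm-1 10M" \
    btrfs balance start -musage=30 /mnt
```

While such a scope is running, the applied limits should be visible in its `io.max` file under /sys/fs/cgroup, keyed by the `252:0` and `252:128` device IDs mentioned above.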