
Puma config that draws on the cpu assignment to determine workers #1595


Draft
wants to merge 1 commit into base: master

Conversation

louispt1
Contributor

This PR reworks the Puma config in ETEngine so that the container's CPU assignment determines the number of workers (if WEB_CONCURRENCY is not set explicitly).
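For illustration, a minimal sketch of the workers side of such a puma.rb, assuming Etc.nprocessors as the source of the container's CPU count (the actual PR may derive cpu_cores from the container's cgroup CPU assignment instead):

require 'etc'

# Hypothetical helper: the number of CPUs assigned to the container.
# Etc.nprocessors is used as a stand-in here.
cpu_cores = Etc.nprocessors

# Use the CPU count unless WEB_CONCURRENCY is set explicitly.
workers Integer(ENV.fetch('WEB_CONCURRENCY', cpu_cores.to_s))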

This is more of an 'example' PR for now. Before merging it, we should also either set the WEB_CONCURRENCY variable correctly or remove it, as per this issue.

If we would like to move forward with adjusting the Puma configurations in our other Rails applications, then for the thread-safe ones (at least ETModel and MyETM) we can also let the threads setting be determined by the cpu_cores assignment, matching the number of threads and workers to the number of CPUs available, or set the RAILS_MAX_THREADS variable to limit threads explicitly, as in the snippet below.

# cpu_cores is assumed to come from the container's CPU assignment, as above.
thread_count = Integer(ENV.fetch('RAILS_MAX_THREADS', cpu_cores.to_s))
threads thread_count, thread_count

This PR also introduces preload_app! to the Puma config, which helps save memory across workers via copy-on-write. This seems to make sense in any case where more than one worker might be used.
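As a rough sketch of how preload_app! usually fits into such a config (the on_worker_boot hook is standard Puma DSL; recent Rails versions re-establish database connections automatically, so the hook may be unnecessary for ETEngine):

# Load the application before forking so memory pages can be shared
# between workers via copy-on-write.
preload_app!

# Each forked worker needs its own database connection.
on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end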

@louispt1
Contributor Author

This PR should not be merged. I have set up the correct Puma configurations for the Rails applications in our environment. PRs for each change can be found here:

There are two categories of configuration: one for thread-safe applications and one for non-thread-safe applications.
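Roughly, and assuming the same cpu_cores helper as sketched above (the exact values for each application live in the linked PRs), the two categories look like this:

# Thread-safe applications (e.g. ETModel, MyETM): workers and threads
# both scale with the CPU assignment unless overridden by env vars.
thread_count = Integer(ENV.fetch('RAILS_MAX_THREADS', cpu_cores.to_s))
workers Integer(ENV.fetch('WEB_CONCURRENCY', cpu_cores.to_s))
threads thread_count, thread_count

# Non-thread-safe applications: scale with workers only and pin each
# worker to a single thread.
workers Integer(ENV.fetch('WEB_CONCURRENCY', cpu_cores.to_s))
threads 1, 1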
