-
**Description**
When the Airflow API server runs on Kubernetes, there are situations where memory usage exceeds 75%, 80%, or 85% of the configured memory limit, triggering alerts regardless of how high that limit is set.

**Use case/motivation**
In on-premises environments, resource alerts often indicate serious issues.

**Related issues**
No response

**Are you willing to submit a PR?**
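Alerts of the kind described above typically compare the container's working-set memory against its limit. A sketch of such a rule, assuming Prometheus scraping cAdvisor metrics; the container name, threshold, and labels are illustrative:

```yaml
groups:
  - name: airflow-api-server
    rules:
      - alert: AirflowApiServerMemoryHigh
        # Working-set memory as a fraction of the container limit.
        expr: |
          container_memory_working_set_bytes{container="api-server"}
            / container_spec_memory_limit_bytes{container="api-server"} > 0.75
        for: 10m
        labels:
          severity: warning
```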
-
Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise a PR to address this issue, please do so; there is no need to wait for approval.
-
This looks like a deployment issue rather than a problem in Airflow itself; I think you need to look at your Kubernetes configuration and adjust it there. Guessing, of course, but this is most likely something the deployment manager (you) should be able to configure. Converting this into a discussion; maybe you can provide more information, or someone can offer better guidance.
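If the alerts fire relative to the container's memory limit, the place to adjust is the pod spec (or the corresponding Helm chart values). A minimal sketch, assuming a plain container spec; the values are illustrative, not recommendations:

```yaml
# Fragment of a pod/deployment spec for the api-server container.
resources:
  requests:
    memory: "1Gi"   # what the scheduler reserves for the pod
  limits:
    memory: "2Gi"   # hard cap; alerts are usually a percentage of this
```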
This issue was about configuring alarms. BTW, we are not using Gunicorn any more; just Starlette and Uvicorn, I believe.
Forcing garbage collection in Python is not a good idea. But if you think there is a memory leak somewhere and the api-server takes more memory than you expect, you can try to track your memory allocations with memray, like one of the users did for the scheduler in #56641, and report it.
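memray is the right tool for a full allocation profile. As a quick, stdlib-only first pass, you can also snapshot allocations with `tracemalloc`; a sketch, where `allocate` is a stand-in for whatever code path you suspect of leaking:

```python
import tracemalloc

def allocate():
    # Stand-in for the code path suspected of leaking memory.
    return [bytes(1024) for _ in range(1000)]

tracemalloc.start()
before = tracemalloc.take_snapshot()
data = allocate()
after = tracemalloc.take_snapshot()

# Show the call sites responsible for the largest allocation growth.
for stat in after.compare_to(before, "lineno")[:5]:
    print(stat)
```

This only tells you where Python-level allocations happen; memray additionally sees native allocations, which is why it was the tool used in #56641.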
If you want to add "--limit-max-requests", that might also be a good idea; PRs are most welcome. Opening an issue describing this new feature is also a good idea, but then you will have to wait for someone to volunteer to implement it, so contributing a PR on your own if you need it is the fastest …
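For context, `--limit-max-requests` is an existing Uvicorn option that shuts a worker down after it has served a fixed number of requests, so a supervisor can replace it and reclaim any memory that accumulated. The mechanism can be sketched in plain Python; the class and names below are illustrative, not Uvicorn's internals:

```python
class Worker:
    """Toy model of a worker that retires itself after a request budget."""

    def __init__(self, limit_max_requests: int):
        self.limit = limit_max_requests
        self.served = 0
        self.should_exit = False

    def handle_request(self) -> None:
        self.served += 1
        # Once the budget is spent, signal the supervisor to recycle this
        # worker, releasing memory that grew over its lifetime.
        if self.served >= self.limit:
            self.should_exit = True


worker = Worker(limit_max_requests=3)
for _ in range(3):
    worker.handle_request()
print(worker.should_exit)  # → True
```

The point of the feature request would be exposing this knob through the Airflow api-server command so deployments can bound slow memory growth without restarting pods by hand.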