My application is a Flask application running with gunicorn, and the metrics cause high memory usage over time. I have tried the sync, gevent, and gthread worker types, and all of them show the same behavior. I am using gunicorn's max-requests and jitter settings, which helped somewhat, but memory still trends upward overall.

What I found is that a large number of .db and .hist files accumulate and are never deleted. When I manually delete some of the old files, memory drops significantly, but then I get pod errors that end up in Grafana; and if some of the deleted files are still in use by current workers, the metrics endpoint returns an error, which also shows up on the Grafana dashboard.

I would be glad if someone could help with the best approach to stop the high memory use.
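For what it's worth, a common way to keep old files from previous runs from piling up is to clear the multiprocess directory once, before any workers start, rather than deleting files while workers still hold them open. A minimal sketch of a gunicorn `on_starting` hook, assuming the prometheus_client multiprocess directory is configured via `PROMETHEUS_MULTIPROC_DIR` (the fallback path here is purely illustrative):

```python
# gunicorn_conf.py -- minimal sketch: clear stale metric files once, at master
# startup, before any workers exist. Deleting files while workers are still
# writing to them is what breaks the /metrics endpoint, so the cleanup has to
# happen here rather than at runtime. The directory path is an assumption.
import os
import shutil


def on_starting(server):
    # Runs once in the gunicorn master, before any workers are forked.
    multiproc_dir = os.environ.get("PROMETHEUS_MULTIPROC_DIR", "/tmp/prometheus_multiproc")
    shutil.rmtree(multiproc_dir, ignore_errors=True)
    os.makedirs(multiproc_dir, exist_ok=True)
```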
If yes and you still have an issue, you might want to check what options are available in the underlying Prometheus client library we use here. I found this issue with some relevant discussion: prometheus/client_python#568
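For context, in the client library's multiprocess mode the scrape endpoint typically builds a fresh registry per request and aggregates whatever per-worker files are currently on disk. Roughly something like the sketch below, using plain Flask and prometheus_client (not necessarily how this exporter wires it up internally):

```python
# Sketch of a /metrics endpoint using prometheus_client multiprocess mode.
# A fresh CollectorRegistry is built per scrape; MultiProcessCollector reads
# the .db files currently present in PROMETHEUS_MULTIPROC_DIR and merges them.
from flask import Flask, Response
from prometheus_client import (
    CONTENT_TYPE_LATEST,
    CollectorRegistry,
    generate_latest,
    multiprocess,
)

app = Flask(__name__)


@app.route("/metrics")
def metrics():
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return Response(generate_latest(registry), mimetype=CONTENT_TYPE_LATEST)
```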
Thanks for the quick reply @rycus86. Yep, I have a child_exit function defined in gunicorn_conf.py, but even with it there is a memory increase, even for a simple Flask app with a single GET endpoint. I also commented in the https://github.com/prometheus/client_python repo in case someone can help with it there. Nevertheless, I will look for a solution over there. Thanks again
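For anyone landing here, the child_exit hook mentioned above is the standard gunicorn hook for prometheus_client multiprocess cleanup; a minimal sketch, assuming prometheus_client is the underlying library:

```python
# gunicorn_conf.py -- sketch of the child_exit hook. It runs in the master
# after a worker exits and lets prometheus_client mark that worker's files
# as dead, so metrics from dead workers can be cleaned up / merged.
from prometheus_client import multiprocess


def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
```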