Could it be this stuff? (I think these are all the same issue under the hood; I'm providing several links because each has different context, and one might be more useful to you than the others.)
As far as I can tell, uwsgi does an allocation that scales linearly with the number of file descriptors it thinks could ever exist on the system, and the high CPU usage comes from other parts of the code iterating over those gigantic arrays. So capping what uwsgi thinks the maximum number of fds is, either by setting the environment variable UWSGI_MAX_FD or by setting the nofile ulimit on the Docker container, might fix it. I have Linkding running on PikaPods, where the max number of fds is 524288 as far as uwsgi is concerned, and the CPU usage is very reasonable; people in the comments of those issues report setting it as low as 1024 with no problems.
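For example, if you run Linkding via docker-compose, something like this should apply the cap (a minimal sketch; the service name, image tag, and the 1024 value are illustrative, based on what people reported in those issues):

    services:
      linkding:
        image: sissbruecker/linkding:latest
        environment:
          # Cap the number of fds uwsgi assumes can exist
          - UWSGI_MAX_FD=1024
        # Alternatively (or additionally), lower the container's nofile
        # ulimit, which uwsgi consults when sizing those arrays
        ulimits:
          nofile:
            soft: 1024
            hard: 1024

With plain docker run, the equivalent would be -e UWSGI_MAX_FD=1024 and/or --ulimit nofile=1024:1024.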