meson: let's bump RLIMIT_NOFILE hard limit to 512K #10780
keszybz merged 1 commit into systemd:master
Conversation
Shouldn't this limit be based on a percentage of available memory rather than some arbitrary number? By default Linux caps the maximum total number of open file descriptors at roughly 10% of memory (that's the number in /proc/sys/fs/file-max). Of course it can be bumped to a higher value (does systemd do this automatically when RLIMIT_NOFILE > file-max?), but using an arbitrary number might cause issues on systems with a small amount of memory: e.g. on my older Thinkpad X220 with 4GB of RAM, the file-max limit is merely 382104. Maybe something like "80% of file-max" would work better in practice? Otherwise a single process could use up the quota of open FDs intended for the whole OS.
On current kernels, fds are tracked for the purposes of memcg like any other memory allocated by processes. Thus the limit on fds is definitely not useful for memory-related tracking, because that's already better covered by the memcg limits themselves. Hence, in order to keep things simple, let's focus more on removing the limit than on making it dynamic.
BTW, #10921 highlights why this PR doesn't set the limit to 1M as requested by the reporter, but tries to be conservative by using 512K, i.e. a value that is substantially higher than the real-life use cases we know of, but not higher by more than one order of (binary) magnitude: it appears Java sets the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE, and then goes on to allocate a huge array with one entry per fd. With 512K that means an array of 2M bytes in size, and if we go too far overboard with the limit, Java pays for it every time...
|
This PR is not directly related to #10921, because
Prompted by:
https://lists.freedesktop.org/archives/systemd-devel/2018-October/041578.html