Calculate Linux open file limit?

Is there a way to calculate some kind of optimal maximum number of file handles for a Linux system?

The reason I’m asking is that I’m running a bunch of dockerized Elasticsearch containers and noticed that the host server has more than 2,000,000 file handles open according to ‘lsof’. Is this even a problem? Or is that approach way too blunt?
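
For reference, the count comes from something like the following (the exact invocation is from memory), and I’ve also seen /proc/sys/fs/file-nr mentioned as a less blunt way to see what the kernel itself has allocated:

    # count every line lsof prints -- note this overcounts, since the same
    # open file shows up once per process that holds it
    lsof | wc -l

    # what the kernel thinks: allocated handles, free handles, system-wide max
    cat /proc/sys/fs/file-nr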

I’m guessing that certain parts of the server, the filesystem for example, can handle more open file handles than the network adapter?

I hope the main question is clear; I tried searching but found no answer. Thanks in advance.

That is the default setting. Perhaps this howto will help you set it up.

The major limiting factor when keeping insane numbers of files open simultaneously is kernel-space memory. This is especially true for sockets and IPC objects, because each of them has not only an associated descriptor structure but buffers as well. The pure performance aspect should be negligible. If you have a 64-bit kernel on a big system with many gigabytes of RAM, I don’t see any practical problem with the blunt approach. Linux should handle it just fine.
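
If you want to see where that kernel memory actually goes, one place to look (assuming your kernel exposes the usual slab statistics, and cache names can vary between kernel versions) is /proc/slabinfo or slabtop, where the caches backing file and socket objects show up:

    # slab caches backing open files, sockets and directory entries
    sudo grep -E '^(filp|TCP|sock_inode_cache|dentry)' /proc/slabinfo

    # or interactively, sorted, printed once
    sudo slabtop -o | head -n 20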

The kernel exposes the current global limit in /proc/sys/fs/file-max. When the kernel starts, the default limit is determined and set automatically based on the amount of memory you have. Once the limit is reached the system simply refuses to open any more files, and when that happens some services and applications will start to crash, releasing their descriptors again. The default is generally a reasonable value as is (I have 292395 on an old 3 GB 32-bit machine and 3276200 on my laptop), but if you really need to, you can easily push the limit up a bit without any problem. Just don’t go to insane levels like 10,000,000 files on a machine with only 1 GB of RAM, or more than ten times the default value for your system, and it should be fine.
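
For example, checking and raising the limit looks roughly like this (the number and the config file name are just illustrative):

    # current global maximum and current usage (allocated / unused / max)
    cat /proc/sys/fs/file-max
    cat /proc/sys/fs/file-nr

    # raise it for the running system (lost on reboot)
    sudo sysctl -w fs.file-max=4000000

    # make it persistent across reboots
    echo 'fs.file-max = 4000000' | sudo tee /etc/sysctl.d/60-file-max.conf
    sudo sysctl --system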
