Is there a way of calculating some kind of optimal level of maximum file handles for a Linux system?
The reason I'm asking is that I'm running a bunch of dockerized Elasticsearch containers and noticed that the host server has more than 2,000,000 file handles open according to `lsof`. Is this even a problem? Or is that approach way too blunt?
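For context, here is how I've been checking the counters (these are the standard Linux procfs paths; note that `lsof` also lists memory-mapped files and repeats shared files for every process holding them, so its line count can be much higher than the number of handles actually allocated):

```shell
# System-wide: allocated handles, unused-but-allocated handles, kernel maximum
cat /proc/sys/fs/file-nr

# The configured system-wide ceiling (third field of file-nr)
cat /proc/sys/fs/file-max

# Per-process soft limit for the current shell
ulimit -n

# Count the file descriptors one process really has open
# (here the shell's own PID, $$, as an example)
ls /proc/$$/fd | wc -l
```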
I'm guessing that some parts of the system, the filesystem for example, can cope with more open file handles than others, such as the network stack?
I hope the main question is clear; I tried searching but found no answer. Thanks in advance.