Nginx errors out with “Too many open files”

If you are running Nginx as a frontend to Artifactory, you may bump into the OS's or Nginx's default open file descriptor limit. When that happens, Nginx's error.log may show errors such as:

2010/04/16 13:24:16 [crit] 21974#0: *3188937 open() "/usr/local/nginx/.." failed (24: Too many open files), client: ... server: foo.com, request: "GET /artifactory/.. HTTP/1.1", upstream: "http://localhost:8081/artifactory..", host: "foo.com"

You can list the limits in effect for a given process on a Linux machine by running:

cat /proc/$PID/limits
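
For example, assuming pgrep is available on your machine (an assumption, not something the original note relies on), you could inspect the limit of the Nginx master process like this:

pgrep -o nginx                                          # print the PID of the oldest (master) Nginx process
cat /proc/$(pgrep -o nginx)/limits | grep "open files"  # show only the "Max open files" row

The "Max open files" row lists the soft and hard limits that apply to that process.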

By default, Nginx applies proxy_buffering to upstream responses, which means that it buffers the upstream response payload, spilling it to temporary files on disk when it does not fit in memory. Normally, Nginx saves those files to a temporary path and cleans up after itself when the request terminates. However, errors such as the one above may cause Nginx to fail to clean up those temporary files as it should. As a result, your disk space can fill up very fast, and the space is reclaimed only when you shut down the Nginx process.
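If you want to control where those temporary files are written or cap how large they can grow, the relevant directives look roughly like this (the path and sizes below are illustrative assumptions, not values from this note):

http {
    proxy_buffering on;                           # on by default; buffer upstream responses
    proxy_temp_path /var/cache/nginx/proxy_temp;  # where spilled response bodies are written
    proxy_max_temp_file_size 1024m;               # cap on a single temporary file
}

Setting proxy_max_temp_file_size to 0 disables buffering to disk entirely, at the cost of tying the rate at which Nginx reads from the upstream to the rate at which the client consumes the response.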

A command such as:
lsof -a +L1 /var

might show some file descriptors with the word "(deleted)" next to them. The mere existence of such entries is technically normal; however, if you see a "(deleted)" entry that is not cleaned up after a while, something could be wrong. As long as a process holds such a descriptor open, the OS cannot free the disk space consumed by the file.
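To estimate how much space such entries are holding, you can sum the size column of the deleted files. This is a rough sketch; it assumes the SIZE/OFF value is the seventh field of your lsof output, which can vary between lsof versions:

lsof -a +L1 /var | grep '(deleted)' | awk '{sum += $7} END {print sum/1024/1024 " MB"}'

Restarting (or gracefully reloading) Nginx closes those descriptors and lets the OS reclaim the space.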

If you are using systemd, you can follow the steps in this Stack Overflow answer to increase the open files limit for Nginx:
https://stackoverflow.com/a/36423859/4813105
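
In case you cannot follow the link, the usual systemd recipe raises the limit both at the service level and in Nginx itself. This is a sketch of that general approach, not a verbatim copy of the answer, and the value 65536 is an illustrative assumption:

# /etc/systemd/system/nginx.service.d/override.conf
# (create it with: systemctl edit nginx)
[Service]
LimitNOFILE=65536

# in nginx.conf, at the top level (main context):
worker_rlimit_nofile 65536;

Then apply the change:

systemctl daemon-reload
systemctl restart nginx

LimitNOFILE governs what systemd allows the service as a whole, while worker_rlimit_nofile raises the limit for the Nginx worker processes.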