Troubleshooting and resolving the Too many open files problem in Linux

Author: Grey

Original addresses:

Blog Park: Troubleshooting and resolving the Too many open files problem in Linux

CSDN: Troubleshooting and resolving the Too many open files problem in Linux

Too many open files is a common error on Linux systems. Literally, it means the program has opened too many files. However, "files" here include not only regular files but also open communication links (such as sockets), listening ports, and so on, which is why they are often called handles, and why this error is usually described as the number of handles exceeding the system limit. The cause is that, at some point, a process has opened more files and connections than the system allows.

The command ulimit -a shows the resource limits of the current shell, including the maximum number of open handles:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31767
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31767
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

You can see that open files is set to 1024. You can raise this limit with the following command:

ulimit -n 65535

This method temporarily raises the open-file limit to 65535, but it only affects the current shell session and the setting is lost after the system restarts.
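You can verify the change in the same shell; as a small sketch (assuming the command above succeeded), bash lets you query the soft and hard limits separately:

ulimit -n      # soft open-file limit, should now print 65535
ulimit -Sn     # the same soft limit, explicitly
ulimit -Hn     # hard limit; an unprivileged user cannot raise it beyond its current value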

Another way is to modify the system configuration file. Taking Ubuntu as an example, the relevant file lives under /etc/security/ (commonly /etc/security/limits.conf, which is read by PAM).

Add the following lines to this configuration file:

* soft nofile 65535
* hard nofile 65535
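The * applies the limits to all users. If you only need to raise the limit for the account that runs your service, you can scope the entries to that user instead; myuser below is just a placeholder for the real account name:

myuser soft nofile 65535
myuser hard nofile 65535

These limits are applied by PAM at login, so they take effect for sessions started after the change.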

If you want to view the number of handles currently opened by a process, you can use the following command:

lsof -p <PID> | wc -l
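For example, assuming the process you care about has PID 12345 (substitute the real process ID, e.g. from ps or pgrep):

lsof -p 12345 | wc -l

Note that lsof also lists things that are not regular files (sockets, pipes, memory-mapped libraries), so the count can be slightly higher than the number of file descriptors actually open.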

In addition, if you use supervisor to manage and start a project, you may find that this configuration does not take effect. The reason is that supervisor's default limit on open file descriptors for the processes it manages is 1024.

To view the maximum number of open files allowed for a particular process, check the limits file under /proc for that process ID:

cat /proc/<PID>/limits

One of the lines will be:

Max open files            1024                 1024                 files
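If you only care about this line, you can filter it directly; 12345 again stands in for the real process ID:

grep "Max open files" /proc/12345/limits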

A program hosted by supervisor therefore inherits supervisor's default of 1024, so you need to change the supervisor configuration manually. The modification method is as follows, taking Ubuntu as an example: locate the supervisor configuration file (typically /etc/supervisor/supervisord.conf).

In the [supervisord] section, add the minfds option:

[supervisord]
minfds=65535                  ; min. avail startup file descriptors; default 1024

After the configuration is completed, restart supervisor for the change to take effect (using systemctl as an example):

systemctl restart supervisor
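If you want to double-check that supervisord and the programs it manages came back up after the restart, supervisorctl status lists them and their current state:

supervisorctl status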

You can then check the limits of the managed process again with:

cat /proc/<PID>/limits

and confirm that the open-files line now shows the new value:

Max open files            65535                65535                files