I am using subprocess to call another program and save its return value to a variable. This process is repeated in a loop, and after a few thousand iterations the program crashed with the following error:
Traceback (most recent call last):
File "./extract_pcgls.py", line 96, in <module>
SelfE.append( CalSelfEnergy(i) )
File "./extract_pcgls.py", line 59, in CalSelfEnergy
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
File "/usr/lib/python3.2/subprocess.py", line 745, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.2/subprocess.py", line 1166, in _execute_child
errpipe_read, errpipe_write = _create_pipe()
OSError: [Errno 24] Too many open files
Any ideas on how to solve this issue would be much appreciated!
Code supplied from comments:
cmd = "enerCHARMM.pl -parram=x,xtop=topology_modified.rtf,xpar=lipid27_modified.par,nobuildall -out vdwaals {0}".format(cmtup[1])
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
out, err = p.communicate()
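For reference, the call above can be restructured so the pipe is closed deterministically on every iteration. This is a sketch using a stand-in echo command rather than the original enerCHARMM.pl invocation; Popen has supported the with statement since Python 3.2, and leaving the block closes the child's stdout pipe.

```python
import subprocess

def run_once(cmd):
    # Popen as a context manager (Python 3.2+) guarantees the stdout
    # pipe is closed when the block exits, so descriptors do not
    # accumulate across loop iterations.
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True) as p:
        out, err = p.communicate()
    return out

# Repeating the call no longer leaks one pipe per iteration.
results = [run_once("echo hello") for _ in range(20)]
```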
7 Answers
#1
22
On Mac OS X (El Capitan), view the current configuration:
#ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
Set the open files value to 10K:
#ulimit -Sn 10000
Verify results:
#ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
#2
10
I guess the problem was due to the fact that I was processing an open file with subprocess:
cmd = "enerCHARMM.pl -par param=x,xtop=topology_modified.rtf,xpar=lipid27_modified.par,nobuildall -out vdwaals {0}".format(cmtup[1])
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
Here the cmd variable contains the name of a file that has just been created but not closed. subprocess.Popen then calls a system command on that file. After doing this many times, the program crashed with that error message.
So the lesson I learned from this is:
Close the file you have created, then process it
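A minimal sketch of that fix, with an illustrative file name and a stand-in cat command rather than the original CHARMM invocation:

```python
import os
import subprocess

# Write the input file and close it BEFORE handing it to the subprocess.
# The with-block releases the descriptor when it exits.
with open("input.tmp", "w") as f:
    f.write("some input data\n")

# Only now is the file processed by the external command.
p = subprocess.Popen("cat input.tmp", stdout=subprocess.PIPE, shell=True)
out, _ = p.communicate()
os.remove("input.tmp")
```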
Thanks
#3
4
You can try raising the open file limit of the OS:
ulimit -n 2048
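The same limit can be raised from inside the Python process with the standard-library resource module (POSIX only); this is a sketch of the programmatic equivalent of ulimit -n 2048.

```python
import resource

# Inspect the current per-process limit on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("before: soft={0} hard={1}".format(soft, hard))

# A process may raise its soft limit up to the hard limit without any
# special privileges; raising the hard limit itself requires root.
new_soft = 2048 if hard == resource.RLIM_INFINITY else min(2048, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```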
#4
4
A child process created by Popen() may inherit open file descriptors (a finite resource) from the parent. Use close_fds=True on POSIX (the default since Python 3.2) to avoid it. Also, "PEP 446 -- Make newly created file descriptors non-inheritable" deals with some remaining issues (since Python 3.4).
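One way to check that descriptors are not accumulating across repeated Popen() calls is to count this process's open descriptors before and after a loop. This sketch is Linux-only (it relies on /proc/self/fd) and uses the true command purely as a no-op child.

```python
import os
import subprocess

def open_fd_count():
    # Linux-specific: /proc/self/fd lists this process's descriptors.
    return len(os.listdir("/proc/self/fd"))

before = open_fd_count()
for _ in range(50):
    p = subprocess.Popen(["true"], stdout=subprocess.PIPE, close_fds=True)
    p.communicate()          # drains and closes the stdout pipe
after = open_fd_count()
print(before, after)         # the two counts should match
```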
#5
2
As others have noted, raise the limit in /etc/security/limits.conf. File descriptors were also an issue for me personally, so I ran
sudo sysctl -w fs.file-max=100000
and added a line with fs.file-max = 100000 to /etc/sysctl.conf (reload with sysctl -p).
Also, if you want to make sure that your process is not affected by anything else (which mine was), use
cat /proc/{process id}/limits
to find out what the actual limits of your process are. In my case, the software running the Python scripts also had its own limits applied, which overrode the system-wide settings.
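The same check can be done from within the process itself; this Linux-only sketch parses /proc/self/limits, the programmatic form of cat /proc/{process id}/limits.

```python
import resource

def max_open_files_soft():
    # Linux-specific: the "Max open files" row of /proc/self/limits
    # holds the soft limit in the fourth whitespace-separated column.
    with open("/proc/self/limits") as f:
        for line in f:
            if line.startswith("Max open files"):
                return int(line.split()[3])

print("soft limit on open files:", max_open_files_soft())
```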
I'm posting this answer here after resolving my particular issue with this error; hopefully it helps someone.
#6
1
Maybe you are invoking the command multiple times. If so, each call creates a new pipe via stdout=subprocess.PIPE. Between each call, try calling p.stdout.close().
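A sketch of that suggestion in a loop, using a stand-in echo command; note that communicate() already closes the pipe, so the explicit close is a harmless belt-and-braces step here.

```python
import subprocess

outputs = []
for i in range(5):
    p = subprocess.Popen("echo run {0}".format(i),
                         stdout=subprocess.PIPE, shell=True)
    out, _ = p.communicate()   # reads everything and closes the pipe
    p.stdout.close()           # explicit close between calls, as suggested
    outputs.append(out)
```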
#7
0
This opens a file in a subprocess; communicate() is a blocking call:
ss = subprocess.Popen(tempFileName, shell=True)
ss.communicate()