NOTICE: Feedback on how this question can be improved would be welcome, as I am still learning. I have not included code because I am confident the code itself does not need fixing: when I change the parameters to produce fewer outputs, the script works exactly as it should, and debugging it produces no errors. I have researched online a great deal and cannot find an answer. When the parameters are changed to produce more outputs, the script runs for hours and then stops. My goal with the question below is to determine whether Linux will time out a long-running process (or something related) and, if so, how that can be resolved.
I am running a shell script with several for loops that does the following:
- Goes through existing files and copies data into a newly saved/named file
- Makes changes to the data in each file
- Submits these files (which number in the thousands) to another system
The script is very basic (beginner here), but as long as I don't give it too much to generate, it works as it should. However, if I want it to loop through all possible cases, which means generating tens of thousands of files, then after a certain amount of time the shell script just stops running.
I have more than enough hard drive storage to support all the files being created. One thing to note, however, is that during the part where files are being submitted, if the machine they are submitted to is full at that moment, the shell script I'm running has to pause where it is and wait for the other machine to clear. This works for a certain amount of time, but eventually the shell script stops running and won't continue.
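As a concrete illustration, the pause-and-wait step described above might be sketched in shell like this; `remote_is_full` and `submit_cmd` are hypothetical stand-ins for however the script actually checks the other machine and submits a file, and the stubs here only simulate that behaviour:

```shell
# Sketch of the "pause until the other machine clears" step described above.
# remote_is_full and submit_cmd are hypothetical stand-ins, not real tools.
ATTEMPTS=0
remote_is_full() { [ "$ATTEMPTS" -lt 2 ]; }     # stub: reports "full" for the first two polls
submit_cmd() { echo "submitted $1"; }           # stub: pretend to submit a file

submit_with_wait() {
    while remote_is_full; do                    # poll until the receiving system has room
        ATTEMPTS=$((ATTEMPTS + 1))
        sleep 1                                 # in real use, sleep minutes between polls
    done
    submit_cmd "$1"
}

submit_with_wait data_0001.txt                  # prints: submitted data_0001.txt
```

A loop like this never exits on its own if the remote side stays full, which is one way a script can appear to "stop" while still showing up in top/ps.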
Is there a way to make it continue, or to prevent it from stopping? I typed Ctrl+Z to suspend the script and then fg to resume, but it still does nothing. I check its status by typing ls -la to see whether the output file size is increasing; it is not, although top/ps says the script is still running.
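For reference, the status checks described above can be reproduced like this (a background `sleep` stands in for the script, since the real one isn't shown); the STAT column is what distinguishes a process that is merely waiting from one that has stopped:

```shell
# Checking whether a "running" process is actually making progress.
# A background 'sleep' stands in for the stalled script here.
sleep 30 &
PID=$!
# STAT column: R = running, S = sleeping, D = blocked on I/O, T = stopped (e.g. after Ctrl+Z)
PS_OUT=$(ps -o pid=,stat=,etime= -p "$PID")
echo "$PS_OUT"
ls -la .        # if output files stop growing while STAT shows S or D, the script is blocked, not dead
kill "$PID"
```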
1 Solution
#1
Assuming that you are using 'Bash' for your script - most likely, you are running out of 'system resources' for your shell session, and most likely the manner in which your script works is causing the issue. Without seeing your script it is difficult to provide additional guidance; however, you can check several items at the 'system level' that may assist you, i.e.:
- review system logs for errors about your process or about 'system resources'
- check your docs: man ulimit (or 'man bash' and search for 'ulimit')
- consider removing 'deep nesting' (if present); instead, create work sets where step one builds the 'data' needed for the next step, i.e. if possible, instead of:

step 1 (all files) ## guessing this is what you are doing
step 2 (all files)
step 3 (all files)
Try each step for each file - Something like:
for MY_FILE in ${FILE_LIST}    # unquoted on purpose: relies on word splitting of the list
do
    step_1 "${MY_FILE}"
    step_2 "${MY_FILE}"
    step_3 "${MY_FILE}"
done
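The system-level checks from the list above can be run as, for example (availability of `dmesg`/`journalctl` and the permissions they need vary by distro, so the error suppression here is a hedge, not a recommendation):

```shell
# System-level checks suggested above.
ulimit -a        # all per-session limits for this shell
ulimit -n        # open-file limit, a common culprit in file-heavy loops
# Recent kernel messages, e.g. the OOM killer terminating a process:
dmesg 2>/dev/null | tail -n 20
journalctl --since "1 hour ago" 2>/dev/null | grep -iE 'killed process|out of memory' || true
```

If the OOM killer or a ulimit is the cause, these are the places it will show up; if nothing appears, the script is more likely blocked (e.g. waiting on the submission target) than terminated.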
:)
Dale