I'm running a big PHP script that may take a full day to finish its job. It grabs data from a MySQL database and uses cURL to test things with each of roughly 40,000 records. To let it run in the background for as long as it needs, I execute it from the terminal. The script itself contains this setting to make sure it runs as long as possible, until it finishes:
set_time_limit(0); // run without timeout limit
and because I execute it from another, separate PHP script, I also use this function:
ignore_user_abort(1); // ignore my abort
Executing it directly from the command line gives me only two choices:
1 ) wait until the script finishes
2 ) cancel the whole process
After searching, I found an article that gives me a third choice: run it in the background for as long as possible by creating an external PHP script that executes the main big PHP script in the background using this function:
exec("php bigfile.php");
That means I can open this external page normally from a browser and leave it without worry, since ignore_user_abort will keep it running in the background. That's still not the problem.
The problem is that after an unknown period, the script stops its job. How do I know? I have it write the current date and time to an external file for each record it works on, so I keep refreshing that file to see whether it has stopped updating.
After an unknown period it actually stops for no apparent reason; nothing in the script tells it to stop. And if anything goes wrong with a record, I have it skip that record (nothing goes wrong, though; the records are all handled the same way, so if one works, all should work).
However, my main suspicions are the following:
- Apache has a timeout that kills it
- This was not the proper way to execute a PHP script in the background
- There is a timeout somewhere, whether in PHP, Apache, or (MySQL!?)
That's where my biggest doubt lies: MySQL. Would it ever stop feeding records to the PHP while loop? Would it crash the whole script if an error occurred? Does it have a timeout anywhere that would crash the whole script?
If none of those apply, is there any way to log exactly what is going on in the script right now, or why it would crash? Any detailed way to log everything?
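One way to get that visibility is to turn on full error logging and register a shutdown handler that records whether the script died from a fatal error (such as hitting memory_limit). This is only a sketch; the log path is a placeholder, and the commented progress line assumes a record id variable from your own loop:

```php
<?php
// Send every PHP error, including fatals, to a file we control.
// NOTE: the log path here is a hypothetical placeholder.
error_reporting(E_ALL);
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/bigfile-errors.log');

register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && $err['type'] === E_ERROR) {
        // A fatal error (e.g. "Allowed memory size exhausted") ended the script.
        error_log('FATAL: ' . $err['message'] . ' in ' . $err['file'] . ':' . $err['line']);
    } else {
        error_log('Script exited normally at ' . date('Y-m-d H:i:s'));
    }
});

// Inside the main loop, you could also log progress with peak memory, e.g.:
// error_log(sprintf('record %d done, peak mem %.1f MB',
//     $id, memory_get_peak_usage(true) / 1048576));
```

If the script is being killed from outside (a system halt, an OOM kill), the "exited normally" line will simply be missing, which is itself a useful clue.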
UPDATE: I found this in the messages file in /var/log/:
Dec 29 16:29:56 i0sa shutdown[5609]: shutting down for system halt
Dec 29 16:30:14 i0sa exiting on signal 15
Dec 29 16:30:28 i0sa syslogd 1.5.0#6: restart.
Dec 29 16:50:28 i0sa -- MARK --
.....
Dec 29 18:50:31 i0sa -- MARK --
Dec 29 19:02:36 i0sa shutdown[3641]: shutting down for system halt
Dec 29 19:03:11 i0sa exiting on signal 15
Dec 29 19:03:48 i0sa syslogd 1.5.0#6: restart.
It says "shutting down for system halt". I'll try to confirm this against future crashes by matching the times. Could this be causing it, and why? memory_limit is 128M while I have 2GB of server RAM; could that be it?
P.S.: I restarted the server several times manually, but this one says shutdown and halt?
9 Answers
#1
5
For such cases I use the nohup command with success, like this:
nohup php /home/cron.php >/dev/null 2>&1 &
After that you can check whether the script is running with:
jobs -l
Note: when you use nohup, the path to the PHP file must be absolute, not relative. I also think it is not very graceful to call one PHP file from another only to keep the execution from stopping before the work finishes.
External reference: http://en.wikipedia.org/wiki/Nohup
Also make sure you do not have memory leaks in your script, which could make it crash after some time with an "out of memory" error.
#2
2
Have you tried using nohup?
exec("nohup php bigfile.php &");
If your host allows the nohup command, the command should run in the background and the script calling exec should continue immediately.
#3
1
Osa, in all likelihood your Apache killed this, and I could go into some detail about the possible causes, as others surely will. Instead, I will answer your problem rather than your question and advise you to use tmux for the task you describe (or screen if tmux is not available).
So what you would do is open up tmux and start the script from in there. Detach with ctrl-b d and reattach later with tmux attach to see how far it has got. While detached you can log out without stopping it.
For a short intro into other tmux features that you don't yet need but that may help you understand the approach, http://victorquinn.com/blog/2011/06/20/tmux/ and http://blog.hawkhost.com/2010/06/28/tmux-the-terminal-multiplexer/ are quite OK.
#4
0
Execution time is not the only thing to consider. The next likely culprit would be the script exceeding memory_limit.
Have you checked/enabled your error logging? It could be failing silently.
#5
0
If you use Apache, there is a timeout of roughly 300 seconds. Calling apache_reset_timeout(); regularly can allow a script to run forever, as long as you don't stop it.
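One caveat worth noting: apache_reset_timeout() only exists when PHP runs as an Apache module, so a sketch like this guards the call so the same code also runs from the CLI (the helper name is a made-up example, called once per record from the main loop):

```php
<?php
// Sketch: reset Apache's request timeout once per processed record,
// but only when the function actually exists (mod_php under Apache).
// Under the CLI, where there is no Apache timeout, this is a no-op.
function keep_apache_alive(): void
{
    if (function_exists('apache_reset_timeout')) {
        apache_reset_timeout(); // restart Apache's request timeout clock
    }
}
```

You would call keep_apache_alive() at the top of each loop iteration.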
#6
0
First of all, I have faced this problem and resolved it, but I want to emphasize that your MySQL server is not responsible for this hanging; you only need to change some settings in Apache/PHP. Put these lines at the top of your PHP file:
ini_set('max_execution_time', 1000);
ini_set('memory_limit', '50M');
set_time_limit(0);
and each data fetch should use its own cURL handle, cleaned up afterwards:
$url_class = $URL_path;
// open connection
$ch_school_class = curl_init();
curl_setopt($ch_school_class, CURLOPT_URL, $url_class);
$result_school_class = curl_exec($ch_school_class);
// clean up
curl_close($ch_school_class);
#7
0
1: Consider running the PHP script externally using popen(), pclose(), etc.
2: You can shorten the total time of your cURL requests by using a multi-cURL handler
3: If possible, try doing it in a different language. PHP is known to be considerably slow for larger tasks. Why take all day when it could be done in a few hours at most? (I say a few hours since it has to make network requests to 40,000 different addresses.)
4: And, as many others have said, try using nohup if you can.
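The multi-cURL suggestion in point 2 means running a batch of the 40,000 requests concurrently instead of one at a time. A minimal sketch, assuming the caller supplies the URLs:

```php
<?php
// Fetch a batch of URLs concurrently with curl_multi.
// Returns an array of response bodies keyed like the input array.
function fetch_batch(array $urls): array
{
    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30); // never hang forever on one host
        curl_multi_add_handle($mh, $ch);
        $handles[$key] = $ch;
    }

    // Drive all transfers until every handle has finished.
    do {
        $status = curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh); // wait for activity instead of busy-looping
        }
    } while ($running && $status === CURLM_OK);

    $results = [];
    foreach ($handles as $key => $ch) {
        $results[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $results;
}
```

Processing the 40,000 records in batches of, say, 20 URLs per fetch_batch() call keeps concurrency bounded while still cutting the total wall-clock time substantially.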
#8
0
Using ignore_user_abort to run a long-running script through Apache is a dirty way to do it.
The simplest way is to use GNU screen: just type screen and you'll be attached to a new console inside your current console. Launch your script, then detach with ctrl-a d (full manual here); screen will keep it running even if you get disconnected from your server.
To reattach to your screen, use screen -r and you'll get back to your running script.
The hardest way (and the cleanest) is to rewrite your script to work as a system daemon. There are a lot of libs that can help you do it; I suggest you dig into PEAR's daemon lib (an example here).
As for the memory limit problem (?): before deciding to raise the memory_limit configuration, check whether your script is actually consuming more RAM than your current configuration allows. Run a simple ps aux | grep php and look at the RSS column; that's all the memory your script is eating.
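You can also do the same check from inside the script itself, comparing its own usage against the configured limit. A small sketch:

```php
<?php
// Report current and peak memory usage versus the configured memory_limit.
function memory_report(): string
{
    $toMb = function (int $bytes): float {
        return $bytes / 1048576;
    };
    return sprintf(
        'current %.1f MB, peak %.1f MB, limit %s',
        $toMb(memory_get_usage(true)),      // memory currently allocated
        $toMb(memory_get_peak_usage(true)), // high-water mark for this run
        ini_get('memory_limit')
    );
}
```

Logging memory_report() every few hundred records would show whether usage is creeping toward the 128M limit long before the script dies.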
#9
0
There are many other problems we generally face:
- Your failing program might be connecting to other users/services/servers with inappropriate user permissions, because web PHP generally runs as user apache (CentOS) or www-data (Ubuntu), which are not in the sudoers list. Grant the relevant permissions and then try again.
- Check php.ini for a disable_functions line containing exec, shell_exec, or similar; remove/comment that line and then try again.