How do I restart Celery workers gracefully?

Time: 2021-03-09 19:17:17

While issuing a new build to update the code in the workers, how do I restart the Celery workers gracefully?

Edit: What I intend to do is something like this.

  • A worker is running, probably uploading a 100 MB file to S3
  • A new build comes in
  • The worker code has changed
  • The build script fires a signal to the worker(s)
  • New workers are started with the new code
  • The worker(s) that got the signal exit after finishing their current job (see the sketch after this list).
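
One way to wire this up, sketched below, is to send the old workers TERM, which Celery treats as a warm shutdown (finish the current job, then exit), and start the replacement workers alongside them. The sketch assumes each worker writes a pidfile under /var/run/celery/ and that the app module is called proj; both are placeholder names.

# rough build-script sketch, not a drop-in solution
for pidfile in /var/run/celery/*.pid; do
    kill -TERM "$(cat "$pidfile")"   # warm shutdown: finish the current job, then exit
done
# start replacement workers running the new code
celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/new-%n.pid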

6 solutions

#1


42  

The new recommended method of restarting a worker is documented here: http://docs.celeryproject.org/en/latest/userguide/workers.html#restarting-the-worker

$ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
$ celery multi restart 1 --pidfile=/var/run/celery/%n.pid

According to http://ask.github.com/celery/userguide/workers.html#restarting-the-worker you can restart a worker by sending it a HUP signal:

 ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | xargs kill -HUP

#2


11  

celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
celery multi restart 1 --pidfile=/var/run/celery/%n.pid

http://docs.celeryproject.org/en/latest/userguide/workers.html#restarting-the-worker

#3


4  

If you're going the kill route, pgrep to the rescue:

kill -9 `pgrep -f celeryd`

Mind you, this is not a long-running task and I don't care if it terminates brutally. I'm just reloading new code during dev. I'd go the restart-service route if it were more sensitive.
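
If the workers should be allowed to finish their current job first, a gentler variant of the same one-liner is to send TERM instead of KILL; Celery treats TERM as a warm shutdown and exits once the running job completes (this still assumes the worker processes match celeryd):

pkill -TERM -f celeryd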

#4


3  

You should look at Celery's autoreloading.
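
Note that the worker's experimental --autoreload option was removed in later Celery releases. A common development-time alternative is to restart the worker whenever a source file changes, using the watchdog package's watchmedo tool (the proj module name below is a placeholder):

watchmedo auto-restart -d ./ -p '*.py' -R -- celery -A proj worker -l info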

#5


2  

I have repeatedly tested the -HUP solution using an automated script, but find that about 5% of the time, the worker stops picking up new jobs after being restarted.

A more reliable solution is:

stop <celery_service>
start <celery_service>

which I have used hundreds of times now without any issues.

From within Python, you can run:

import subprocess

# assumes the worker is managed as a system service (e.g. Upstart/SysV init)
# that can be controlled with "stop <name>" / "start <name>"
service_name = 'celery_service'
for command in ['stop', 'start']:
    subprocess.check_call(command + ' ' + service_name, shell=True)
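
On a systemd-managed host, the equivalent is a single restart of the unit (celery_service is again a placeholder name):

sudo systemctl restart celery_service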

#6


0  

What should happen to long-running tasks? I like it this way: long-running tasks should do their job. Don't interrupt them; only new tasks should get the new code.

But this is not possible at the moment: https://groups.google.com/d/msg/celery-users/uTalKMszT2Q/-MHleIY7WaIJ
