I set an environment variable in supervisord:
[program:worker]
directory = /srv/app/
command=celery -A tasks worker -Q default -l info -n default_worker.%%h
environment=BROKER="amqp://admin:password@xxxxx:5672//"
Within my celeryconfig.py I then try to read that variable like this:
BROKER = os.environ['BROKER']
But I still get the KeyError below. Why?
File "/usr/local/lib/python2.7/dist-packages/celery/loaders/base.py", line 106, in import_module
return importlib.import_module(module, package=package)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/srv/app/celeryconfig.py", line 6, in <module>
BROKER = os.environ['BROKER']
File "/usr/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'BROKER'
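A dump like the one below can be produced from celeryconfig.py with a snippet along these lines (a sketch; the exact code suggested in the comments is not shown, and the output path is an assumption):

```python
import os
import pprint

# Debug aid (hypothetical): write the environment the supervisord-spawned
# process actually inherits to a file, so it can be inspected after the
# worker fails to start. The path /tmp/worker_env.txt is illustrative.
with open('/tmp/worker_env.txt', 'w') as fh:
    pprint.pprint(dict(os.environ), stream=fh)
```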
Here is a file dump of the environment, as suggested in the comments:
{
'SUPERVISOR_GROUP_NAME': 'celery_default_worker',
'TERM': 'linux',
'SUPERVISOR_SERVER_URL': 'unix:///var/run/supervisor.sock',
'UPSTART_INSTANCE': '',
'RUNLEVEL': '2',
'UPSTART_EVENTS': 'runlevel',
'PREVLEVEL': 'N',
'SUPERVISOR_PROCESS_NAME': 'celery_default_worker',
'UPSTART_JOB': 'rc',
'PWD': '/',
'SUPERVISOR_ENABLED': '1',
'runlevel': '2',
'PATH': '/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin',
'previous': 'N'
}
2 Answers
#1
6
It looks like a known bug in supervisord:
http://github.com/Supervisor/supervisor/issues/91 (kind of resolved)
http://github.com/Supervisor/supervisor/pull/550 (pending)
In that case, moving your environment spec to global scope (for supervisord process itself) may be an acceptable workaround.
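Moved to global scope, the setting might look like this (a sketch against the config shown in the question; whether your supervisord version expands it correctly for children should be verified):

```ini
; Hypothetical workaround: declare the variable on the supervisord
; process itself so every child program inherits it, instead of using
; the per-program environment= key affected by the bug.
[supervisord]
environment=BROKER="amqp://admin:password@xxxxx:5672//"
```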
Finally, if all else fails, wrap celery in a shell script that accepts this specific environment variable as a command line argument.
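Such a wrapper could be as small as this (a sketch; the script name and argument handling are assumptions, not code from the answer):

```shell
#!/bin/sh
# Hypothetical wrapper (e.g. run_worker.sh): export the first argument
# as BROKER, then exec the rest of the command line as the real command,
# bypassing supervisord's environment= handling entirely.
BROKER="${1:-}"
if [ "$#" -gt 0 ]; then shift; fi
export BROKER
exec "$@"
```

The supervisord `command=` would then pass the broker URL first, followed by the original celery command line.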
#2
2
This answer most likely does not describe the cause; see https://*.com/a/28829162/1589147 for information on a related supervisord bug instead.
I can partially reproduce your error. I do not see the error when celery runs within supervisor; I see it when I try to run the task from an environment outside supervisor where I did not set the BROKER environment variable. celeryconfig.py is executed both by celery and by anything that tries to execute a task.
I am not certain this is exactly the issue you have come across; if you could share how you are executing the tasks and when that exception is raised, it may help.
For example, if I try to run the task from ipython, an error is generated which matches your error.
In [1]: from tasks import add
In [2]: add.delay(2,3)
...
21 if hasattr(self.__class__, "__missing__"):
22 return self.__class__.__missing__(self, key)
---> 23 raise KeyError(key)
24 def __setitem__(self, key, item): self.data[key] = item
25 def __delitem__(self, key): del self.data[key]
KeyError: 'BROKER'
The celeryconfig.py is loaded locally in order to establish a connection to the celery broker and backend. I am unable to execute the task without setting the BROKER environment variable.
If I set the environment variable before executing my task the same code works for me.
In [3]: import os
In [4]: os.environ["BROKER"] = "broker is set"
In [5]: add.delay(2,3)
Out[5]: <AsyncResult: 0f3xxxx-87fa-48d7-9258-173bdd2052ca>
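One way to make the failure mode gentler is to read the variable with a fallback, so a client that imports celeryconfig.py outside supervisord gets a usable default instead of a bare KeyError. This is a sketch, not code from the answer, and the default URL is an assumption to adjust for your setup:

```python
import os

# Defensive variant of the config line: fall back to a local broker URL
# (placeholder) when BROKER is not set, e.g. when a client imports this
# module outside the supervisord-managed worker process.
BROKER = os.environ.get('BROKER', 'amqp://guest:guest@localhost:5672//')
```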
Here are the files I used in case it helps.
supervisor.conf (started with supervisord -c supervisor.conf):
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
loglevel = info
nodaemon = true
identifier = supervisor
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:worker]
command=/app/srv/main-env/bin/celery -A tasks worker -Q default -l info -n default_worker.%%h
environment=BROKER="amqp://admin:password@xxxxx:5672//"
directory=/app/srv/
numprocs=1
stdout_logfile=/app/srv/worker.log
stderr_logfile=/app/srv/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
celeryconfig.py:
import os
BROKER = os.environ['BROKER']
tasks.py:
from celery import Celery

app = Celery(
    'tasks',
    backend='amqp',
    broker='amqp://admin:password@xxxxx:5672//')
app.config_from_object('celeryconfig')

@app.task
def add(x, y):
    return x + y