How do I use my logger in a Python Flask app so that each page view is logged with a unique ID?

Date: 2022-11-30 08:59:21

I have a Flask app with a logging system that I wrote based on the standard Python logger.

I want to have a unique ID for each page view, so I can track the entire process and check for logged errors and premature ending.

The first thing I tried was to put the unique ID creation into the logger object's __init__. The result was that all requests had the same ID. I moved the unique ID creation into a method, and this improved the situation - multiple IDs appeared in the logs, and everything appeared to be working.

However, it seems that sometimes two requests use the same logger object. It looks like while one request is running, another one starts and runs the ID generation method, and then the first request starts using the new ID as well...

22:04:31 - MPvzGNAelE : in content
22:04:31 - MPvzGNAelE : in parse options
22:04:31 - MPvzGNAelE : finished parse options
22:04:31 - MPvzGNAelE : about to query API for user info. user id : 7401173, resource id: 59690
#the following is where the 2nd requests starts
22:04:31 - SbEOjmbFSa : in  frame initial 
22:04:31 - SbEOjmbFSa : in  frame 2 - cleaned IDs are 1114 127059932
22:04:31 - SbEOjmbFSa : in parse options
22:04:31 - SbEOjmbFSa : finished parse options
22:04:31 - SbEOjmbFSa : in  frame finishing - for 1114 127059932
#the following is the first request continuing with the new ID
22:04:31 - SbEOjmbFSa : api user info status is success
22:04:31 - SbEOjmbFSa : user_id is 5549565, user name is joe_spryly
22:04:31 - SbEOjmbFSa : config[data_source] is 0
22:04:31 - SbEOjmbFSa : api seems to be working, so going to retrieve items for 59690 7401173
22:04:31 - SbEOjmbFSa : getting items from API for 59690 7401173

This is my log object code...

import logging
import string
from datetime import datetime
from random import choice
from time import time

class AS_Log(object):
    def __init__(self):
        self.log = logging.getLogger('aman_log')
        logging.basicConfig(filename='amanset_log', level=logging.DEBUG)

    def generate_id(self):
        # string.letters is Python 2 only; ascii_letters works on 2 and 3
        chars = string.ascii_letters + string.digits
        self.log_id = ''.join(choice(chars) for _ in range(10))

    def format_date(self, timestamp):
        return datetime.fromtimestamp(timestamp).strftime('%m-%d-%y %H:%M:%S')

    def log_msg(self, message):
        self.log.debug('{0} - {1} : {2}'.format(self.format_date(time()), self.log_id, message))

I instantiate the log in the Flask app like

as_log = AS_Log()  # "as" is a reserved word in Python, so the instance is named as_log here

Then call generate_id per request like

@app.route('/<resource_id>/<user_id>/')
def aman_frame(resource_id,user_id):
    as_log.generate_id()
    return amanset_frame(resource_id,user_id)

then use the log in the amanset_frame function.

I have some notion that this is related to the application context in Flask, but I don't understand how to use that to fix this. Do I use app_context(), and if so, how?

1 solution

#1



Ultimately this is caused by mutating the shared as_log.log_id field across multiple execution contexts (most likely threads, but possibly greenlets or even sub-processes if the object lives in a shared parent interpreter, as with certain uwsgi / mod_wsgi setups).

request-1 --> mutates log_id --> does ... work ... --> response
              request-2 --> mutates log_id  --> does ... work ... --> response
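This interleaving can be reproduced outside Flask with a minimal sketch (hypothetical names, not the asker's code): two threads share one object, and the slower thread finds that the ID it was given has been replaced by the other thread mid-request.

```python
# Minimal reproduction of the race: one shared object, two "requests"
# (threads) that both call generate_id() on it.
import string
import threading
import time
from random import choice

class SharedLog(object):
    def generate_id(self):
        self.log_id = ''.join(choice(string.ascii_letters) for _ in range(10))

shared = SharedLog()
results = []

def handle_request(work_seconds):
    shared.generate_id()
    my_id = shared.log_id                   # the ID this "request" believes it owns
    time.sleep(work_seconds)                # simulate work while the other request runs
    results.append(my_id == shared.log_id)  # False if it was clobbered meanwhile

slow = threading.Thread(target=handle_request, args=(0.2,))
fast = threading.Thread(target=handle_request, args=(0.0,))
slow.start()
time.sleep(0.05)    # let the slow request get its ID first
fast.start()
slow.join()
fast.join()
# results == [True, False]: the fast request was fine; the slow one
# found its ID replaced under it, exactly as in the log excerpt above.
```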

The fix is to use a proper thread-local for the ID, so each thread of execution gets its own. Werkzeug has a LocalProxy that makes this a bit easier to manage, too.

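As a sketch of the thread-local approach (same class and method names as the question, but an illustrative rewrite, not a tested drop-in patch), the ID can live on a threading.local so each thread of execution sees only its own:

```python
# Thread-local variant of the asker's logger: log_id lives in a
# per-thread namespace instead of on the shared instance.
import logging
import string
import threading
from random import choice

class AS_Log(object):
    def __init__(self):
        self.log = logging.getLogger('aman_log')
        self._local = threading.local()   # one attribute namespace per thread

    def generate_id(self):
        chars = string.ascii_letters + string.digits
        self._local.log_id = ''.join(choice(chars) for _ in range(10))

    @property
    def log_id(self):
        return self._local.log_id         # only ever this thread's ID

    def log_msg(self, message):
        self.log.debug('{0} : {1}'.format(self.log_id, message))
```

In Flask specifically, stashing the ID on flask.g from a before_request handler gives the same per-request scoping, since g is backed by context locals; that is usually tidier than managing a raw thread-local yourself.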