Processing a large dictionary response in chunks

Time: 2023-02-08 23:47:13

I have a large dictionary response, which I then use to create objects in bulk. Even though each object is simple to create, doing it in bulk is taking too much time. I am trying to speed this up with multiprocessing or threads, so that I can create the objects from pieces of the data and store them together at the end.


Here is my function:


def create_obj(self, resp):
    # Build one Schools instance per entry and insert them all in a single bulk query.
    school_db = Schools.objects.bulk_create(
        [Schools(**value) for value in resp.values()]
    )
    return school_db

and a sample of my large dictionary response:


response = {
    'A101': {
        'VEG': True,
        'CONTACT': '12345',
        'CLASS': 'SIX',
        'ROLLNO': 'A101',
        'CITY': 'CHANDI',
    },
    'A102': {
        'VEG': True,
        'CONTACT': '54321',
        'CLASS': 'SEVEN',
        'ROLLNO': 'A102',
        'CITY': 'GANGTOK',
    },
}

So is there any way to split the dictionary into 15-20 chunks so that I can process them with multiprocessing?

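For the chunking part, here is a minimal sketch of one way to split the dictionary and fan the chunks out to worker processes. The helper name chunk_dict, the pool size, and the chunk count are illustrative, not from the original post; also note that forked workers should not reuse the parent's database connection, which is part of why the answer below leans toward batch_size or a background task instead.

from multiprocessing import Pool
from django.db import connections

def chunk_dict(data, n_chunks):
    # Split the dict into roughly equal-sized sub-dictionaries.
    items = list(data.items())
    size = max(1, -(-len(items) // n_chunks))  # ceiling division
    return [dict(items[i:i + size]) for i in range(0, len(items), size)]

def create_for_chunk(chunk):
    # Each worker inserts the Schools rows for its own chunk.
    return Schools.objects.bulk_create(
        [Schools(**value) for value in chunk.values()]
    )

# connections.close_all()  # let each forked worker open its own DB connection
# with Pool(processes=4) as pool:
#     pool.map(create_for_chunk, chunk_dict(response, 15))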

1 solution

#1



bulk_create accepts a batch_size parameter; however, I think what you really want to do is return a 200 response immediately and perform the bulk creation in a background task, e.g. with Celery.

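A minimal sketch of both ideas, assuming the Schools model from the question and a Celery setup that is not shown here; the task name create_schools and the batch size value are illustrative:

# Option 1: let bulk_create split the insert into smaller batches itself.
Schools.objects.bulk_create(
    [Schools(**value) for value in response.values()],
    batch_size=500,  # illustrative value; tune for your database
)

# Option 2: queue the work and respond to the client immediately.
from celery import shared_task

@shared_task
def create_schools(resp):
    Schools.objects.bulk_create(
        [Schools(**value) for value in resp.values()]
    )

# In the view:
#   create_schools.delay(response)  # queue the task
#   ...then return a 200 to the client without waiting for the inserts.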
