First of all, I know about queues and have good experience with them. The problem with a queue is that it is a queue: I want to execute multiple functions or commands together in the background, but a queue keeps the second command or function waiting and only executes it once the first one has finished.
For example, I have a table with ~3,000,000 records and I want to process them faster. What I can do is divide them into 5 equal chunks and execute 5 commands at once, so that I make better use of my CPU and process the data roughly 5 times faster.
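To give an idea of the split, something like this could produce the 5 ranges (a rough sketch; the items table and id column are just examples):

use Illuminate\Support\Facades\DB;

// Split the id range of the table into 5 roughly equal chunks.
// Assumes an auto-incrementing `id` column; `items` is only an example table name.
$minId = DB::table('items')->min('id');
$maxId = DB::table('items')->max('id');
$chunkSize = (int) ceil(($maxId - $minId + 1) / 5);

$chunks = [];
for ($i = 0; $i < 5; $i++) {
    $start = $minId + $i * $chunkSize;
    $end = min($start + $chunkSize - 1, $maxId);
    $chunks[] = [$start, $end]; // each range would be handled by its own command
}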
So, how can I do this with Laravel? Queues are not going to work because they execute jobs one after another. If your idea is to create 5 separate queues and supervisor processes to accomplish this, I don't think that's a standard way to do it.
Any idea what can be done in this case?
2 Solutions
#1
Just to add something from my personal experience.
First, install and configure Supervisor for your OS. The following is the configuration for a Linux-based OS such as Ubuntu.
Supervisor confs: (/etc/supervisor/conf.d)
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=username
numprocs=25
redirect_stderr=true
stderr_events_enabled=true
stderr_logfile=/var/www/app/storage/logs/worker.error.log
stdout_logfile=/var/www/app/storage/logs/worker.log
Then create jobs according to your needs and dispatch them.
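A minimal sketch of such a job could look like this (the class name and payload are just examples, not part of the Supervisor setup above):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessItemsBatch implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $itemIds;

    public function __construct(array $itemIds)
    {
        $this->itemIds = $itemIds;
    }

    public function handle()
    {
        // Process this batch of item ids here.
    }
}

Dispatching one such job per batch, e.g. ProcessItemsBatch::dispatch($batchOfIds), lets the workers pick the batches up as they become free.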
Supervisor will process jobs simultaneously; in this particular case, 25 jobs will be processed at a time.
#2
Finally, I found a solution. It is very easy. Here is how it works.
First of all, I divide the records into a number of chunks (for example, 5 chunks, which splits 3 million items into 5 chunks of 600k items each).
Then, I push a command onto a queue that I create on the fly for each chunk, and run a queue worker for that queue only once using the --once option. To make it simple to understand, here is the code I am using.
use Illuminate\Support\Facades\Artisan;

$chunk_id = 0;
foreach ($chunks as $chunk) {
    // Queue the processing command for this chunk on its own queue
    Artisan::queue('process:items', [
        'items' => $chunk,
    ])->onQueue('processChunk'.$chunk_id);

    // Start a worker for that queue, process a single job (--once),
    // and send the command to the background with the trailing &
    exec('php artisan queue:work --queue=processChunk'.$chunk_id.' --once > storage/logs/process.log &');

    $chunk_id++;
}
With the exec command, we run a queue worker for the specific queue created for that chunk. We also append & to the end of the command, which forces it to execute in the background at the OS level.
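For reference, process:items in the snippet above is a custom artisan command; a minimal sketch of what it might look like (the signature and processing logic are only an illustration):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ProcessItems extends Command
{
    // The {items*} array argument receives the chunk passed via Artisan::queue()
    protected $signature = 'process:items {items*}';

    protected $description = 'Process one chunk of records';

    public function handle()
    {
        foreach ($this->argument('items') as $item) {
            // Process a single item from the chunk here.
        }
    }
}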
This is how it can be done. I tested it and it works smoothly! Is there anything else to improve, or are there any drawbacks to using this method?