Laravel insert/update queries are extremely slow

Time: 2021-12-22 00:53:51

I have a Laravel application which must insert/update thousands of records per second in a for loop. My problem is that my database insert/update rate is only 100-150 writes per second. I have increased the amount of RAM dedicated to my database, but got no luck.

Is there any way to increase the write rate for MySQL to thousands of records per second?

Please provide me with optimal configurations for performance tuning.

And PLEASE do not downvote the question. My code is correct. It is not a code problem, because I have no such problem with MongoDB, but I have to use MySQL.

My Storage Engine is InnoDB

2 Solutions

#1

For insert, you might want to look into the INSERT DELAYED syntax. That will increase insert performance, but it won't help with update and the syntax will eventually be deprecated. This post offers an alternative for updates, but it involves custom replication.

One way my company has succeeded in speeding up inserts is by writing the SQL to a file and then using a MySQL LOAD DATA INFILE command, but I believe we found that this required the mysql command-line client to be installed on the server.

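A rough sketch of how this could look from inside a Laravel app (rather than the mysql CLI), assuming LOAD DATA LOCAL INFILE is permitted (local_infile enabled on the server, and PDO::MYSQL_ATTR_LOCAL_INFILE => true in the 'options' array of config/database.php); the measurements table and its columns are invented for illustration:

```php
use Illuminate\Support\Facades\DB;

// $rows: the data you would otherwise insert one by one, e.g.
// [['sensor_id' => 1, 'value' => 3.14, 'recorded_at' => '2021-12-22 00:53:51'], ...]
$path = storage_path('app/bulk_measurements.csv');
$handle = fopen($path, 'w');
foreach ($rows as $row) {
    fputcsv($handle, [$row['sensor_id'], $row['value'], $row['recorded_at']]);
}
fclose($handle);

// Send as a raw statement; the file path is quoted via PDO.
DB::unprepared(sprintf(
    "LOAD DATA LOCAL INFILE %s
     INTO TABLE measurements
     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
     LINES TERMINATED BY '\\n'
     (sensor_id, value, recorded_at)",
    DB::getPdo()->quote($path)
));
```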

I've also found that inserting and updating in a batch is often faster. So if you're calling INSERT 2k times, you might be better off running 10 inserts of 200 rows each. This would decrease the lock requirements and decrease information/number of calls sent over the wire.

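For example, a minimal sketch with Laravel's query builder (the events table and its columns are made up, and upsert() needs Laravel 8+):

```php
use Illuminate\Support\Facades\DB;

// $records: the ~2000 rows you would otherwise save one model at a time.
foreach (array_chunk($records, 200) as $chunk) {
    // One multi-row INSERT per 200 rows instead of 200 separate statements.
    DB::table('events')->insert($chunk);
}

// For the "update in a batch" case, Laravel 8+ has upsert(): rows matching the
// unique key (here external_id) are updated, the rest inserted, one statement per chunk.
foreach (array_chunk($records, 200) as $chunk) {
    DB::table('events')->upsert($chunk, ['external_id'], ['status', 'updated_at']);
}
```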

#2

Inserting rows one at a time, and autocommitting each statement, has two overheads.

Each transaction has overhead, probably more than the cost of a single insert. So inserting multiple rows in one transaction is the trick. This requires a code change, not a configuration change.

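A minimal Laravel sketch of that change (the readings table name and the chunk size of 500 are hypothetical; the point is one COMMIT per batch rather than one per row):

```php
use Illuminate\Support\Facades\DB;

// $rows: the data currently written with one autocommitted statement per row.
foreach (array_chunk($rows, 500) as $batch) {
    DB::transaction(function () use ($batch) {
        // Still one INSERT per row, but only one COMMIT per 500 rows.
        foreach ($batch as $row) {
            DB::table('readings')->insert($row);
        }
    });
}
```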

Each INSERT statement has overhead. A single-row insert is roughly 90% overhead and 10% actual insert.

The optimal is 100-1000 rows being inserted per transaction.

For rapid inserts:

  • Best is LOAD DATA -- if you are starting with a .csv file. If you must build the .csv file first, then it is debatable whether that overhead makes this approach lose.
  • Second best is multi-row INSERT statements: INSERT INTO t (a,b) VALUES (1,2), (2,3), (44,55), .... I recommend 1000 per statement, and COMMIT each statement. This is likely to get you past 1000 rows per second being inserted (see the sketch after this list).
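
A rough Laravel sketch of the second bullet, using the same hypothetical table t (a, b) as above; each DB::insert() call is one multi-row statement and, with autocommit on, one commit:

```php
use Illuminate\Support\Facades\DB;

foreach (array_chunk($rows, 1000) as $chunk) {
    // Build "(?, ?), (?, ?), ..." with one pair of placeholders per row.
    $placeholders = implode(', ', array_fill(0, count($chunk), '(?, ?)'));

    $bindings = [];
    foreach ($chunk as $row) {
        $bindings[] = $row['a'];
        $bindings[] = $row['b'];
    }

    DB::insert("INSERT INTO t (a, b) VALUES {$placeholders}", $bindings);
}
```

The query builder's DB::table('t')->insert($chunk) compiles to an equivalent multi-row statement, so either form gives the same batching effect.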

Another problem... Since each index is updated as the row is inserted, you may run into trouble with thrashing I/O to achieve this task. InnoDB automatically "delays" updates to non-unique secondary indexes (no need for INSERT DELAYED), but the work is eventually done. (So RAM size and innodb_buffer_pool_size come into play.)

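If it helps, here is a rough, hypothetical sketch (run from a tinker session or console command) of checking whether the buffer pool is large enough for the InnoDB data and indexes involved:

```php
use Illuminate\Support\Facades\DB;

// Current buffer pool size in bytes.
$pool = (int) DB::selectOne("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")->Value;

// Total InnoDB data + index size across the server's tables.
$dataAndIndexes = (int) DB::selectOne(
    "SELECT COALESCE(SUM(data_length + index_length), 0) AS bytes
     FROM information_schema.tables
     WHERE engine = 'InnoDB'"
)->bytes;

if ($dataAndIndexes > $pool) {
    // A commonly suggested starting point on a dedicated DB server is a buffer
    // pool of roughly 70% of RAM; it is set in the mysqld configuration, not from Laravel.
    logger()->warning('innodb_buffer_pool_size is smaller than InnoDB data + indexes');
}
```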

If the "thousands" of rows/second is a one-time task, then you can stop reading here. If you expect to do this continually 'forever', there are other issues to contend with. See High speed ingestion.
