Why is the SQL import so slow?

Time: 2022-01-28 05:44:11

I have an SQL file containing two tables with around 600,000 rows altogether. Yesterday, I tried to import the file into my MySQL database on Fedora 16, and it took over 2 hours to import the file. On my Windows PC it took 7 minutes. My Linux and Windows machines have exactly the same hardware. A couple of my friends tried it too, and they had a similar experience.


The command we were using was: mysql -u root database_name < sql_file.sql.


Why is there such a difference in speed?


2 Answers

#1 (66 votes)

My bet is that Fedora 16 is honoring the transaction/sync semantics and Windows is not. If you do the math, 600,000 updates in two hours is 5,000 per minute. That's the same order of magnitude as a disk's rotation rate.

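Spelling out the arithmetic behind that estimate (the 7,200 RPM figure is an assumption for a typical consumer disk, not something stated in the question):

    600,000 rows / 120 minutes = 5,000 commits per minute
    7,200 RPM disk             = 7,200 rotations per minute

With autocommit on, each INSERT waits for its own sync to disk, and a synced write can complete roughly once per rotation, so the observed rate ends up in the same ballpark as the spindle speed.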

You can try adding SET autocommit=0; to the beginning of your import file and COMMIT; to the end. See this page for more information.

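A minimal sketch of what the wrapped dump file could look like (the table and column names here are placeholders, not from the original file):

    SET autocommit=0;

    INSERT INTO table1 (id, name) VALUES (1, 'alpha');
    INSERT INTO table2 (id, name) VALUES (1, 'beta');
    -- ... remaining INSERT statements from the dump ...

    COMMIT;

With autocommit off, all the inserts are flushed to disk in a single transaction at the final COMMIT instead of one sync per row.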

#2 (3 votes)

Why not export the .sql file with bulk-insert friendly options and then import it? Try these options when taking the backup with mysqldump (a full example follows the note below):


--extended-insert: use multiple-row insert statements


--quick: do not buffer row data; useful when the tables are large


Note: Make sure you increase max_allowed_packet to 32M or more in the my.cnf file before generating the .sql file.

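A rough sketch of the whole workflow under those suggestions (the database and file names are taken from the question; the my.cnf value shown is just the 32M minimum mentioned above):

    # my.cnf: raise the packet limit before generating the dump
    [mysqld]
    max_allowed_packet=32M

    # dump with multi-row INSERTs (--extended-insert) and
    # without buffering whole result sets in memory (--quick)
    mysqldump --extended-insert --quick -u root database_name > sql_file.sql

    # import as before
    mysql -u root database_name < sql_file.sql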
