MySQL: efficiently copying all records from one table to another

Time: 2022-07-17 19:20:05

Is there a more efficient, less laborious way of copying all records from one table to another than doing this:


INSERT INTO product_backup SELECT * FROM product

Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns. I'd like to point out that this is not my database design; I have inherited a legacy system.


5 Answers

#1


10  

I think this is the best way to copy records from one table to another. This way, you also preserve the existing indexes of the target table.


#2


13  

There's just one thing you're missing. Especially if you're using InnoDB, you want to explicitly add an ORDER BY clause to your SELECT statement to ensure you're inserting rows in primary key (clustered index) order:


INSERT INTO product_backup SELECT * FROM product ORDER BY product_id

Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.

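A sketch of that index-dropping approach (the index name `idx_product_name` and the column `name` are hypothetical; substitute the actual secondary indexes on your backup table):

```sql
-- Drop a secondary index before the bulk copy so rows insert faster
ALTER TABLE product_backup DROP INDEX idx_product_name;

INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;

-- Recreate the index afterwards only if it is actually needed
ALTER TABLE product_backup ADD INDEX idx_product_name (name);
```

Maintaining secondary indexes row by row during a bulk insert is typically more expensive than rebuilding them once at the end.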

Finally, if you are using InnoDB, reduce the number of row locks that are required and just explicitly lock both tables:


LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;

The locking probably won't make a huge difference, as row locking is very fast (though not as fast as table locks), but since you asked, it's worth mentioning.


#3


4  

mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql

This dumps the specified table with a DROP TABLE statement included, so the existing table is dropped when you import it. Then run:


mysql db_name < filepath/file_name.sql

#4


2  

DROP the destination table:


DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
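Note that `CREATE TABLE ... AS SELECT` copies only the column data, not indexes or constraints. If the backup table should keep the same indexes as the source, a sketch using `CREATE TABLE ... LIKE` instead:

```sql
-- LIKE copies column definitions and index definitions,
-- unlike AS SELECT, which copies only the data
DROP TABLE IF EXISTS product_backup;
CREATE TABLE product_backup LIKE product;
INSERT INTO product_backup SELECT * FROM product;
```

This gives a structurally identical copy, at the cost of maintaining the indexes during the insert.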

#5


1  

I don't think this will be worthwhile for a 50k-row table, but: if you have a database dump, you can reload the table from it. Since you want to load the data into a different table, you can change the table name in the dump with a sed command. Here are some hints: http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html


An alternative (depending on your design) would be to use triggers on inserts into the original table, so that the duplicate table receives the data as well.

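A minimal sketch of that trigger approach (trigger name and column list are hypothetical; you would also need similar triggers for UPDATE and DELETE to keep the copy in sync):

```sql
DELIMITER //
CREATE TRIGGER product_after_insert
AFTER INSERT ON product
FOR EACH ROW
BEGIN
  -- List all 31 columns explicitly in practice; only the key is shown here
  INSERT INTO product_backup (product_id)
  VALUES (NEW.product_id);
END//
DELIMITER ;
```

This keeps the duplicate table current continuously instead of copying in bulk, but it adds overhead to every write on the original table.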

And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration or in a daily dump master/load slave fashion.

