Optimizing MySQL for ALTER TABLE of InnoDB tables

Date: 2022-09-24 10:33:55

Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G memory and a RAID10 config.

I have some experience with MySQL tuning, but this usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO ... SELECT ... FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit).

The schema change we are planning to do is adding an integer column to all tables and making it the primary key, instead of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option.
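
For reference, a minimal sketch of that kind of change, assuming a hypothetical table `orders` whose current primary key is a column called `old_code` (the real statement depends on your schema):

    ALTER TABLE orders
        ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        DROP PRIMARY KEY,
        ADD PRIMARY KEY (id),
        ADD UNIQUE KEY uk_old_code (old_code);  -- keep the old key enforceable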

What would be the ideal settings to get this task done as quickly as possible?

6 Answers

#1


15  

You need to think about your requirements a little more carefully.

At the simplest level, the "fastest" way to get the table changed is to do it in as few ALTER TABLE statements as possible, preferably one. This is because MySQL copies a table's data to change the schema, and making fifteen changes while making a single copy is obviously (and really is) faster than copying the table fifteen times, making one change at a time.

But I suspect you're asking how to do this change with the least amount of downtime. The way I would do that is to basically synthesize the way a non-blocking ALTER TABLE would work. But it has some additional requirements:

  1. You need a way to track added and changed data, such as with a "modified" date field for the latter, or an AUTO_INCREMENT field for the former.
  2. You need space to have two copies of your table in the database.
  3. You need a time period during which alterations to the table won't get too far ahead of a snapshot.

The basic technique is as you suggested, i.e. using an INSERT INTO ... SELECT .... At least you're in front because you're starting with an InnoDB table, so the SELECT won't block. I recommend doing the ALTER TABLE on the new, empty table, which will save MySQL copying all the data again; it also means you need to list all the fields correctly in the INSERT INTO ... SELECT ... statement. Then you can do a simple RENAME statement to swap it over. Then you need to do another INSERT INTO ... SELECT ... WHERE ... and perhaps an UPDATE ... INNER JOIN ... WHERE ... to grab all the modified data. You need to do the INSERT and UPDATE quickly or your code will start adding new rows and updates to your snapshot, which will interfere with your update. (You won't have this problem if you can put your app into maintenance mode for a few minutes from before the RENAME.)
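
Putting that together, here is a hedged sketch of the copy-and-swap approach; the table names (`mytable`, `mytable_new`, `mytable_old`), the column list, the `modified_at` tracking column and the `@snapshot_taken_at` marker are all assumptions standing in for your real schema:

    -- Build the new structure once, on an empty table:
    CREATE TABLE mytable_new LIKE mytable;
    ALTER TABLE mytable_new
        ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        DROP PRIMARY KEY,
        ADD PRIMARY KEY (id);

    -- Bulk copy the snapshot (list the columns explicitly):
    INSERT INTO mytable_new (old_key, col_a, modified_at)
        SELECT old_key, col_a, modified_at FROM mytable;

    -- Swap the tables; RENAME TABLE is atomic:
    RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;

    -- Catch up rows added or changed since the snapshot was taken:
    INSERT INTO mytable (old_key, col_a, modified_at)
        SELECT o.old_key, o.col_a, o.modified_at
        FROM mytable_old o
        WHERE o.modified_at > @snapshot_taken_at
          AND NOT EXISTS (SELECT 1 FROM mytable n WHERE n.old_key = o.old_key);

    UPDATE mytable n
        INNER JOIN mytable_old o ON n.old_key = o.old_key
        SET n.col_a = o.col_a
        WHERE o.modified_at > @snapshot_taken_at;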

Apart from that, there are some key and buffer related settings you can change for just one session that may help the main data move. Things like read_rnd_buffer_size and read_buffer_size would be useful to increase.
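
For example (illustrative values only; these are session-scoped, so they revert when the connection closes):

    SET SESSION read_buffer_size     = 64 * 1024 * 1024;
    SET SESSION read_rnd_buffer_size = 64 * 1024 * 1024;
    -- ... run the big INSERT INTO ... SELECT in this same session ...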

#2


15  

You might want to look at pt-online-schema-change from Percona toolkit. Essentially what it does is:

  • Copies original table structure, runs ALTER.
  • Copies rows from old table to newly created one.
  • Uses triggers to track and sync changes while copying.
  • When everything is finished it swaps tables by renaming both.
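
A hedged example of what an invocation might look like (host, database, table and column names are made up; check the tool's documentation for the version you install before running anything against production):

    # Dry run first to see what the tool would do:
    pt-online-schema-change --alter "ADD COLUMN new_id INT UNSIGNED NOT NULL" \
        h=localhost,D=mydb,t=mytable --dry-run

    # Then the real run:
    pt-online-schema-change --alter "ADD COLUMN new_id INT UNSIGNED NOT NULL" \
        h=localhost,D=mydb,t=mytable --execute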

Works very well for single instance databases, but might be quite tricky if you use replication and you can't afford stopping slaves and rebuilding them later.

There's also a nice webinar about this here.

PS: I know it's an old question, just answering in case someone hits this via search engine.

#3


12  

  1. Set up a slave.
  2. Stop replication.
  3. Run the ALTER on the slave.
  4. Let the slave catch up with the master.
  5. Swap master and slave, so the slave becomes the production server with the changed structure and minimum downtime. (A rough SQL sketch of steps 2 to 4 follows.)

#4


11  

Unfortunately, this is not always as simple as staticsan leads on in his answer. Creating the new table while online and moving the data over is easy enough, and doing the cleanup while in maintenance mode is also doable enough; however, the MySQL RENAME operation automatically manipulates any foreign key references to your old table. What this means is that any foreign key references to the original table will still point to whatever you rename the table to.

So, if you have any foreign key references to the table you're trying to alter, you're stuck either altering those tables to replace the reference with one to your new table or, worse, if a referencing table is large, repeating the whole process with large table number two.
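
Before the swap it is worth listing which tables still reference the old one; a query along these lines works on MySQL 5.0 and later (schema and table names are placeholders):

    SELECT table_name, column_name, constraint_name
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE referenced_table_schema = 'mydb'
      AND referenced_table_name   = 'mytable';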

Another approach that has worked for us in the past has been to juggle a set of Mysql replicas handling the alter. I'm not the best person to speak to the process, but it basically consists of breaking replication to one slave, running the patch on that instance, turning replication back on once the alter table is completed so that it catches up on replication. Once the replication catches up, you put the site into maintenance mode (if necessary) to switch from your master to this new patched slave as the new master database.

The only thing I can't remember is exactly when you point the other slaves at the new master so that they also get the alter applied. One caveat to this process: we typically use it to roll alter patches before the code needs the change, or after the code has changed to no longer reference the columns/keys.

#5


5  

I tested various strategies to speed up one ALTER TABLE. Eventually I got about a 10x speed increase in my particular case. The results may or may not apply to your situation. However, based on this I would suggest experimenting with the InnoDB log file/buffer size parameters.

In short, only increasing innodb_log_file_size and innodb_log_buffer_size had a measurable effect (be careful: changing innodb_log_file_size is risky; see below for more info).
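
For reference, the relevant my.cnf lines would look something like this (values copied from the test runs below, not a recommendation; innodb_log_file_size is not a dynamic variable, so changing it requires a clean shutdown and removal of the old ib_logfile* files, as described in the link below):

    [mysqld]
    innodb_log_file_size   = 200M
    innodb_log_buffer_size = 8M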

Based on the rough write data rate (iostat) and CPU activity, the bottleneck was IO-based, but not raw data throughput. In the faster ~500 s runs the write throughput is at least in the same ballpark as what you would expect from the hard disk.

Tried performance optimizations:

Changing innodb_log_file_size can be dangerous. See http://www.mysqlperformanceblog.com/2011/07/09/how-to-change-innodb_log_file_size-safely/ The technique (file move) explained in the link worked nicely in my case.

Also see http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/ and http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/ for information about innodb and tuning log sizes. One drawback of larger log files is longer recovery time after crash.

Test runs and rough timings:

  • Plain LOAD DATA into a freshly created table: 6500 s
  • LOAD DATA with innodb_log_file_size=200M, innodb_log_buffer_size=8M, innodb_buffer_pool_size=2200M, autocommit=0, unique_checks=0, foreign_key_checks=0: 500 s
  • LOAD DATA with innodb_log_file_size=200M, innodb_log_buffer_size=8M: 500 s
  • Equivalent straight ALTER TABLE with innodb_log_file_size=200M, innodb_log_buffer_size=8M: 500 s

Testing details: Table: InnoDB, 6M rows, 2.8G on disk, single file (innodb_file_per_table option), primary key is 1 integer, +2 unique constraints/indices, 8 columns, avg. row length 218 bytes. Server: Ubuntu 12.04, x86_64, virtual machine, 8 cores, 16GB, SATA consumer-grade disk, no RAID, no database activity, minuscule other process activity, minuscule activity in other and much smaller virtual machines. MySQL 5.1.53. The initial server config is pretty much default except for an increased innodb_buffer_pool_size of 1400M. The ALTER TABLE adds 2 small columns. I didn't clock the raw ALTER TABLE, but instead experimented with an equivalent LOAD DATA INFILE statement; finally I did the straight ALTER TABLE and got a comparable result.
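
Roughly the kind of session setup those load-data timings imply (a sketch; the file path and table name are placeholders):

    SET SESSION autocommit = 0;
    SET SESSION unique_checks = 0;
    SET SESSION foreign_key_checks = 0;

    LOAD DATA INFILE '/tmp/mytable.dump' INTO TABLE mytable_new;

    COMMIT;
    SET SESSION unique_checks = 1;
    SET SESSION foreign_key_checks = 1;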

This question is related to at least the following questions:

#6


-4  

I really don't know how to optimize that, but it's usually a good practice to put the site in offline mode before doing such updates.

Then, you can run your DB scripts at, say, 3 am, so it shouldn't matter much if downtime's a bit longer than ideal.
