I have a fairly large InnoDB table that contains about 10 million rows (and counting; it is expected to grow to about 20 times that size). Each row is not that large (131 B on average), but from time to time I have to delete a chunk of them, and that takes ages. This is the table structure:
CREATE TABLE `problematic_table` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`taxid` int(10) unsigned NOT NULL,
`blastdb_path` varchar(255) NOT NULL,
`query` char(32) NOT NULL,
`target` int(10) unsigned NOT NULL,
`score` double NOT NULL,
`evalue` varchar(100) NOT NULL,
`log_evalue` double NOT NULL DEFAULT '-999',
`start` int(10) unsigned DEFAULT NULL,
`end` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `taxid` (`taxid`),
KEY `query` (`query`),
KEY `target` (`target`),
KEY `log_evalue` (`log_evalue`)
) ENGINE=InnoDB AUTO_INCREMENT=7888676 DEFAULT CHARSET=latin1;
Queries that delete large chunks from the table are simply like this:
DELETE FROM problematic_table WHERE problematic_table.taxid = '57';
A query like this just took almost an hour to finish. I can imagine that the index rewriting overhead makes these queries very slow.
I am developing an application that will run on pre-existing databases. I most likely have no control over server variables unless I make changes to them mandatory (which I would prefer not to), so I'm afraid suggestions that change those are of little value.
I have tried to INSERT ... SELECT the rows that I don't want to delete into a temporary table and then drop the rest, but as the ratio of to-delete vs. to-keep shifts towards to-keep, this is no longer a useful solution.
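For illustration, that copy-and-swap approach looks roughly like this (a sketch only; keep_rows is a hypothetical table name, the taxid = 57 filter mirrors the DELETE above, and the final RENAME/DROP step assumes no concurrent writers):

-- Copy the rows to keep, then swap the new table in place of the old one.
CREATE TABLE keep_rows LIKE problematic_table;
INSERT INTO keep_rows SELECT * FROM problematic_table WHERE taxid <> 57;
RENAME TABLE problematic_table TO old_rows, keep_rows TO problematic_table;
DROP TABLE old_rows;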
This is a table that may see frequent INSERTs and SELECTs in the future, but no UPDATEs. Basically, it's a logging and reference table that needs to drop parts of its content from time to time.
Could I improve my indexes on this table by limiting their length? Would switching to MyISAM, which supports DISABLE KEYS during transactions, help? What else could I try to improve DELETE performance?
Edit: One such deletion would be on the order of one million rows.
3 Answers
#1
12
This solution can provide better performance once completed, but the process may take some time to implement.
A new BIT column can be added and defaulted to TRUE for "active" and FALSE for "inactive". If that's not enough states, you could use TINYINT with 256 possible values.
Adding this new column will probably take a long time, but once it's done, your updates should be much faster, as long as you do it off the PRIMARY as you do with your deletes and don't index this new column.
The reason why InnoDB takes so long to DELETE on such a massive table as yours is the clustered index. It physically orders your table based upon your PRIMARY KEY, the first UNIQUE key it finds, or whatever it can determine as an adequate substitute if it can't find either, so when one row is deleted, it physically reorders your entire table on disk for speed and defragmentation. So it's not the DELETE itself that takes so long; it's the physical reordering after that row is removed.
When you create a fixed-width column and update it instead of deleting, there's no need for physical reordering across your huge table, because the space consumed by a row and by the table itself is constant.
During off hours, a single DELETE can be used to remove the unnecessary rows. That operation will still be slow, but collectively much faster than deleting individual rows.
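A minimal sketch of this pattern (the column name active, the BIT(1) definition, and the taxid = 57 filter are illustrative assumptions, not part of the original answer):

-- Add a flag column; rows are "deleted" by flipping the flag
-- (the answer suggests not indexing this column).
ALTER TABLE problematic_table ADD COLUMN active BIT(1) NOT NULL DEFAULT b'1';

-- Instead of DELETE FROM problematic_table WHERE taxid = '57':
UPDATE problematic_table SET active = 0 WHERE taxid = 57;

-- Readers append AND active = 1 to their WHERE clauses.

-- During off hours, purge the flagged rows in a single pass.
DELETE FROM problematic_table WHERE active = 0;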
#2
21
I had a similar scenario with a table of 2 million rows and a DELETE statement that was supposed to delete around 100 thousand rows; it took around 10 minutes to do so.
After I checked the configuration, I found that MySQL Server was running with the default innodb_buffer_pool_size = 8 MB (!).
After a restart with innodb_buffer_pool_size = 1.5 GB, the same scenario took 10 seconds.
So it looks like there is a dependency on whether the "reordering of the table" can fit in the buffer pool or not.
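Checking and raising the setting might look like this (a sketch; the 1.5 GB value mirrors the answer, and the online resize only works on MySQL 5.7+, otherwise the value has to go into my.cnf followed by a restart):

-- Show the current buffer pool size (the value is in bytes).
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- MySQL 5.7+ can resize online; older versions need my.cnf + restart.
SET GLOBAL innodb_buffer_pool_size = 1610612736;  -- 1.5 GB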
#3
0
I solved a similar problem by using a stored procedure, thereby improving performance by a factor of several thousand.
My table had 33M rows and several indexes and I wanted to delete 10K rows. My DB was in Azure with no control over innodb_buffer_pool_size.
For simplicity I created a table tmp_id with only a primary id field:
CREATE TABLE `tmp_id` (
`id` bigint(20) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
)
I selected the set of ids I wanted to delete into tmp_id and ran delete from my_table where id in (select id from tmp_id); this did not complete in 12 hours. I then tried with only a single id in tmp_id, and it took 25 minutes. Doing delete from my_table where id = 1234 completed in a few milliseconds, so I decided to try doing that in a procedure instead:
CREATE PROCEDURE `delete_ids_in_tmp`()
BEGIN
    -- Walk tmp_id with a cursor and delete one primary-key value at a
    -- time, so each individual DELETE stays cheap.
    declare finished integer default 0;
    declare v_id bigint(20);
    declare cur1 cursor for select id from tmp_id;
    declare continue handler for not found set finished = 1;
    open cur1;
    igmLoop: loop
        fetch cur1 into v_id;
        if finished = 1 then leave igmLoop; end if;
        delete from problematic_table where id = v_id;
    end loop igmLoop;
    close cur1;
END
Now, calling delete_ids_in_tmp() deleted all 10K rows in less than a minute.
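End to end, the usage is roughly as follows (a sketch; the taxid = 57 filter is borrowed from the question and only illustrates how the ids to delete might be collected):

-- Collect the primary keys of the rows to delete.
INSERT INTO tmp_id SELECT id FROM problematic_table WHERE taxid = 57;

-- Delete them one primary-key lookup at a time.
CALL delete_ids_in_tmp();

-- Empty the scratch table for the next batch.
TRUNCATE TABLE tmp_id;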