I'm using the following queries to extract the top 100 and top 101 rows from the DB and getting the elapsed times below, which are completely different (the second query is ~8 times slower than the first):
SELECT TOP (100) *
FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId
ORDER BY createDate DESC
GO
SQL Server Execution Times: CPU time = 187 ms, elapsed time = 202 ms.
SELECT TOP (101) *
FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId
ORDER BY createDate DESC
GO
SQL Server Execution Times: CPU time = 266 ms, elapsed time = 1644 ms.
Execution plan for the first two cases:
But if I get rid of the @accountId variable, I get the following results, which are approximately equal to each other and more than twice as fast as the first query in this question.
SELECT TOP (100) *
FROM PhotoLike WHERE photoAccountId=10 AND accountId<>10
ORDER BY createDate DESC
GO
SQL Server Execution Times: CPU time = 358 ms, elapsed time = 90 ms.
SELECT TOP (101) *
FROM PhotoLike WHERE photoAccountId=10 AND accountId<>10
ORDER BY createDate DESC
GO
SQL Server Execution Times: CPU time = 452 ms, elapsed time = 93 ms.
Execution plan for the second two cases:
Why does this happen, and how can I improve performance when using variables?
UPDATE
Added execution plans.
3 Answers
#1
2
There are a couple of things going on here.
When you use variables, SQL Server doesn't sniff the values at all unless you also add OPTION (RECOMPILE).
The guessed estimate for the number of rows matching photoAccountId=@accountId is much smaller than the actual count. (Note the thick line coming out of the index seek in the second plan and the decision to use a parallel plan.)
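If you want to see where that fixed guess comes from, you can inspect the density vector of the statistics on photoAccountId; for an unsniffed variable, the estimate for the equality predicate is roughly the "All density" value multiplied by the table's row count. The statistics name below is an assumption:

-- Sketch only: replace the name with whatever index/statistics actually covers photoAccountId.
DBCC SHOW_STATISTICS ('dbo.PhotoLike', IX_PhotoLike_photoAccountId) WITH DENSITY_VECTOR;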
Also, TOP 100 / TOP 101 is the cut-off point between the TOP N sort using an algorithm that only needs enough space to sort 100 rows and performing a full sort. The inaccurate row count estimate likely means there is insufficient memory allocated for the full sort and it is spilling to tempdb.
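One way to confirm the spill, assuming you can run the query interactively, is to capture the actual execution plan and check the Sort operator for a tempdb spill warning and its memory grant; a minimal sketch with an assumed value:

DECLARE @accountId int = 10;  -- assumed value for illustration
SET STATISTICS XML ON;        -- returns the actual plan, where any sort spill warning is visible
SELECT TOP (101) *
FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId
ORDER BY createDate DESC;
SET STATISTICS XML OFF;
GO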
Simply adding OPTION (RECOMPILE) to the query with variables will likely improve things somewhat, though it looks as though even the "fast" plan is doing many key lookups that could be avoided with different indexing.
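A minimal sketch of both suggestions follows, with assumed names; since the query uses SELECT *, a truly covering index would need to INCLUDE every column the query returns:

DECLARE @accountId int = 10;  -- assumed value for illustration
SELECT TOP (101) *
FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId
ORDER BY createDate DESC
OPTION (RECOMPILE);
GO
-- Assumed index to support the seek on photoAccountId and the ORDER BY, avoiding key lookups;
-- extend the INCLUDE list with the remaining selected columns as needed.
CREATE NONCLUSTERED INDEX IX_PhotoLike_photoAccountId_createDate
ON dbo.PhotoLike (photoAccountId, createDate DESC)
INCLUDE (accountId);
GO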
#2
0
I wonder if this could be parameter sniffing related. How fast does the following query go?
DECLARE @accountIdParam int;
SELECT @accountIdParam = @accountId;
SELECT TOP (101) *
FROM PhotoLike WHERE photoAccountId=@accountIdParam AND accountId<>@accountIdParam
ORDER BY createDate DESC
GO
#3
0
If you can, you should create a clustered index based on the accountId field of your table.
Since you are testing an inequality, it should perform better:
CREATE UNIQUE CLUSTERED INDEX [IX_MyIndexName] ON [dbo].[PhotoLike] (accountId DESC, createDate DESC, photoAccountId DESC)