Query times out from the web application, but runs fine from Management Studio.

Time: 2021-11-24 03:53:08

This is a question I asked on another forum which received some decent answers, but I wanted to see if anyone here has more insight.

The problem is that you have one of your pages in a web application timing out when it gets to a stored procedure call, so you use Sql Profiler, or your application trace logs, to find the query, and you paste it into Management Studio to figure out why it's running slow. But you run it from there and it just blazes along, returning in less than a second each time.

My particular case was using ASP.NET 2.0 and Sql Server 2005, but I think the problem could apply to any RDBMS system.

8 Answers

#1


26  

This is what I've learned so far from my research.

.NET sends in connection settings that are not the same as what you get when you log in to Management Studio. Here is what you see if you sniff the connection with Sql Profiler:

-- network protocol: TCP/IP  
set quoted_identifier off  
set arithabort off  
set numeric_roundabort off  
set ansi_warnings on  
set ansi_padding on  
set ansi_nulls off  
set concat_null_yields_null on  
set cursor_close_on_commit off  
set implicit_transactions off  
set language us_english  
set dateformat mdy  
set datefirst 7  
set transaction isolation level read committed  

I am now pasting those settings in above every query that I run when logged in to SQL Server, to make sure the settings are the same.

For this case, I tried each setting individually, after disconnecting and reconnecting, and found that changing arithabort from off to on reduced the problem query from 90 seconds to 1 second.
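
For illustration only (the procedure name and parameter are made up), the two timings could be reproduced in Management Studio by matching the application's setting before calling the procedure:

-- With the application's setting, the query took ~90 seconds:
SET ARITHABORT OFF;
EXEC dbo.uspMonsterReport @RegionID = 42;

-- With the Management Studio default, it returned in about a second:
SET ARITHABORT ON;
EXEC dbo.uspMonsterReport @RegionID = 42;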

The most probable explanation is related to parameter sniffing, which is a technique Sql Server uses to pick what it thinks is the most effective query plan. When you change one of the connection settings, the query optimizer might choose a different plan, and in this case, it apparently chose a bad one.

But I'm not totally convinced of this. I have tried comparing the actual query plans after changing this setting and I have yet to see the diff show any changes.
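
One way to check whether the server is actually holding two different plans for the procedure (one per combination of SET options) is to look at the plan cache itself rather than at the plan you get in Management Studio. A rough sketch, with the procedure name assumed:

-- Each distinct set_options value gets its own cached plan for the same procedure.
SELECT cp.plan_handle, cp.usecounts, pa.value AS set_options, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.objectid = OBJECT_ID('dbo.uspMonsterReport')  -- hypothetical name
  AND pa.attribute = 'set_options';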

Is there something else about the arithabort setting that might cause a query to run slowly in some cases?

The solution seemed simple: just put SET ARITHABORT ON at the top of the stored procedure. But this could lead to the opposite problem: change the query parameters and suddenly it runs faster with 'off' than with 'on'.
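
That "simple" fix would look roughly like this (procedure name and parameter are placeholders, body abbreviated):

ALTER PROCEDURE dbo.uspMonsterReport
    @RegionID int
AS
BEGIN
    SET ARITHABORT ON;  -- match the setting the fast plan was compiled under
    -- ... original report query here ...
END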

For the time being I am running the procedure 'with recompile' to make sure the plan gets regenerated each time. It's Ok for this particular report, since it takes maybe a second to recompile, and this isn't too noticeable on a report that takes 1-10 seconds to return (it's a monster).
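
Calling it that way looks like this (hypothetical procedure name again):

-- Ask for a fresh plan on this call; nothing is cached for reuse.
EXEC dbo.uspMonsterReport @RegionID = 42 WITH RECOMPILE;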

But it's not an option for other queries that run much more frequently and need to return as quickly as possible, in just a few milliseconds.

#2


6  

I've had similar problems. Try setting the "WITH RECOMPILE" option on the sproc create to force the system to recompute the execution plan each time it is called. Sometimes the query processor gets confused in complex stored procedures with lots of branching or CASE statements and just pulls a really sub-optimal execution plan. If that seems to "fix" the problem, you will probably need to verify that statistics are up to date and/or break down the sproc.
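
A sketch of both suggestions, with made-up object names:

-- Recompile on every call instead of reusing a cached plan.
ALTER PROCEDURE dbo.uspMonsterReport
    @RegionID int
WITH RECOMPILE
AS
BEGIN
    -- ... body unchanged ...
END
GO

-- And make sure the statistics the optimizer relies on are current.
UPDATE STATISTICS dbo.Orders;   -- per table (hypothetical), or
EXEC sp_updatestats;            -- for the whole database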

You can also confirm this by profiling the sproc. When you execute it from SQL Management Studio, how does the IO compare to when you profile it from the ASP.NET application? If they vary a lot, it just reinforces that it's pulling a bad execution plan.
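
To get comparable IO numbers from Management Studio, you can turn on the statistics output before running the procedure (hypothetical call shown); the reads then appear in the Messages tab and can be set against what the Profiler trace shows for the application:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.uspMonsterReport @RegionID = 42;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;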

#3


2  

Have you turned on ASP.NET tracing yet? I've had an instance where it wasn't the SQL stored procedure itself that was the problem, it was the fact that the procedure returned 5000 rows and the app was attempting to create databound ListItems with those 5000 items that was causing the problem.

You might also look at the execution times between the web app functions in the trace to help track things down.

#4


1  

Test this out on a staging box first; it changes the setting at the server level for SQL Server:

declare @option int
set @option = @@options | 64
exec sp_configure 'user options', @option
RECONFIGURE
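
Bit 64 in 'user options' corresponds to ARITHABORT, so this makes ARITHABORT ON part of the server-wide default user options for new connections (clients can still override it). To check what a given connection ended up with:

-- Non-zero means ARITHABORT is ON for the current connection.
SELECT @@OPTIONS & 64 AS arithabort_bit;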

#5


1  

I had the same problem with SQL Reporting Services. Try checking the types of the variables; I was sending a different type of variable to SQL, like sending a varchar where it should have been an integer, or something like that. After I synchronized the variable types between Reporting Services and the stored procedure on the SQL side, I solved the problem.
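
A hedged illustration of the kind of mismatch meant here (table, column and parameter are invented): when the parameter's type outranks the column's, SQL Server converts the column on every row and the query typically scans instead of seeking on the index.

-- dbo.Customers.CustomerCode is varchar(20) and indexed (hypothetical schema).
DECLARE @code nvarchar(20);
SET @code = N'ABC123';

-- nvarchar outranks varchar, so the column side gets converted;
-- declaring @code as varchar(20) lets the index be used normally.
SELECT CustomerID, CustomerCode
FROM dbo.Customers
WHERE CustomerCode = @code;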

#6


0  

Try changing the SelectCommand timeout value:

DataAdapter.SelectCommand.CommandTimeout = 120;

#7


0  

You could try using the sp_who2 command to see what the process in question is doing. This will show you whether it's blocked by another process, or using up an excessive amount of CPU and/or IO time.
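
For example:

-- List only sessions that are currently doing work; the BlkBy column shows
-- which spid, if any, is blocking each one.
EXEC sp_who2 'active';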

#8


0  

We had the same issue and here's what we found out.

Our database log size was being kept at the default (814 MB) and auto-growth was 10%. On the server, maximum server memory was kept at the default setting as well (2147483647 MB).

When our log got full and needed to grow, it used all the memory on the server and there was nothing left for the code to run with, so it timed out. What we ended up doing was setting the database log file's initial size to 1 MB and maximum server memory to 2048 MB. This instantly fixed our problem. Of course, you can change these two properties to fit your needs, but this is an idea for anyone running into the timeout when executing a stored procedure via code while it runs super fast in SSMS and the solutions above do not help.
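
A rough sketch of the server-memory half of that change, plus an illustrative log-file growth adjustment (database and file names are placeholders):

-- Cap SQL Server's memory so the OS and other processes keep some headroom.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;

-- Log file growth can be adjusted with ALTER DATABASE ... MODIFY FILE, e.g.:
ALTER DATABASE MyAppDb
MODIFY FILE (NAME = MyAppDb_log, FILEGROWTH = 64MB);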
