Query is fast in SQL Server Management Studio but slow in the application
We have a desktop app which runs a query against SQL Server. In a particular scenario, this query was taking about 30 seconds, locking loads of other queries out and then timing out. This seemed unusual, but just to check, we grabbed the SQL from sp_who2 and ran it in SSMS, where it ran virtually instantly (and it was returning quite a few rows).
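(For reference, something along these lines is enough to pull that statement out: run sp_who2 to find the busy session, then ask for its input buffer. The session id below is just an example value taken from the sp_who2 output.)

```sql
-- Find the session that is hogging the database, then grab the SQL it is running.
EXEC sp_who2;

-- 57 is an example spid copied from the sp_who2 output, not a real value.
DBCC INPUTBUFFER(57);
```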
Whatever we tried, the app was always slow and SSMS was always fast. We reverted the application to an earlier version, which ran a different query, and that executed quickly.
Really confusing.
What we worked out:
1) It was something to do with the query
2) SSMS wasn't doing exactly what the application was doing, even though the query was the same
It can be hard to pull these apart until you understand what the problem is, at which point it seems kind of obvious. It also involves something called parameter sniffing, which is how SQL Server (and other databases) tries to be clever when optimising your query.
The real issue here related to two things. Firstly, there was an inefficiency in the new query - there must have been, right? Otherwise it wouldn't have been slow. Secondly, parameter sniffing was making it worse while, paradoxically, causing SSMS to hide the original problem.
What is the problem here?
The query problem starts with the simple fact that the changed query does not hit a fully covering index. What happens when this occurs? It depends. If the query optimizer decides that your query will return a few rows, it will do an index seek followed by a key lookup. If it decides that the query will return lots of rows, it won't bother going backwards and forwards to the clustered index but will instead do a clustered index scan.
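To make that concrete, here is a minimal sketch against a hypothetical Orders table (the table, column and index names are illustrative, not our actual schema). The nonclustered index only contains CustomerId, so the query below is not covered and the optimizer has to choose between a seek plus key lookup and a clustered index scan:

```sql
-- Hypothetical schema: clustered on OrderId, nonclustered index on CustomerId only.
CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY PRIMARY KEY CLUSTERED,
    CustomerId INT NOT NULL,
    OrderDate  DATETIME NOT NULL,
    Total      DECIMAL(18, 2) NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);

-- Not covered: OrderDate and Total live only in the clustered index, so for a
-- customer with few orders the optimizer tends to seek IX_Orders_CustomerId and
-- do key lookups, while for a customer with many orders it scans the clustered index.
DECLARE @CustomerId INT = 42;

SELECT OrderId, OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = @CustomerId;
```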
This makes sense, so the question that follows is: why does it appear to do something different in the app and in SSMS?
This is where parameter sniffing comes in. It doesn't sound great, but if SQL Server didn't cache execution plans, then each time you ran a query the optimizer would have to compile it, which would be a noticeable performance hit for every single select. If, however, it compiles the plan once and keeps it in memory, it can skip that step and immediately reuse the plan. The icky part is how it decides whether to plan for a few rows being returned or for lots of rows being returned: if it caches the wrong choice, the plan will only be optimised for some types of query.
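As a rough sketch of what the application was effectively doing (again using the hypothetical Orders table), a parameterised call like this is compiled once, using whatever value is supplied on the first execution, and the cached plan is then reused for every later call:

```sql
-- The first execution compiles and caches the plan; later executions reuse it,
-- even if they pass a value with a completely different number of matching rows.
EXEC sp_executesql
    N'SELECT OrderId, OrderDate, Total
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId;',
    N'@CustomerId INT',
    @CustomerId = 42;   -- the value sniffed at compile time shapes the cached plan
```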
In our example, the query can return anything from zero to thousands of rows, so there is a lot to guess up front. Quite simply, the optimizer takes the first parameter value the query is called with and plans around that! If that first value returns no rows, the optimizer takes the cautious approach and plans for a small result; if it returns many, the query is optimised for a large result instead. This was the underlying issue, and it was fixed by applying a covering index (sketched below), which removed the need to choose between a key lookup and a clustered index scan.
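Sketched against the same hypothetical table, the fix looks something like this: include the extra columns in the nonclustered index so the query is fully covered and there is no key lookup to trade off against a scan:

```sql
-- Covering index: the key lookup disappears, so the same plan is good whether
-- the sniffed parameter value returns zero rows or thousands.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);
```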
This leaves the question of why SSMS was so much faster. Quite simply, there is one setting that differs by default between SSMS and ADO, and it causes SQL Server to cache a separate plan! What plan? One that works really well with the parameter you are testing with - in other words, SSMS works out the best plan for your particular scenario, uses it and runs very quickly.
That setting is ARITHABORT: SSMS turns it on by default, while ADO connections leave it off, and because the SET options form part of the plan-cache key the two end up with different cached plans. Going into the SSMS options and disabling it by default under Query Execution => SQL Server => Advanced will ensure that you are at least playing on a level playing field; you should then be able to recreate the issue easily and see in the plan that you are doing a slow key lookup for a result set with many rows!
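If you just want to reproduce the application's behaviour for a single SSMS session without changing the defaults, you can turn the setting off inline (reusing the hypothetical query from above). Because the SET options form part of the plan-cache key, the session then shares, or compiles, the same plan the ADO connection uses:

```sql
-- Match the ADO default for this session only.
SET ARITHABORT OFF;

-- Run the query exactly as the application does; try a parameter value that
-- returns many rows to see the expensive key lookups in the actual plan.
EXEC sp_executesql
    N'SELECT OrderId, OrderDate, Total
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId;',
    N'@CustomerId INT',
    @CustomerId = 42;
```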