DBA Diaries

Thoughts and experiences of a DBA working with SQL Server and MySQL

Using exec sp_who2 to help with SQL Server troubleshooting

Posted on May 20, 2012 Written by Andy Hayes 4 Comments

If you haven’t used sp_who2 before, it is a great utility to help diagnose problems with your database application.

What is sp_who2?

It’s a stored procedure installed with SQL Server which, when run, outputs a row for each “SPID”. SPID stands for Server Process ID, and one is created every time an application opens a connection to SQL Server.

Each row contains a number of useful columns, including the status of the SPID, its resource usage in terms of CPU and I/O, and which login is executing the command.

Permissions needed for sp_who2

A login can run sp_who2 and obtain information about its own connections. For a full list of SPIDs, the login needs either membership of the sysadmin role or the VIEW SERVER STATE permission.

Using sp_who2 to help identify blocking queries

Let’s say, for example, that the phone rings and everyone in the department using the sales system is complaining that their copy of the application is freezing when trying to access the customer sales order data.

So for this blocking demo, I have created a simple table in my database called “Orders” and I am going to start a transaction and leave it open without committing or rolling it back.
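For reference, the “Orders” table might look something like the script below. This is only a sketch; the column definitions are assumptions inferred from the INSERT statement that follows.

CREATE TABLE Orders
(
OrderID INT PRIMARY KEY IDENTITY(1,1), -- assumed surrogate key
AccountID INT NOT NULL,
DatePlaced DATETIME NOT NULL,
Amount DECIMAL(10,2) NOT NULL,
Currency VARCHAR(3) NOT NULL
);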

BEGIN TRAN
INSERT INTO Orders(AccountID, DatePlaced, Amount, Currency)
VALUES(1, GETDATE(), 100, '$');

Now I will try to read the Orders table from another session

SELECT * FROM Orders;

and again via another query…

SELECT * FROM Orders;

Now I will run exec sp_who2 to check what is happening to these connections. It tells me that I have two blocked SPIDs (55 and 56) and that both are being blocked by SPID 54.
(Screenshot: using exec sp_who2 to identify blocked queries)
The DBA now has to decide how best to proceed with resolving this problem based on the cause and effect of taking action.

They will try to work out what the lead blocker is doing. One way to do this is to run “DBCC INPUTBUFFER(SPID)”, where SPID is the number of the connection to be analysed; the results tell the DBA which command/query that SPID was executing.
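For example, to see what the lead blocker in the demo (SPID 54) was running:

DBCC INPUTBUFFER(54);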

Instead of using sp_who2, another approach is to query the sys.dm_exec_requests DMV. I’m going to look at DMVs in more detail in a future post.
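As a taster, a query along these lines (only a sketch, not the full treatment) lists the sessions that are currently blocked and which session is blocking them:

SELECT session_id, blocking_session_id, status, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;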

In my demo it is easy to resolve this as I can kill the connection, force a commit or force a rollback and it will be instantly resolved.

It might not be as clear cut as this in another scenario. Let’s say that a process running a large update has been going for some time and is causing the blocking problem described in the demo.

When the DBA views the results of sp_who2 and notices that the process has been running for many hours, they know that by killing the process, a rollback has to finish before the resource will become available again for other queries to be able to complete.

The rollback could also take many hours in this instance and so such decisions cannot be taken lightly. The solution will vary depending on the scenario.
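If the decision is made to kill the lead blocker, the progress of the resulting rollback can be monitored. A minimal sketch using the demo’s SPID 54:

KILL 54; -- terminates the session; any open transaction is rolled back
KILL 54 WITH STATUSONLY; -- reports the estimated rollback completion percentage and time remaining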

sp_who2 also takes an optional SPID parameter, so you can check a specific connection, for example:


exec sp_who2 54


Something I get asked is whether there is a way to filter the results to show only active connections by running something like exec sp_who2 ‘active’. Passing ‘active’ does hide sessions that are sitting idle awaiting a command, but for any more flexible filtering there is a simple approach: capture the results into a temporary table (or table variable) and then query that.
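A minimal sketch of that approach follows. The temporary table’s column names and types are assumptions, sized generously to match the 13 columns sp_who2 returns on recent versions (including the repeated SPID column and REQUESTID); adjust them if your version’s output differs.

CREATE TABLE #Who2
(
SPID INT,
Status VARCHAR(255),
Login NVARCHAR(255),
HostName NVARCHAR(255),
BlkBy VARCHAR(255),
DBName NVARCHAR(255),
Command NVARCHAR(2000),
CPUTime BIGINT,
DiskIO BIGINT,
LastBatch VARCHAR(255),
ProgramName NVARCHAR(255),
SPID2 INT, -- sp_who2 outputs SPID twice; a table cannot repeat a column name
RequestID INT
);

INSERT INTO #Who2
EXEC sp_who2;

-- filter however you like, for example hide the idle (sleeping) sessions
SELECT SPID, Status, Login, DBName, BlkBy, Command, CPUTime, DiskIO
FROM #Who2
WHERE Status <> 'sleeping'
ORDER BY SPID;

DROP TABLE #Who2;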

There is a nice piece on this over at Stack Overflow which is worth a read and demonstrates a few solutions to this.

Conclusion

sp_who2 should be part of every DBA’s troubleshooting toolbox.

It provides a great overview of what the connections are doing on the SQL Server and can quickly help the DBA find reasons for increases in application timeouts, high disk IO or high CPU pressure.

 

Filed Under: All Articles, SQL Server Administration Tagged With: performance, sql server, troubleshooting

New T-SQL features in SQL Server 2012 – THROW

Posted on May 6, 2012 Written by Andy Hayes Leave a Comment

New in SQL Server 2012, as part of exception handling in T-SQL, is THROW.

Exception handling in previous versions of SQL Server

THROW raises an exception inside the CATCH block. There were already ways to do this in previous versions of SQL Server, but THROW is an improvement over those methods because it simplifies the code needed to re-raise the error.

So let’s compare the old with the new.

Here is the T-SQL:

SQL Server 2005/2008

BEGIN TRY
    DECLARE @VarToTest INT;
    SET @VarToTest = 'C'; -- fails: 'C' cannot be converted to INT
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000)
          , @ErrorSeverity INT;

    SET @ErrorMessage = ERROR_MESSAGE();
    SET @ErrorSeverity = ERROR_SEVERITY();

    RAISERROR (@ErrorMessage, @ErrorSeverity, 1);
END CATCH

SQL Server 2012

BEGIN TRY
    DECLARE @VarToTest INT;
    SET @VarToTest = 'C'; -- fails: 'C' cannot be converted to INT
END TRY
BEGIN CATCH
    THROW; -- re-raises the original error with its message, severity and state
END CATCH

Both scripts report the conversion failure raised in the TRY block:

Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the varchar value ‘C’ to data type int.

Note one difference: THROW re-raises the original error number (245), whereas the RAISERROR version reports the same message under the generic user error number 50000.

For more information, see this link from Books Online – SQL Server 2012 THROW

Filed Under: All Articles, SQL Tips and Tricks Tagged With: sql server, t-sql

10 simple tips on improving query and server performance

Posted on February 18, 2012 Written by Andy Hayes 3 Comments


The following tips do not promise that your database server will magically become blisteringly fast and go like a Ferrari. They are, however, important things to bear in mind and relatively simple to implement, whether you are building a new database or looking for ways to make an existing one run faster.

1/ SELECT * FROM

The rest of my points are in no particular order, but I must place this one at the top of the list as it really annoys me. I’ve seen it many times: developers take the lazy approach and select every column in the table, or, where joins are involved, every column of every table in the query. It is far better to limit the query to only the columns required, reducing I/O, memory pressure and bandwidth between the database and application layers.
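As an illustration (the table and column names here are made up for the example):

-- lazy: drags every column of both tables back to the application
SELECT *
FROM dbo.Orders AS o
INNER JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID;

-- better: only the columns the application actually needs
SELECT o.OrderID, o.DatePlaced, o.Amount, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID;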

2/ Incorrect sizing of data types – less is more

Database design is sometimes a neglected aspect of building an application. The rush to get a database built and start coding results in tables containing incorrectly sized data types.

So lets look at a simple example of what I mean using MS SQL Server.

The system we are designing for allows advertisements to be placed in one county in England per advert. Whilst this does not sound like a terribly flexible system, it serves the purpose of demonstrating the issue with badly sized data types 🙂

So here is the create table script. (I have omitted the Customer create table script which would theoretically exist here to ensure that the relationship on Advert.CustomerID could be created).

CREATE TABLE County
(
CountyID INT PRIMARY KEY IDENTITY(1,1),
County VARCHAR(20)
);

CREATE TABLE Advert
(
AdvertID INT PRIMARY KEY IDENTITY(1,1),
CustomerID INT NOT NULL REFERENCES Customer(CustomerID),
DatePlaced DATETIME DEFAULT GETDATE() NOT NULL,
CountyID INT NOT NULL REFERENCES County(CountyID),
AdvertText VARCHAR(MAX) NOT NULL
);

So, two tables and where in this design are there issues?

How many counties are there in England? Certainly not 2,147,483,647, which is the upper limit of the INT data type (and yes, you can also store negatives, down to -2,147,483,648).

So what, they’re just numbers, right? What’s the problem? The problem is the storage needed. The INT data type requires 4 bytes of storage. The number of English counties is well below 255. Is it ever going to get larger than 255? I can’t see that happening any time soon, so it is better to design the County table with CountyID as a TINYINT and the related column in Advert as a TINYINT too. This data type uses 1 byte of storage. That’s a 75% reduction, and on an extremely large Advert table it all starts to add up.

The other field in question is the DatePlaced. Always question whether you need the increased size of the types you are setting. In my example, I don’t think I am too fussy about the extra precision which the DATETIME data type gives me, so I should have designed that as a SMALLDATETIME.

But storage is cheap, right? Yes, but that is not the point. It’s all about I/O and memory pressure. The smaller the data types are, the fewer pages have to be read from disk into the memory buffers. If the memory buffers are not constantly being emptied to make way for other data, then your database server and application will be a lot happier as there will be fewer read requests to disk.
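Putting those recommendations together, the revised design might look like this (a sketch; the Customer table is still assumed to exist):

CREATE TABLE County
(
CountyID TINYINT PRIMARY KEY IDENTITY(1,1), -- TINYINT: 1 byte, values 0 to 255
County VARCHAR(20)
);

CREATE TABLE Advert
(
AdvertID INT PRIMARY KEY IDENTITY(1,1),
CustomerID INT NOT NULL REFERENCES Customer(CustomerID),
DatePlaced SMALLDATETIME DEFAULT GETDATE() NOT NULL, -- minute precision is enough here
CountyID TINYINT NOT NULL REFERENCES County(CountyID),
AdvertText VARCHAR(MAX) NOT NULL
);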

3/ Adding and Removing Indexes

Identify queries which are performing badly. Can you add indexes which would make them faster? SQL Server 2008 Management Studio will even recommend which indexes to add when you view the execution plan. Identify where you need to add clustered indexes and put them in.

Identify unused indexes and remove them. For each record inserted to a table, every index is updated. If you do not need an index, get rid of it. It will help speed up inserts.
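One way to find candidate indexes to remove is to compare reads against writes in the index usage DMV. The query below is only a sketch (and the counters reset when the instance restarts, so judge the numbers over a representative period):

SELECT o.name AS TableName,
       i.name AS IndexName,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0);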

4/ Use stored procedures instead of in-line SQL

There are many advantages to using stored procedures. From a performance perspective, each time a procedure is executed its execution plan can be reused from the plan cache. As the server does not need to work out an execution plan from scratch, the time from calling the procedure to completing its execution is typically shorter.
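A minimal sketch of the idea, using a hypothetical Orders table; the application then calls the procedure instead of sending the SELECT as ad-hoc SQL:

CREATE PROCEDURE dbo.GetOrdersForAccount
@AccountID INT
AS
BEGIN
SET NOCOUNT ON;

SELECT OrderID, DatePlaced, Amount, Currency
FROM dbo.Orders
WHERE AccountID = @AccountID;
END;

-- called from the application as:
EXEC dbo.GetOrdersForAccount @AccountID = 1;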

5/ Set up maintenance plans to rebuild indexes and statistics

All indexes will suffer from fragmentation and statistics will get out of date. Set up maintenance plans in SQL Server or batch tasks in MySQL which will periodically rebuild statistics and indexes to help the optimizer choose the best execution plan.
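For SQL Server, the building blocks of such a plan boil down to statements like these (shown here against the example Advert table from tip 2; run them during a quiet period):

ALTER INDEX ALL ON dbo.Advert REBUILD; -- rebuilds every index on the table, removing fragmentation
UPDATE STATISTICS dbo.Advert WITH FULLSCAN; -- refreshes the table's statistics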

6/ Replace use of functions in WHERE clauses with SARGs

It’s all about the search arguments (SARGs).

Here’s an example: two queries, the first using LIKE and the second using the SUBSTRING function. There is an index on the LastName column in my table; the first query uses an index seek, which should normally be faster than the second, which uses a clustered index scan.

SELECT FirstName, LastName
FROM Employee
WHERE LastName LIKE 'Hay%';

SELECT FirstName, LastName
FROM Employee
WHERE SUBSTRING(LastName,1,3) = 'Hay';

Note that the use of LIKE will not always make use of an index; for example, this query with a leading wildcard would not:

WHERE LastName LIKE '%es'

Other examples of SARGs:

LastName = 'Hayes'
Amount > 0
Amount BETWEEN 10 AND 20

I’ve also seen UDFs used in WHERE clauses and, unsurprisingly, performance is poor.

Get the query using a SARG and there is a higher chance that the optimizer will use an index.

7/ Ensure that foreign keys are in place

Please refer to my post on the importance of foreign keys and how they can help improve query performance.

8/ Increasing chances of using the query cache (MySQL)

Minor changes to queries to make them deterministic can increase query cache hit rate. For example if a query uses CURDATE(), then the results will not be cached.
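A hedged illustration of the idea (the orders table and column are made up; the second form assumes the application substitutes the current date as a literal):

-- not cacheable: the query cache will not store results for queries using CURDATE()
SELECT COUNT(*) FROM orders WHERE order_date = CURDATE();

-- cacheable: the same query with the date supplied as a literal by the application
SELECT COUNT(*) FROM orders WHERE order_date = '2012-02-18';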

9/ Deal in sets not cursors

Try and build SQL queries which deal with sets of data. Leave any row by row processing to the application layer.
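For example, rather than looping over rows in the application (or with a cursor), a single set-based statement does the work in one pass; the table and column names below are hypothetical:

-- apply a 10% increase to every order placed before 2012 in one statement
UPDATE o
SET o.Amount = o.Amount * 1.10
FROM dbo.Orders AS o
WHERE o.DatePlaced < '20120101';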

10/ Allocate more memory to buffers

You need a solution quickly and you have identified your server is under memory pressure.  You have spare RAM available and so it is an easy task to allocate more memory to MS SQL or MySQL database memory buffers. Personally, I will always try and carry out a more exhaustive analysis of the server but if time does not permit that then allocate the memory to improve performance and analyse a typical workload on a staging server to determine where other performance improvements can be made.
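As a sketch of what that quick win might look like (the 8192 MB figure is purely an example value):

-- SQL Server: raise the buffer pool ceiling via max server memory (value in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;

-- MySQL: increase innodb_buffer_pool_size (and/or key_buffer_size for MyISAM) in my.cnf and restart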

It is important to ensure that these issues are not brushed under the carpet after increasing hardware resource. Think about it for a moment. Your application has been slowing down and user complaints have increased. What’s an obvious cause of that? – Growth.

So in order to effectively deal with growth, from a DBA perspective, the database server needs to be monitored for performance problems with issues reported to the relevant technical staff at the earliest opportunity. Adding hardware will always help improve performance but the application will never scale very well unless the issues “underneath the bonnet” in the database layer are effectively dealt with.

Filed Under: All Articles, MySQL Performance, SQL Server Performance Tagged With: mysql, performance, sql server, t-sql

What are the differences between primary keys and unique indexes in SQL Server?

Posted on January 7, 2012 Written by Andy Hayes Leave a Comment

I was recently asked, what are the differences between primary keys and unique indexes? Well, they are very similar, but here are the differences.

Only one primary key is allowed on a table, but multiple unique indexes can be added, up to the maximum number of indexes allowed for the table (SQL Server 2005 = 250: 1 clustered plus 249 non-clustered; SQL Server 2008 and 2012 = 1,000: 1 clustered plus 999 non-clustered).

Primary keys cannot contain nullable columns but unique indexes can. Note that only one NULL is allowed. If the index is created across multiple columns, each combination of values and NULLs must be unique.

By default, unless you specify otherwise in the create statement and provided that a clustered index does not already exist, the primary key is created as a clustered index. Unique indexes, by contrast, are created as non-clustered indexes by default, again unless you specify otherwise and no clustered index already exists.
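A short sketch showing both (the dbo.Product table here is hypothetical):

-- primary key: becomes the clustered index by default as no clustered index exists yet
CREATE TABLE dbo.Product
(
ProductID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Product PRIMARY KEY,
ProductCode VARCHAR(20) NULL
);

-- unique index: non-clustered by default and, unlike the primary key, tolerates a single NULL
CREATE UNIQUE INDEX UQ_Product_ProductCode ON dbo.Product (ProductCode);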

Filed Under: All Articles, SQL Server Administration Tagged With: sql server

