DBA Diaries

Thoughts and experiences of a DBA working with SQL Server and MySQL

New T-SQL features in SQL Server 2012 – THROW

Posted on May 6, 2012 Written by Andy Hayes

New in SQL Server 2012, as part of exception handling in T-SQL, is THROW.

Exception handling in previous versions of SQL Server

THROW re-raises an exception inside the CATCH block. There were already ways to do this in previous versions of SQL Server, but THROW improves on them in two ways: it simplifies the code needed to raise the error and, unlike RAISERROR, a parameterless THROW re-throws the original error with its original error number, severity and state.

So let's compare the old and the new.

Here is the T-SQL:

SQL Server 2005/SQL2008

BEGIN TRY
DECLARE @VarToTest INT
SET @VarToTest = 'C'
END TRY
BEGIN CATCH
-- Capture the error details, then raise them again manually
DECLARE
@ErrorMessage nvarchar(4000)
, @ErrorSeverity int

SET @ErrorMessage = ERROR_MESSAGE()
SET @ErrorSeverity = ERROR_SEVERITY()

-- Note: RAISERROR raises a new error (number 50000) rather than the original one
RAISERROR (@ErrorMessage, @ErrorSeverity, 1)
END CATCH

SQL Server 2012

BEGIN TRY
DECLARE @VarToTest INT
SET @VarToTest = 'C'
END TRY
BEGIN CATCH
-- Re-raise the original error, keeping its number, severity and state
THROW;
END CATCH

Msg 245, Level 16, State 1, Line 4 Conversion failed when converting the varchar value ‘C’ to data type int.
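
As an aside, THROW can also raise a brand new error rather than re-throwing the one that was caught. A minimal sketch (the error number, message and state below are made up for illustration; user-defined error numbers must be 50000 or greater):

-- Raise a user-defined error with THROW
THROW 50001, 'Something went wrong in the nightly load.', 1;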

For more information see this link from books online – SQL Server 2012 THROW

Filed Under: All Articles, SQL Tips and Tricks Tagged With: sql server, t-sql

New T-SQL features in SQL Server 2012 – WITH RESULT SETS

Posted on April 21, 2012 Written by Andy Hayes

New in SQL Server 2012 comes an extension to EXECUTE in the form of WITH RESULT SETS.

WITH RESULT SETS allows column names and their data types to be changed in the result set of a stored procedure.

My first thought with this improvement is that it could be very useful when building SSIS packages which read from a stored procedure as a data source. We'll take a look at that in a future post, but in addition, it helps to replace some of the following workarounds…

Before SQL Server 2012 and WITH RESULT SETS you had to…

If you wanted to change the column names and their data types after executing a stored procedure, then in previous versions there were a few things to try.

  1. Create a copy of the stored procedure and change it to return different column names or data types.
  2. Execute the stored procedure and insert the results into a temporary table, then read from the temporary table (see the sketch just after this list).
  3. Edit the same stored procedure and change its output.
  4. Create a UDF and select from that.
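
As a rough sketch of option 2, using the GetCustomerOrders procedure created later in this post, the pre-2012 approach looked something like this:

-- Capture the procedure output in a temp table, then rename/convert the columns on the way out
CREATE TABLE #CustomerOrders
(
SalesOrderID INT,
OrderDate DATETIME
);

INSERT INTO #CustomerOrders (SalesOrderID, OrderDate)
EXEC GetCustomerOrders @CustomerID = 11000;

SELECT
SalesOrderID AS OrderID
, CONVERT(VARCHAR(20), OrderDate, 100) AS DateOfOrder
FROM #CustomerOrders;

DROP TABLE #CustomerOrders;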

Oh dear, not good. 😐

Some of the issues which spring to mind with those approaches are code duplication, sub-optimal performance because of the use of temp tables, and the general hassle of writing lengthy code. I'm going to try and look at some of these in a future post.

So let's check the WITH RESULT SETS syntax

To get it working, you add WITH RESULT SETS to the EXECUTE statement, followed by the column names and data types in parentheses, as follows:

EXEC Your_Stored_Procedure
WITH RESULT SETS
(
(Col1_Renamed INT,
Col2_Renamed VARCHAR(50))
)

If your stored procedure is returning multiple result sets then it is possible to manipulate all of them. For example:

EXEC Your_Other_Stored_Procedure
WITH RESULT SETS
(
(Col1_Renamed INT,
Col2_Renamed VARCHAR(50)),
(Col3_Renamed TINYINT,
Col4_Renamed NVARCHAR(100))
)

Please note that the number of columns being returned in the result set cannot be changed.

Let's look at a demo of it in action against the AdventureWorks2012 database. First, I will create a new stored procedure…

CREATE PROCEDURE GetCustomerOrders @CustomerID INT
AS

SELECT
SalesOrderID
, OrderDate
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID

Now, I’m going to execute it and use WITH RESULT SETS to change the output.

EXEC GetCustomerOrders @CustomerID = 11000
WITH RESULT SETS
(
(OrderID INT,
DateOfOrder VARCHAR(20))
)

Which returns…

OrderID     DateOfOrder
----------- --------------------
43793       Jul 22 2005 12:00AM
51522       Jul 22 2007 12:00AM
57418       Nov  4 2007 12:00AM

(3 row(s) affected)

Without WITH RESULT SETS, it looks like this:

SalesOrderID OrderDate
------------ -----------------------
43793        2005-07-22 00:00:00.000
51522        2007-07-22 00:00:00.000
57418        2007-11-04 00:00:00.000

(3 row(s) affected)

WITH RESULT SETS conclusion

Another nice addition to the T-SQL features in SQL Server 2012 which helps to streamline code when formatting result sets produced by stored procedures.

Anything which makes life more simple is always welcome 🙂

If you want more information, see this link on SQL Server 2012 WITH RESULT SETS

Filed Under: All Articles, SQL Tips and Tricks Tagged With: sql server 2012, t-sql

New T-SQL features in SQL Server 2012 – OFFSET and FETCH

Posted on April 3, 2012 Written by Andy Hayes

In SQL Server 2012, Microsoft has decided to extend the ORDER BY clause and do what MySQL has been doing for a long time: provide simple syntax for paging result sets.

This comes in the form of OFFSET and FETCH.

Now, I’m not saying that this was previously not possible in SQL Server. There are solutions to this problem in other versions of the product in the form of temp tables, ROW_NUMBER() and TOP but I prefer OFFSET and FETCH to the others – it’s just simple! 🙂

I am using SQL Server 2012 Express and the AdventureWorks 2012 database for this demonstration.

So let's look at some data. I have decided to query some of the fields from the TransactionHistory table under the Production schema.

SELECT TOP 10
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Production.TransactionHistory;

TransactionID ProductID   TransactionDate         Quantity    ActualCost
------------- ----------- ----------------------- ----------- ---------------------
100000        784         2007-09-01 00:00:00.000 2           0.00
100001        794         2007-09-01 00:00:00.000 1           0.00
100002        797         2007-09-01 00:00:00.000 1           0.00
100003        798         2007-09-01 00:00:00.000 1           0.00
100004        799         2007-09-01 00:00:00.000 1           0.00
100005        800         2007-09-01 00:00:00.000 1           0.00
100006        801         2007-09-01 00:00:00.000 1           0.00
100007        954         2007-09-01 00:00:00.000 1           0.00
100008        955         2007-09-01 00:00:00.000 1           0.00
100009        966         2007-09-01 00:00:00.000 1           0.00

(10 row(s) affected)

This table contains approximately 133,500 rows – not a massive amount in today's world, but for the purposes of this article, let's say I wanted to write queries to page through this data sorted by newest transactions, displaying 20 rows per page.

Using OFFSET and FETCH in SQL Server 2012

So here is an example. Note that OFFSET and FETCH are added after the ORDER BY clause.

SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Production.TransactionHistory
ORDER BY TransactionDate DESC
OFFSET 0 ROWS
FETCH NEXT 20 ROWS ONLY;

OFFSET specifies how many rows to skip before the result set starts. FETCH specifies how many rows to return from that point.

Note that FIRST and NEXT are synonyms, as are ROW and ROWS; both variants are provided for ANSI compatibility.

So this is nice and easy: for the next page of results, the OFFSET value would be changed to 20, then 40 and so on.
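
For example, page two of the results (rows 21 to 40) just changes the OFFSET:

ORDER BY TransactionDate DESC
OFFSET 20 ROWS
FETCH NEXT 20 ROWS ONLY;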

OFFSET and FETCH can also accept variables, for example:

DECLARE @OffsetRows tinyint = 0
, @FetchRows tinyint = 20;
SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Production.TransactionHistory
ORDER BY TransactionDate DESC
OFFSET @OffsetRows ROWS
FETCH NEXT @FetchRows ROWS ONLY;

You can use expressions in OFFSET and FETCH:

ORDER BY TransactionDate DESC
OFFSET @OffsetRows - 0 ROWS
FETCH NEXT @FetchRows - @OffsetRows + 1 ROWS ONLY;

And I really like this: plugging in a scalar subquery:

SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Production.TransactionHistory
ORDER BY TransactionDate DESC
OFFSET @OffsetRows ROWS
FETCH NEXT (SELECT 20) ROWS ONLY;

So imagine that (SELECT 20) was in fact reading from a table somewhere in your system, for example (SELECT PageSize FROM PageSetting WHERE SiteID = 5), which controlled the number of rows to be displayed for each query.
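
Plugged into the paging query, that hypothetical PageSetting lookup would look something like this:

ORDER BY TransactionDate DESC
OFFSET @OffsetRows ROWS
FETCH NEXT (SELECT PageSize FROM PageSetting WHERE SiteID = 5) ROWS ONLY;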

OFFSET and FETCH versus ROW_NUMBER()

I'm not going to go into detail about all the paging methods which have been employed in previous versions of SQL Server and start drawing comparisons and conclusions over performance (perhaps I will in a future post), but the one which immediately springs to mind as an alternative to OFFSET and FETCH is ROW_NUMBER().

So a quick comparison between the two methods shows the following:

Using OFFSET and FETCH

SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Production.TransactionHistory
ORDER BY TransactionDate DESC
OFFSET 0 ROWS
FETCH NEXT 20 ROWS ONLY

Using ROW_NUMBER() with CTE

WITH Paging_CTE AS
(
SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
, ROW_NUMBER() OVER (ORDER BY TransactionDate DESC) AS RowNumber
FROM
Production.TransactionHistory
)
SELECT
TransactionID
, ProductID
, TransactionDate
, Quantity
, ActualCost
FROM
Paging_CTE
WHERE RowNumber > 0 AND RowNumber <= 20

So what do you think? It's certainly easier to write a query using OFFSET and FETCH as there is less involved. There is one fewer column too, because "RowNumber" is not needed for the OFFSET and FETCH version.

If I were a betting man, I would say that the execution plans are different between the two queries. So let's take a look.

Using OFFSET and FETCH…

(Screenshot: execution plan for the OFFSET and FETCH query)

Using ROW_NUMBER()…

(Screenshot: execution plan for the ROW_NUMBER() query)

There is certainly more going on in the second one, right? As I wrote above, I'm not intending to draw any conclusions on this because I am not doing any thorough testing here.

Finally, if you want to guarantee stable results in your OFFSET and FETCH paging solution, there are two things that you must implement.

  1. You should ensure that the underlying data does not change, which involves running the queries inside a transaction using either snapshot or serializable transaction isolation.
  2. The ORDER BY clause needs to contain a column or combination of columns that are guaranteed to be unique (see the example just after this list).
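
For point 2, TransactionDate alone is not unique in this table, so a safer ORDER BY (a sketch, assuming TransactionID is the table's unique key) adds it as a tie-breaker:

ORDER BY TransactionDate DESC, TransactionID DESC
OFFSET 0 ROWS
FETCH NEXT 20 ROWS ONLY;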

For more information from Microsoft click here – SQL Server 2012 OFFSET and FETCH

Filed Under: All Articles, SQL Tips and Tricks Tagged With: sql server 2012, t-sql

10 simple tips on improving query and server performance

Posted on February 18, 2012 Written by Andy Hayes


The following tips do not promise that your database server will magically become blisteringly fast and go like a Ferrari, but they are important things to bear in mind and relatively simple to implement, either when building your database or when you have an existing performance problem and are looking for ways to make your database run faster.

1/ SELECT * FROM

The rest of my points are in no particular order, but I must place this one at the top of the list as it really annoys me. I've seen it many times: developers take the lazy approach to writing their queries by selecting all columns from a table or, where joins are involved, every column from every table in the query. It is far better to limit the query to only the columns required, reducing I/O, memory pressure and bandwidth between the database and application layers.

2/ Incorrect sizing of data types – less is more

Database design is sometimes a neglected aspect of building an application. The rush to build a database to store the data and get coding results in tables containing data types which are incorrectly sized.

So lets look at a simple example of what I mean using MS SQL Server.

The system we are designing allows advertisements to be placed in one county in England per advert. Whilst this does not sound like a terribly flexible system, it serves the purpose of demonstrating the issue with badly sized data types 🙂

So here is the create table script. (I have omitted the Customer create table script which would theoretically exist here to ensure that the relationship on Advert.CustomerID could be created).

CREATE TABLE County
(
CountyID INT PRIMARY KEY IDENTITY(1,1),
County VARCHAR(20)
);

CREATE TABLE Advert
(
AdvertID INT PRIMARY KEY IDENTITY(1,1),
CustomerID INT NOT NULL REFERENCES Customer(CustomerID),
DatePlaced DATETIME DEFAULT GETDATE() NOT NULL,
CountyID INT NOT NULL REFERENCES County(CountyID),
AdvertText VARCHAR(MAX) NOT NULL
);

So, two tables. Where in this design are the issues?

How many counties are there in England? Certainly not this many: 2,147,483,647, which is the upper limit of the INT data type. Yes, you can also store negatives, down to -2,147,483,648.

So what, they're just numbers, right? What's the problem? The problem is the storage needed. The INT data type requires 4 bytes of storage. The number of English counties is less than 255. Is it ever going to get larger than 255? I can't see that happening any time soon, so it is better to design the County table with CountyID as a TINYINT and the related column in Advert as a TINYINT too. This data type uses 1 byte of storage. That's a 75% reduction, and if you had an extremely large Advert table it would all start to add up.

The other field in question is DatePlaced. Always question whether you need the increased size of the types you are setting. In my example, I am not too fussy about the extra precision which the DATETIME data type gives me, so I should have designed that as a SMALLDATETIME.
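
Putting the two changes together, a right-sized sketch of the same tables would look like this:

CREATE TABLE County
(
CountyID TINYINT PRIMARY KEY IDENTITY(1,1),
County VARCHAR(20)
);

CREATE TABLE Advert
(
AdvertID INT PRIMARY KEY IDENTITY(1,1),
CustomerID INT NOT NULL REFERENCES Customer(CustomerID),
DatePlaced SMALLDATETIME DEFAULT GETDATE() NOT NULL,
CountyID TINYINT NOT NULL REFERENCES County(CountyID),
AdvertText VARCHAR(MAX) NOT NULL
);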

But storage is cheap, right? Yes, but this is not the point. It's all about I/O and memory pressure. The smaller the data types are, the fewer pages have to be read from disk into the memory buffers. If the memory buffers are not constantly being emptied to make way for other data, then your database server and application will be a lot happier, as there will be fewer read requests hitting disk.

3/ Adding and Removing Indexes

Identify queries which are performing badly. Can you add indexes which would make them faster? SQL Server 2008 Management Studio will even recommend which indexes to add when you view the execution plan. Identify where you need to add clustered indexes and put them in.

Identify unused indexes and remove them. For each record inserted into a table, every index on that table has to be updated. If you do not need an index, get rid of it. It will help speed up inserts.
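
As a rough sketch of how to spot removal candidates in SQL Server (usage numbers only cover the period since the last service restart, so treat the output as a starting point, not a verdict):

-- Indexes that are being maintained on writes but not used for reads in the current database
SELECT OBJECT_NAME(s.object_id) AS TableName
, i.name AS IndexName
, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
AND i.index_id > 0 -- exclude heaps
AND s.user_seeks + s.user_scans + s.user_lookups = 0;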

4/ Use stored procedures instead of in-line SQL

There are many advantages to using stored procedures. From a performance perspective, once a procedure has been executed its plan is cached, so subsequent calls can reuse the execution plan from the plan cache. As the server does not need to work out the execution plan each time, the call typically completes faster.

5/ Set up maintenance plans to rebuild indexes and statistics

All indexes will suffer from fragmentation and statistics will get out of date. Set up maintenance plans in SQL Server or batch tasks in MySQL which will periodically rebuild statistics and indexes to help the optimizer choose the best execution plan.
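
As an illustration of the kind of commands such a job would run, using the Advert table from tip 2 (the MySQL equivalents are roughly ANALYZE TABLE and OPTIMIZE TABLE):

-- SQL Server: rebuild all indexes on the table and refresh its statistics
ALTER INDEX ALL ON Advert REBUILD;
UPDATE STATISTICS Advert;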

6/ Replace use of functions in WHERE clauses with SARGs

It's all about the search arguments (SARGs).

Here's an example: two queries, the first using LIKE, the second using the SUBSTRING function. There is an index on the LastName column in my table; the first query uses an index seek, which should normally be faster than the second, which uses a clustered index scan.

SELECT FirstName, LastName
FROM Employee
WHERE LastName LIKE 'Hay%';

SELECT FirstName, LastName
FROM Employee
WHERE SUBSTRING(LastName, 1, 3) = 'Hay';

Note that use of LIKE will not always make use of an index. For example, this query would not use an index seek because of the leading wildcard:

WHERE LastName LIKE '%es'

Other examples of SARGs:

LastName = 'Hayes'
Amount > 0
Amount BETWEEN 10 AND 20

I've also seen UDFs used in WHERE clauses and, unsurprisingly, performance is poor.

Get the query using a SARG and there is a higher chance that the optimizer will use an index.

7/ Ensure that foreign keys are in place

Please refer to my post on the importance of foreign keys and how they can help improve query performance.

8/ Increasing chances of using the query cache (MySQL)

Minor changes to queries to make them deterministic can increase the query cache hit rate. For example, if a query uses CURDATE(), its results will not be cached.
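
For example (a sketch using a made-up orders table), substituting the literal date in the application layer makes the statement deterministic and therefore cacheable:

-- Not cacheable: CURDATE() is non-deterministic as far as the query cache is concerned
SELECT COUNT(*) FROM orders WHERE order_date = CURDATE();

-- Cacheable: the same query with the date supplied as a literal
SELECT COUNT(*) FROM orders WHERE order_date = '2012-02-18';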

9/ Deal in sets not cursors

Try to build SQL queries which deal with sets of data and leave any row-by-row processing to the application layer.
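
As a small illustration using the Advert table from tip 2, a single set-based UPDATE like this replaces a cursor or application loop that touches one AdvertID at a time (the WHERE clause is just an example):

-- One statement updates every qualifying row in the set
UPDATE Advert
SET AdvertText = AdvertText + ' [expired]'
WHERE DatePlaced < '2011-01-01';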

10/ Allocate more memory to buffers

You need a solution quickly and you have identified that your server is under memory pressure. You have spare RAM available, so it is an easy task to allocate more memory to the SQL Server or MySQL memory buffers. Personally, I will always try to carry out a more exhaustive analysis of the server, but if time does not permit that, then allocate the memory to improve performance and analyse a typical workload on a staging server to determine where other improvements can be made.
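
As a sketch of what that looks like (the values here are illustrative, not recommendations):

-- SQL Server: raise the memory ceiling for the buffer pool
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;

-- MySQL (InnoDB): set innodb_buffer_pool_size in my.cnf, e.g. innodb_buffer_pool_size = 8G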

It is important to ensure that these issues are not brushed under the carpet after increasing hardware resources. Think about it for a moment. Your application has been slowing down and user complaints have increased. What's an obvious cause of that? Growth.

So in order to effectively deal with growth, from a DBA perspective, the database server needs to be monitored for performance problems with issues reported to the relevant technical staff at the earliest opportunity. Adding hardware will always help improve performance but the application will never scale very well unless the issues “underneath the bonnet” in the database layer are effectively dealt with.

Filed Under: All Articles, MySQL Performance, SQL Server Performance Tagged With: mysql, performance, sql server, t-sql
