DBA Diaries

Thoughts and experiences of a DBA working with SQL Server and MySQL

SQL Server 2016 New Features – Live Query Statistics

Posted on May 26, 2016 Written by Andy Hayes Leave a Comment

SQL Server 2016 Live Query Statistics provide a way to look into the execution plan to see what parts of the query are currently running.

The advantage of this feature is that for a slow query, instead of having to wait for it to complete before the execution plan is available, it’s now possible to view the execution plan in real time.

Sometimes execution plans can be huge, increasing the effort needed to work out which parts of the query are performing badly. With Live Query Statistics enabled, seeing this data in real time helps the DBA or developer find the root causes of poor performance faster.

How to Enable Live Query Statistics in SQL Server 2016

There is a new button on the SQL Editor toolbar which is responsible for activating this feature from inside a query window.

[Screenshot: enabling the SQL Server 2016 Live Query Statistics button]

Then simply execute the query and the results will appear on screen. In the following screenshot, the query is still in progress and not all operators have finished.

[Screenshot: SQL Server 2016 Live Query Statistics in action]

I have circled two of them and you can see that they are at different stages according to the percentages listed. Overall, the query is 40% complete, as highlighted in yellow. There is data for the number of rows passing through each operator and for the time elapsed on that part of the query.

When the query finishes, you can compare this visualization with the actual execution plan to understand the cost of the components relative to the total cost of the query.
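The per-operator counters that drive this visualization are also exposed through the sys.dm_exec_query_profiles DMV (available since SQL Server 2014), so the progress of a running query can be inspected with plain T-SQL as well. A minimal sketch – the session_id of 54 is a placeholder, and the target session must have the live statistics infrastructure enabled (e.g. Live Query Statistics, or SET STATISTICS XML ON) for rows to appear:

```sql
-- Query the live, per-operator progress of a running session.
SELECT node_id,
       physical_operator_name,
       row_count,           -- rows produced so far
       estimate_row_count,  -- the optimizer's estimate
       elapsed_time_ms
FROM sys.dm_exec_query_profiles
WHERE session_id = 54        -- placeholder: the spid of the running query
ORDER BY node_id;
```

Comparing row_count against estimate_row_count per operator is a quick way to spot where the optimizer's estimates went wrong.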

This feature can also be accessed via Activity Monitor under the “Active Expensive Queries” section. Right-click the query you are interested in and click “Show Live Execution Plan”.

A very nice feature indeed! 🙂

Filed Under: All Articles, SQL Server Performance Tagged With: performance, sql server, sql server 2016

SQL Server 2016 New Features – Query Store

Posted on May 25, 2016 Written by Andy Hayes Leave a Comment

In this series of posts covering the new features in SQL Server 2016, I’m moving on to looking at the Query Store.

This tool provides a way to track query execution plan performance over time to assist troubleshooting efforts. Figuring out why a database has suddenly started running slowly can be difficult to diagnose quickly. The Query Store is an extra tool in the DBA or developer’s armoury to help in such situations.

The store captures queries, statistics and execution times for later review. Data is recorded over time, making it possible for you to see when query plan changes happened on the server.

This is an improvement over just having query plans available in the query plan cache. The query plan cache only stores the last version of the execution plan and if the server comes under memory pressure, the plans are evicted.
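For comparison, here is the usual way to pull whatever is currently held in the plan cache – a minimal sketch using the standard DMVs. Once a plan is evicted, this returns nothing for it, which is exactly the gap the Query Store fills:

```sql
-- Plans currently in the cache, with their SQL text and XML showplan.
SELECT cp.usecounts,
       cp.objtype,
       st.text       AS query_text,
       qp.query_plan -- XML showplan; NULL for some plan types
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp;
```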

What’s so important about the execution plans?

It’s the execution plans that tell the story about how the optimizer ran the query.

Such plans can change if, say, the table grows and the optimizer decides that an index scan will be more efficient than an index seek.

To name a few more scenarios: a server-wide change made via sp_configure, an index changed (or dropped), index statistics updated, sp_recompile called, or the table schema changed.

The execution plans are really important to help understand the decisions the optimizer made, enabling the performance troubleshooter to quickly see where the delays are in a given query.

At the time of writing I am using Release Candidate 3 of SQL Server 2016.

How to Enable the Query Store in SQL Server 2016

This is done either using T-SQL or via Management Studio, on a per-database basis.

T-SQL

ALTER DATABASE YourDB SET QUERY_STORE = ON;
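The same statement also accepts options controlling retention and size, mirroring what the Management Studio dialog below exposes. A sketch of a fuller configuration – the values are illustrative, not recommendations:

```sql
ALTER DATABASE YourDB
SET QUERY_STORE
(
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30), -- keep 30 days of history
    MAX_STORAGE_SIZE_MB = 100,                          -- cap the store's size
    DATA_FLUSH_INTERVAL_SECONDS = 900                   -- persist to disk every 15 minutes
);
```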

Management Studio

Right-click the database you want to enable this for and choose “Properties->Query Store”.

Under “General”, you’ll see “Operation Mode (Requested)”, choose “Read Write” from the list of options.

On this screen you can see further options, such as how long to keep the data in the store before it gets flushed, how big the store can grow, etc.

[Screenshot: enabling the Query Store from the database properties dialog]

When would you typically use the Query Store?

  • To implement Plan Forcing
  • To determine the number of times a query plan was executed in a given window
  • To identify the top (N) queries by resource (CPU etc) in the past (X) hours
  • To audit the history of query plans
  • To analyze the resource usage patterns of a particular database

Plan Forcing

As there are multiple versions of the execution plan in the query store, it is possible to direct a query to use a specific execution plan – this is known as “Plan Forcing”. By instructing the optimizer to use a specific plan, you can quickly resolve a performance issue caused by an execution plan change.
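Plan forcing is done with a pair of system stored procedures. A minimal sketch – the query_id and plan_id values below are placeholders; look the real ones up in sys.query_store_query and sys.query_store_plan first:

```sql
-- Force the chosen plan for the chosen query (ids are placeholders).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Remove the forcing later, e.g. if the data distribution changes again:
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```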

Structure of the query store

The query store is divided up into two stores; the plan store and the runtime stats store. The former contains the execution plan information and the latter contains the execution statistics for each of the plans stored.

Viewing the data in the query store

Once the query store has been enabled, the contents of the store can be viewed using this query:

SELECT Txt.query_text_id, Txt.query_sql_text, Pl.plan_id, Qry.*
FROM sys.query_store_plan AS Pl
JOIN sys.query_store_query AS Qry
ON Pl.query_id = Qry.query_id
JOIN sys.query_store_query_text AS Txt
ON Qry.query_text_id = Txt.query_text_id;
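The query above can be extended to bring in the runtime stats store as well, for example to rank the captured plans by average duration. A sketch – the column choice is illustrative:

```sql
SELECT Txt.query_sql_text,
       Pl.plan_id,
       Rts.count_executions,
       Rts.avg_duration,         -- microseconds
       Rts.avg_logical_io_reads
FROM sys.query_store_plan AS Pl
JOIN sys.query_store_query AS Qry
    ON Pl.query_id = Qry.query_id
JOIN sys.query_store_query_text AS Txt
    ON Qry.query_text_id = Txt.query_text_id
JOIN sys.query_store_runtime_stats AS Rts
    ON Pl.plan_id = Rts.plan_id
ORDER BY Rts.avg_duration DESC;
```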

There are also reports available in Management Studio. Under the Query Store section of the database, there are four reports exposed:

[Screenshot: Query Store reports in Management Studio]

  • Regressed Queries – queries that have declined in performance. Data is available for each query in both tabular and graph form, with options to report on metrics for duration, logical reads/writes, CPU time, memory consumption and physical reads.
  • Overall Resource Consumption – query runtime statistics aggregated in graph form over a specific time interval.
  • Top Resource Consuming Queries – shows the most expensive queries that were executed during a specific time interval.
  • Tracked Queries – shows the runtime statistics at query execution and provides a way for the troubleshooter to view the execution plan of a particular query.

Summary

The query store enables a DBA to record the changes to execution plans over time. Data is presented in easily digestible forms allowing quicker understanding of the reasons why a query might have deteriorated in performance.

It’s neat and it’s most welcome 🙂

Filed Under: All Articles, SQL Server Administration Tagged With: performance, sql server, sql server 2016

How to Find I/O Usage Per Database in SQL Server

Posted on May 15, 2016 Written by Andy Hayes 3 Comments

If you’re looking at performance and trying to understand I/O at a per-database level in SQL Server, sadly there is no report in Management Studio that covers this.

It will need some T-SQL to pull out this information from an existing DMV – sys.dm_io_virtual_file_stats.

The information returned by the DMV is recorded from the time the SQL Server instance was last restarted, so it is cumulative data.

It helps the DBA to understand where the I/O distribution is across databases at both the data and log file level.

Let’s take a look at I/O at the database level using this T-SQL:

WITH IO_Per_DB
AS
(SELECT 
  DB_NAME(database_id) AS Db
  , CONVERT(DECIMAL(12,2), SUM(num_of_bytes_read + num_of_bytes_written) / 1024 / 1024) AS TotalMb
 FROM sys.dm_io_virtual_file_stats(NULL, NULL) dmivfs
 GROUP BY database_id)

 SELECT 
    Db
    ,TotalMb
    ,CAST(TotalMb / SUM(TotalMb) OVER() * 100 AS DECIMAL(5,2)) AS [I/O]
FROM IO_Per_DB
ORDER BY [I/O] DESC;

On my test instance this produces the following output:

[Screenshot: I/O usage per database query results]

I can see that the top I/O consumer is the sample MS database AdventureWorks, and this is because I’ve been running some simple SELECTs against its tables. If I start running queries against the others, the distribution will quickly change, so it’s important not to assume anything about the results without regular sampling.

Consider a scenario where workload patterns are constant across the day and the results do not change much. Then in the evening a large job runs which reads a lot of data from a database that is otherwise not very active during the day. At that time, that database may show up at the top of the report, having generated a lot of I/O. After this, normal workloads resume and the results change again.
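Because the figures are cumulative, one way to see what a particular window looks like is to take two snapshots and diff them. A minimal sketch – the one-minute delay and the temp table name are arbitrary choices:

```sql
-- First snapshot into a temp table.
SELECT database_id,
       SUM(num_of_bytes_read + num_of_bytes_written) AS total_bytes
INTO #io_snapshot
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id;

WAITFOR DELAY '00:01:00'; -- the sample window

-- Second snapshot, diffed against the first.
SELECT DB_NAME(s2.database_id) AS Db,
       CONVERT(DECIMAL(12,2),
               (s2.total_bytes - s1.total_bytes) / 1024.0 / 1024.0) AS MbInWindow
FROM (SELECT database_id,
             SUM(num_of_bytes_read + num_of_bytes_written) AS total_bytes
      FROM sys.dm_io_virtual_file_stats(NULL, NULL)
      GROUP BY database_id) AS s2
JOIN #io_snapshot AS s1
    ON s1.database_id = s2.database_id
ORDER BY MbInWindow DESC;

DROP TABLE #io_snapshot;
```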

Looking at I/O usage at the database file level in SQL Server

The results above are very high level. If you want to look in more detail, for example at the data file level, you can run something like this:

WITH IO_Per_DB_Per_File
AS
(SELECT 
    DB_NAME(dmivfs.database_id) AS Db
  , CONVERT(DECIMAL(12,2), SUM(num_of_bytes_read + num_of_bytes_written) / 1024 / 1024) AS TotalMb
  , CONVERT(DECIMAL(12,2), SUM(num_of_bytes_read) / 1024 / 1024) AS TotalMbRead
  , CONVERT(DECIMAL(12,2), SUM(num_of_bytes_written) / 1024 / 1024) AS TotalMbWritten
  , CASE WHEN dmmf.type_desc = 'ROWS' THEN 'Data File' WHEN dmmf.type_desc = 'LOG' THEN 'Log File' END AS DataFileOrLogFile
 FROM sys.dm_io_virtual_file_stats(NULL, NULL) dmivfs
 JOIN sys.master_files dmmf ON dmivfs.file_id = dmmf.file_id AND dmivfs.database_id = dmmf.database_id
 GROUP BY dmivfs.database_id, dmmf.type_desc)

 SELECT 
    Db
  , TotalMb
  , TotalMbRead
  , TotalMbWritten
  , DataFileOrLogFile
  , CAST(TotalMb / SUM(TotalMb) OVER() * 100 AS DECIMAL(5,2)) AS [I/O]
FROM IO_Per_DB_Per_File
ORDER BY [I/O] DESC;

I’ve included some extra columns here to display the megabytes read and written, as well as whether the I/O was generated against the data file or the log file.

In the following results, I ran an update against one of the larger tables to help demonstrate this. You can see the log file for the AdventureWorks database saw some activity, and the distribution of I/O is now more transparent.

[Screenshot: I/O usage per database per file query results]

Filed Under: All Articles, SQL Server Performance Tagged With: performance, sql server

How to List CPU Usage Per Database in SQL Server

Posted on May 14, 2016 Written by Andy Hayes 2 Comments

As a DBA there may be a time when you want to try and ascertain how much CPU time each of your databases is consuming. This is useful to complement high CPU investigations or to just try and understand which of your databases overall is the highest CPU consumer over time.

The out-of-the-box reporting that Management Studio provides is sadly missing this report, so we have to run some T-SQL to extract the information.

As with other DMVs, the statistics produced are only available from the time the SQL Server instance was started and can change over the course of the day. It’s important to remember that these are cumulative figures and may not correlate to any current spike in CPU usage caused by some heavy query.

The DMV can only report on the data that is available in the plan cache. If the plan cache is experiencing lots of churn, the report becomes less accurate.

If you want to track this data over time, you will need to capture data at regular intervals and persist it to some kind of logging table or file.

So here is some code to do it:

WITH CPU_Per_Db
AS
(SELECT 
 dmpa.DatabaseID
 , DB_Name(dmpa.DatabaseID) AS [Database]
 , SUM(dmqs.total_worker_time) AS CPUTimeAsMS
 FROM sys.dm_exec_query_stats dmqs 
 CROSS APPLY 
 (SELECT 
 CONVERT(INT, value) AS [DatabaseID] 
 FROM sys.dm_exec_plan_attributes(dmqs.plan_handle)
 WHERE attribute = N'dbid') dmpa
 GROUP BY dmpa.DatabaseID)
 
 SELECT 
 [Database] 
 ,[CPUTimeAsMS] 
 ,CAST([CPUTimeAsMS] * 1.0 / SUM([CPUTimeAsMS]) OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPUTimeAs%]
 FROM CPU_Per_Db
 ORDER BY [CPUTimeAsMS] DESC;

This is what the results look like on my test instance.

[Screenshot: CPU usage per database query results]
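As mentioned above, tracking this over time means persisting snapshots at regular intervals. A minimal sketch, assuming a hypothetical dbo.CpuByDatabaseLog table (the names and types are illustrative) – the INSERT could then be scheduled via a SQL Server Agent job:

```sql
-- A hypothetical logging table; names and types are illustrative.
CREATE TABLE dbo.CpuByDatabaseLog
(
    SampledAt    DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName SYSNAME   NULL,
    CpuTime      BIGINT    NOT NULL  -- cumulative total_worker_time
);

-- The capture step: the same aggregation as the query above.
INSERT INTO dbo.CpuByDatabaseLog (DatabaseName, CpuTime)
SELECT DB_NAME(dmpa.DatabaseID),
       SUM(dmqs.total_worker_time)
FROM sys.dm_exec_query_stats dmqs
CROSS APPLY (SELECT CONVERT(INT, value) AS DatabaseID
             FROM sys.dm_exec_plan_attributes(dmqs.plan_handle)
             WHERE attribute = N'dbid') dmpa
GROUP BY dmpa.DatabaseID;
```

Diffing consecutive samples then gives the CPU consumed per database in each interval, rather than the cumulative totals.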

I hope you found this useful 🙂

Filed Under: All Articles, SQL Server Performance Tagged With: performance, sql server

