
The Best of SQLServerPedia
Your community knowledge base

Visit www.sqlserverpedia.com
Follow us on Twitter: twitter.com/SQLServerPedia

2009 Quest Software, Inc. ALL RIGHTS RESERVED. To learn more about our solutions, contact your local sales representative or visit www.quest.com. Headquarters: 5 Polaris Way, Aliso Viejo, CA 92656, USA. Quest Software and SQL Server are trademarks and registered trademarks of Quest Software, Inc. in the U.S.A. and/or other countries. All other trademarks and registered trademarks are property of their respective owners.


Contents

Examining Query Execution Plans
SQL Server Troubleshooting Tips and Tricks
Shrinking Databases
Restoring File and Filegroup Backups
Killing Sessions with SSIS
Moving SQL Server Logins Between Servers
Configuring Database Files for Optimal Performance
Stored Procedure Execution

Examining Query Execution Plans


You can generate SQL Server execution plans in several ways. Graphical execution plans are used most often, followed by XML execution plans and plain-text execution plans. In all versions of SQL Server, there are two fundamental types of execution plan: the estimated plan and the actual plan. The estimated plan does not require the query to be run, while the actual plan is output from the query engine, showing the plan that was used to execute the query. Most of the time these two plans are identical, but in some circumstances they differ.



This article will explain how to generate both estimated and actual execution plans as well as how to interpret them.

Graphical Execution Plans

Graphical execution plans are accessed through the query window in Management Studio in SQL Server 2005/2008, or through Query Analyzer in SQL Server 2000. To a large degree, the functionality of graphical plans is the same in SQL Server 2000 and SQL Server 2008, but there are some fundamental differences, which are highlighted in this section. All graphical plans are read from right to left and from top to bottom. That's important to know so that you can understand other concepts, such as how a hash join works. Each icon represents an operation. Some operations are the same in the estimated plan and the actual plan, and some are not. Operators are connected by arrows that represent the data feed: the output from one operator and the input for the next. The thickness of an arrow varies with the amount of data it represents: thinner arrows represent fewer rows and thicker arrows represent more rows. Operators represent various objects and actions within the execution plan. A full listing of operators is available in Books Online.

SQL Server 2005/2008

Estimated Execution Plan

There are several ways to generate an estimated execution plan:
1. Select Display Estimated Execution Plan from the toolbar.
2. Right-click within the query window and select Display Estimated Execution Plan.
3. Select the Query menu and then the Display Estimated Execution Plan menu choice.
4. Press Ctrl+L.
When any of these actions is performed, an estimated, graphical execution plan is created for the query in the query window. The query is not executed; it is merely run against the query optimizer within the SQL Server system, and the output from the optimizer is displayed as a graphical execution plan. If objects that don't exist yet (such as temporary tables) are part of the query, the estimated plan will fail.

Actual Execution Plan

An actual execution plan requires the query to be executed. To enable it, use any of the following methods:
1. Select the Include Actual Execution Plan button from the toolbar.
2. Right-click within the query window and select Include Actual Execution Plan.
3. Select the Query menu and then the Include Actual Execution Plan menu choice.
4. Press Ctrl+M.
After the query executes, the actual execution plan will be available in a separate tab in the results pane of the query window.

Execution Plan Example

The primary reason for generating an execution plan is to work through it to understand what is happening in the query and what needs to get fixed. For example, consider the following query:
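The query itself appeared as a screenshot in the original. A representative AdventureWorks query that produces the plan shape discussed below might look like this (the CustomerID value is illustrative):

SELECT soh.SalesOrderID, soh.OrderDate, sod.ProductID, sod.LineTotal
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.CustomerID = 670;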

This query generates the following execution plan:


Start at the top right. There is an Index Seek (NonClustered) against the index [SalesOrderHeader].[IX_SalesOrderHeader_CustomerID]. This feeds data out to a Nested Loops (Inner Join) operator. Working down, you can see a Key Lookup (Clustered) operation against PK_SalesOrderHeader_SalesOrderID; this is a classic key lookup, or what used to be called a bookmark lookup. The data feeds back up to the Nested Loops operator and then down to another Nested Loops operator. Below that is a Clustered Index Seek against the [PK_SalesOrderDetail_SalesOrderID] primary key. Finally, the data flow goes out to the SELECT operator. That's the basic information available within the execution plan; we will explore it further a bit later in the article. Hover the mouse over any operator to see the tool tip for that operation type, showing some of the detail behind the operator.

SQL Server 2000

To generate an estimated execution plan in SQL Server 2000, simply choose Query, Display Estimated Execution Plan. This option is equivalent to setting NOEXEC and SHOWPLAN_ALL on, but displays the execution plan in a graphical format. The query is not executed; SQL Server displays the execution plan chosen by the optimizer. To execute the query and see the actual execution plan, choose Query and then Show Execution Plan. The graphical output in Query Analyzer is extremely helpful: the utility also lets you create and update statistics, and create, modify or drop indexes. If statistics are missing or out-of-date, the affected table or index is shown in red. Getting used to the various icons might take a little while, but if you move your mouse pointer over an icon, a tool tip appears with a brief explanation, so there is no need to memorize the meaning of each one. After looking at the graphical plan you will be able to tell if your query has a problem. The icon you rarely want to see is a table scan, which looks like a table with a blue arrow in the middle of it.

Text Execution Plans

There are a few SET commands that can help you examine the query optimizer's decisions and decide whether they produce the desired results. Just like other SET commands, these can be turned ON or OFF, and they stay in force for the duration of the connection or until you explicitly change the setting.

SET STATISTICS IO ON provides the number of physical reads (reads from disk), the number of logical reads (reads from the memory cache), the scan count, and the number of read-ahead reads (data or index pages placed in cache for the query). For example, consider the following query and the statistics it retrieves:
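The query and its statistics output were screenshots in the original. A representative example, assuming the pubs sample database (the counts shown are illustrative):

SET STATISTICS IO ON;
GO
SELECT * FROM dbo.authors;
GO
-- The Messages tab then reports something like:
-- Table 'authors'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.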

SET NOEXEC ON compiles the query but does not execute it. This is helpful if you are testing a query that might take a long time: instead of running the query each time you make changes to it, you can examine its execution plan first.

SET STATISTICS TIME ON reports the CPU time and elapsed time used to parse, compile and execute a particular query. For example, consider the following query and the messages it produces:
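The query and the resulting messages were also screenshots. A representative version, again assuming the pubs sample database (the timings are illustrative):

SET STATISTICS TIME ON;
GO
SELECT * FROM authors;
GO
-- Representative messages:
-- SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
-- SQL Server Execution Times: CPU time = 0 ms, elapsed time = 1 ms.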

The output above might be somewhat confusing initially. The first timing refers to the time it took to execute the SET STATISTICS TIME ON statement itself, which is too small to measure. The next messages provide the parse and compile time for the GO and SELECT * FROM authors statements. The last message is the one we are most interested in: the actual time spent executing the SELECT * FROM authors command.

SET SHOWPLAN_ALL ON gives you detailed information about the execution plan. The output of SHOWPLAN_ALL is not straightforward, but understanding it lets you see what is going on behind the scenes. The following table describes the columns of the SHOWPLAN_ALL output:

StmtText: Repeats the submitted query (in which case it's not very useful) or contains the physical and logical operations included in the query execution plan.
StmtId: Shows the number of statements issued before the current statement in the current connection.
NodeId: Provides the ID of the node in the query.
Parent: Displays the ID of the parent step of the current node.
PhysicalOp: Shows the physical implementation of the algorithm chosen by the query optimizer. If the row type is not PLAN_ROW, this column is NULL.
LogicalOp: Shows the logical implementation of the algorithm chosen by the query optimizer. If the row type is not PLAN_ROW, this column is NULL.
Argument: Provides additional information about the physical operation. For instance, if a clustered index is being scanned, this column shows the name of the index as well as the index keys.
DefinedValues: Contains a comma-separated list of columns defined in the query, or the list of internal values examined by the query optimizer.
EstimateRows: Shows the estimated number of rows affected by the query.
EstimateIO: Provides the estimated I/O for the operation in this row.
EstimateCPU: Provides the estimated CPU usage for the operation in this row.
AvgRowSize: Provides the average size, in bytes, of the rows passed by this operation.
TotalSubtreeCost: Gives the estimated cost of this operation as well as all of its child operations.
OutputList: Lists the columns in the result set.
Warnings: Contains a comma-separated list of warnings that pertain to the current operation. For instance, it might warn you that the statistics on a particular index being queried are out of date.
Type: Contains the Transact-SQL command type (such as SELECT or UPDATE) for the statements referenced in the query. For rows that show the actual execution plan, this column contains PLAN_ROW.
Parallel: Shows 1 if the operation is running in parallel.
EstimateExecutions: Shows the estimated number of times this operation will be executed to satisfy the current query.

Perhaps the most useful column in the entire SHOWPLAN_ALL output is StmtText, which tells you the type of operation performed: whether it is a table scan, a clustered or non-clustered index scan, and so on. Most of this information is repeated in the PhysicalOp, LogicalOp or Argument columns (whichever is appropriate). Another column to watch is Warnings; it might give you a clue to why your query isn't performing up to your expectations. For example, a warning might state that statistics are out-of-date or that a join predicate is missing.

About the Author

Grant Fritchey works for FM Global, an industry-leading engineering and insurance company, as a principal DBA. He has developed large-scale applications in languages such as VB, C# and Java, and has worked with SQL Server since version 6.0. He has worked in finance and consulting, and for three failed dot-coms. He is the author of Dissecting SQL Server Execution Plans (Simple Talk Publishing, 2008) and SQL Server 2008 Performance Tuning Distilled (Apress, 2009). His online presence includes:
Blog: http://scarydba.wordpress.com/
Twitter: http://twitter.com/GFritchey

Toad for SQL Server Xpert
Analyze execution plans and optimize SQL with Toad for SQL Server. Download a free 30-day trial: http://www.quest.com/toad-for-sql-server/

SQL Server Troubleshooting Tips and Tricks


This article shares some of the tweaks and tools that can make the job of a SQL Server database administrator (DBA) easier.

Indexes

If you use included columns, you know the frustration of figuring out which columns are included in an index. The following stored procedures can help:
- sp_helpindex: a system stored procedure that reports information about the indexes on a table or view
- sp_helpindex2: a rewrite of the sp_helpindex stored procedure, written by Kimberly Tripp
- dba_indexLookup_sp: a custom, non-system stored procedure, written by Michelle Ufford
Take a look at all of these and use the one that best meets your needs.
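For example, a quick check of the indexes on a table (the table name is illustrative):

EXEC sp_helpindex 'Sales.SalesOrderHeader';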

Block Selection

This tip is not specific to SQL Server; it's useful in any Microsoft product. Holding down Alt while you drag your mouse changes your selection behavior to block selection, so you can select and copy a rectangular block of text.

Query Execution Settings

SSMS offers advanced query execution settings that can help prevent unintentional issues in production environments, such as a query that causes locking or blocking. To access these options in SSMS, choose Tools | Options | Query Execution | SQL Server | Advanced. Some suggestions:
- Change SET TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED. This minimizes the impact of your ad-hoc queries by allowing dirty reads. While this can be beneficial in many production environments, make sure you understand the implications of this setting before implementing it.
- Change SET DEADLOCK_PRIORITY to Low. This tells SQL Server to select your session as the victim in the event of a deadlock.
- Change SET LOCK_TIMEOUT to a smaller, defined value, such as 30,000 milliseconds (30 seconds). By default, SQL Server waits indefinitely for a lock to be released; with a timeout specified, your statement aborts once it has waited that long for a lock.
You can make these same setting changes in Visual Studio. The equivalent T-SQL SET statements are sketched below.
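A sketch of the equivalent statements, if you prefer to put them at the top of your ad-hoc scripts instead:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET DEADLOCK_PRIORITY LOW;
SET LOCK_TIMEOUT 30000; -- milliseconds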

Keyboard Shortcuts

To choose a keyboard scheme in SQL Server Management Studio (SSMS), select Tools | Options | Environment | Keyboard. The Standard keyboard scheme comes with several query shortcuts already assigned, and you can add your own. Note that any changes you make to these settings will not take effect until you open a new query window. Here's an example of how you could use such shortcuts:
1. Use Ctrl+4 to find a list of tables.
2. Copy a table name into your query window.
3. Highlight the table name (double-clicking works well) and press Ctrl+3 to view a sample of that table's data.
You may want to remove or change the schema filters if you use schemas other than dbo. The queries behind these shortcuts might look like the sketch below.
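Hypothetical shortcut assignments (entered under Tools | Options | Environment | Keyboard); when you press a query shortcut, SSMS appends any highlighted text to the end of the stored query:

-- Ctrl+3: view a sample of the highlighted table's data
SELECT TOP 100 * FROM
-- Ctrl+4: list tables; adjust the schema filter if you use schemas other than dbo
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'dbo'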

Missing Indexes

If you use SSMS 2008 to execute Display Estimated Execution Plan (Ctrl+L), it will show whether you're missing any indexes. This works even if you connect SSMS 2008 to a SQL Server 2005 instance.

Object Detail Explorer

One of the great updates in SQL Server 2008 is the Object Detail Explorer. For example, you can quickly find the table sizes and row counts of all the tables in a particular database. (If these columns are not visible, right-click the column headers and add them to the display.) The Object Detail Explorer requires SQL Server 2008 Management Studio, but you can connect SSMS 2008 to a 2005 instance.
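If you prefer a query to the UI, a rough T-SQL equivalent for the row counts (SQL Server 2005 and later):

SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(ps.row_count) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON ps.object_id = t.object_id
JOIN sys.schemas AS s ON t.schema_id  = s.schema_id
WHERE ps.index_id IN (0, 1) -- heap or clustered index rows only
GROUP BY s.name, t.name
ORDER BY row_count DESC;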

About the Author

This wiki article was adapted from a blog post by Michelle Ufford. Michelle is a SQL developer and DBA for GoDaddy.com, where she works with high-volume, mission-critical databases. She has more than a decade of experience in a variety of technical roles and has worked with SQL Server for the last five years. She enjoys performance tuning and maintains an active SQL Server blog. Learn more at:
Blog: http://sqlfool.com/
Twitter: http://twitter.com/sqlfool/

Pain-of-the-Week Webcasts
Don't let SQL Server challenges beat you down. Get solutions and learn best practices from these free, educational Pain-of-the-Week webcasts at http://www.quest.com/backstage/pow.aspx



Shrinking Databases

Don't Touch That Shrink Button!

Many of us encounter unnecessary database autoshrinks and scheduled shrink jobs. This article offers insight into what you should be doing about your database sizes.

What Happens When You Shrink a Database?

When you click that shrink button (or leave a database in autoshrink, or schedule a job to perform shrinks), you are asking SQL Server to remove the unused space from your database's files and deallocate it back to the operating system. If you do, there's a good chance that your database will simply grow again, as the majority of non-static databases tend to do. Depending on your autogrowth settings, this growth will probably be necessary, and you will end up shrinking it again. At best, this is just extra work (shrink, grow, shrink, grow), and the resulting file fragmentation is handled by your I/O subsystem. At worst, this causes file fragmentation, interrupting what would have otherwise been contiguous files and potentially causing I/O-related performance problems.

The Difference Between Truncate and Shrink

There's a lot of confusion surrounding the difference between truncate and shrink. You may have truncated your log file, yet you still have no free space and the file hasn't reduced its footprint at all. This is because truncation does nothing to the physical size of the allocated file on the operating system: a shrink operation actually releases space from a file, while a truncation frees up used space within the file. This is why shrinking a log file that is using all of its space won't reduce the file size, and why truncating a log file won't reduce its size either. A truncation has to happen first to make room available for the shrink to work. (This is not recommended as routine practice, however.)
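A minimal sketch of that ordering, assuming a database named YourDb with a log file whose logical name is YourDb_log (both names illustrative):

-- The truncation (a log backup) must happen first to free space inside the file:
BACKUP LOG YourDb TO DISK = N'T:\Backup\YourDb_log.trn';
-- Only then can a shrink return that space to the operating system:
DBCC SHRINKFILE (YourDb_log, 1024); -- target size in MB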

Bad Advice to Avoid

Just delete the log file. This advice can take the form of, "Just stop SQL Server, detach your database, delete the log file, and reattach without the log." This will definitely remove any transactions in your log, and possibly leave your database in a transactionally inconsistent state, meaning there's potential for loss of data or worse. If you are stuck without space for future growth, try a log backup instead. If you must do one last truncation and shrinking after making a full backup, heed the warning to get a real recovery strategy in place.

Set up a nightly job to issue BACKUP LOG WITH TRUNCATE_ONLY. This will remove all transactions from the log. Unfortunately, it will also remove your ability to recover those transactions should your database run into problems. Keep in mind that this command is also deprecated in later versions of SQL Server.

Better Strategies

What are the alternatives to shrinks?

Allocate more space than necessary. Determine what your future data size needs will be, not what they are when the database initially goes live. Based on your needs, create the database size and set the autogrowth to a reasonable number in bytes rather than as a percentage. Then monitor your free space and look at size trending over time to plan for a larger allocation of space if your planning turns out to have been inaccurate. Your SAN team may point out that the free space in the file is just sitting there doing nothing, but you are better off having that extra space than scrambling to allocate space at the last minute.

Don't run out of space on your transaction log. If a database is in full recovery mode, that means you intend to recover to a point in time in the event of a failure, using a combination of full backups and transaction log backups (and possibly differentials). SQL Server understands your intent, and it will not truncate the database's log files (the .LDF files); instead, the files will continue to grow until you take a transaction log backup. If your transaction log growth is out of control, you are likely incurring the cost of full recovery mode (a growing log file, full logging of qualified events, etc.) while gaining none of the benefit. The simple solution is to look at your backup and recovery plan. If you aren't doing log backups, or you just don't understand them, there are plenty of resources to help; it is relatively simple to begin working on a proper backup and recovery strategy and avoid future problems. If you don't need point-in-time recovery, consider the simple recovery model, which truncates the log at certain events. However, do not go straight to simple recovery: analyze your situation and learn about the recovery models so you do what is right for your organization.

Contain your transaction log. If your transaction log is growing out of control, there is a strong possibility that you are in full recovery mode and not backing up your log file on a regular basis. The log then continues to grow until you deliberately back it up (a full backup won't do). This is the expected behavior, since full recovery mode means you want the ability to restore to a point in time; as long as your backups are someplace safe, your losses are limited by the frequency of those backups. Solutions (a sketch of the relevant commands follows this list):
- Set up a log backup schedule that meets your business needs. Search Books Online and understand the recovery models. Figure out the SLAs you are supposed to be supporting, and get your logs backed up on a schedule that matches them. Make sure the backups cover more than your .mdf and .ldf files so they are useful in the event of a failure; you could even send them to tape directly or after a copy. You should see the size of your log files become more manageable.
- Get more space. Maybe you are doing log backups but you still don't have enough space: either your activity is quite high or your allocated space is quite low. If it's the former, try more frequent log backups. If it's both, you may simply need more space for your log files.
- Switch to the simple recovery model. This is not to be done lightly: you will no longer be able to restore to a point in time, only to the last full backup. If this is in line with your SLA and you have no need for point-in-time restores, switch to this model and your log file will truncate at certain intervals.
- Look at your growth ratio while you are adding space or setting up your backups. The default growth rate for a transaction log is 10 percent. How large is your log file, and is 10 percent really the growth rate you want? On the same note, has your log file grown far larger than it needs to be because of poor management? Once you take your first log backup, consider setting a reasonable size, knowing that the log will be truncated on a regular basis. If that size is considerably smaller than the current size, you can try one last shrink of the log file.
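A minimal sketch of the commands behind these strategies, assuming a database named YourDb (names, paths and sizes illustrative):

-- Back up the log on a schedule that matches your SLA:
BACKUP LOG YourDb TO DISK = N'T:\Backup\YourDb_log.trn';

-- Or, if point-in-time recovery is genuinely not required:
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- Replace the default 10 percent autogrowth with a fixed increment:
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 256MB);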

About the Author

This wiki article was adapted from a series of blog posts by Mike Walsh. Mike is an experienced SQL Server professional who has worked almost exclusively with SQL Server in various capacities for nearly 10 years. He has fulfilled the roles of DBA, developer, business analyst and performance team lead, but he always works his DBA experience into each role. Currently he is the principal DBA and SQL Server subject matter expert for a global insurance company. He also assists organizations with SQL problems through his consulting firm, StraightPath Solutions. Learn more at:
Blog: http://www.StraightPathSQL.com/blog
Twitter: http://twitter.com/mike_walsh

Restoring File and Filegroup Backups


Filegroup Restores

With small to medium databases, backup and restore operations are straightforward: you take full, differential and transaction log backups and restore them in the sequence they were taken. However, when you have a terabyte-caliber database, you face a different scenario, because restoring a full backup can take many hours, during which no user can connect to the database. This can be particularly frustrating when the majority of your data is static and only a small portion is updated daily. Fortunately, SQL Server 2005 Enterprise Edition (and later editions) provides an answer to this challenge: piecemeal restore. Previous versions of SQL Server allowed database administrators to back up individual files and filegroups instead of the entire database, but before the database became available, you had to restore every file and filegroup. That reduced the total backup time but didn't help with database availability.

Get Online Faster When Disaster Strikes: with piecemeal restore in SQL Server 2005, the database can be available for querying as soon as its PRIMARY filegroup has been restored.

You can continue restoring other files and filegroups while users are querying the database; only the portion of the database being restored is unavailable to them. In fact, if you don't have the backup files immediately available, or you don't want to impose any extra overhead on the system, you can wait as long as you want before restoring the rest of the filegroups. The filegroups that haven't been restored simply remain offline.

For example, suppose the northwind database has two filegroups: primary and secondary. The primary filegroup contains the majority of the data users are interested in; the secondary filegroup was added recently and doesn't contain much data. You have two full backups, one for each filegroup. The primary filegroup contains two data files, and the secondary filegroup contains a single data file. For simplicity, assume you take only full backups (no differential or transaction log backups). To restore just the primary filegroup, execute a statement like the following (you have to specify the MOVE clauses only if you want to change the location of the data files and log files):
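The statement itself appeared as a screenshot. A representative sequence, with backup paths and logical file names assumed for illustration:

RESTORE DATABASE northwind2
    FILEGROUP = N'PRIMARY'
    FROM DISK = N'C:\Backup\northwind_primary.bak'
    WITH PARTIAL, NORECOVERY,
         MOVE N'northwind_data1' TO N'D:\Data\northwind2_data1.mdf',
         MOVE N'northwind_data2' TO N'D:\Data\northwind2_data2.ndf',
         MOVE N'northwind_log'   TO N'E:\Log\northwind2_log.ldf';

RESTORE DATABASE northwind2 WITH RECOVERY;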


At this point, the northwind2 database is available for reading and writing. To restore all read-write filegroups (as opposed to just the primary filegroup), we could modify the statement slightly:
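Again a sketch (backup file name assumed); READ_WRITE_FILEGROUPS restores the primary filegroup plus every read-write filegroup:

RESTORE DATABASE northwind2
    READ_WRITE_FILEGROUPS
    FROM DISK = N'C:\Backup\northwind_full.bak'
    WITH PARTIAL, NORECOVERY;

RESTORE DATABASE northwind2 WITH RECOVERY;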

At some point when user activity on the system is minimal, execute a statement similar to the following to restore the secondary filegroup:
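For instance (backup path assumed):

RESTORE DATABASE northwind2
    FILEGROUP = N'SECONDARY'
    FROM DISK = N'C:\Backup\northwind_secondary.bak'
    WITH RECOVERY;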

Note that users can continue accessing the database while the secondary filegroup is being restored.

Filegroup Design Tips and Tricks

Since the primary filegroup must come online first, keep the primary filegroup relatively small. If the database is one terabyte, don't create a 900 GB filegroup as the primary, because filegroup restores then won't be much faster than conventional restores. Instead, create a primary filegroup with the most urgently needed tables: configuration tables, user security, and whatever your application absolutely must have in order to run. Then create a secondary filegroup with the most commonly queried tables: customers, items, warehouses and employees, whatever data is relatively small and helpful. Finally, create additional secondary filegroups with large tables that are not queried as frequently, like archived data or reporting tables.

Plan Ahead: Design Your Filegroups for Faster Restores

Things get more complicated when you want to restore a single filegroup from a backup. Let's say we have a one-terabyte data warehouse. If our SQL Server goes down, we can't restore one terabyte of data fast enough to keep our jobs, but we can't afford a hot standby system either. SQL Server 2005's filegroup backups and restores give us a way around that. First, long before disaster strikes, we break the data up into a series of filegroups for easier management:
- A 500 GB filegroup with old sales data (more than one year old, never changing), which we've made read-only
- A 400 GB filegroup with old payroll data (also more than one year old)
- A 100 GB primary filegroup with current sales and payroll data (from the last year), which is where all current data goes
When disaster strikes, here's what SQL Server 2005's filegroup restore lets us do:
1. Restore the primary filegroup with the current data in a matter of minutes, and put the database online. Users can query, but if they try to query data from more than a year ago, they'll get an error message.
2. Restore the old sales filegroup while users are already querying the current data. We restore this before payroll, since payroll needs its data only every two weeks, while salespeople may want to query old sales data sooner.
3. Restore the old payroll filegroup, bringing the database fully online.
Keep in mind that we're talking about complete disasters here, like when the server craters altogether and we have to start with a restore of our primary filegroup.

Restoring a Filegroup from a Full Backup

Continuing the example above, let's say we want to restore the 400 GB old payroll filegroup out of a full backup chosen at random from last week, without any matching transaction logs to bring that filegroup up to speed. We know it holds only old data, so we're sure nothing has changed, and we just want SQL Server to restore it. That won't work, but to understand why, we have to zoom back out and look at other ways filegroups can be configured. Forget the nice, clean breaks in our example above; here are some different, completely valid ways to configure multiple filegroups:

Scenario A: load balancing data vs. indexes. The primary filegroup has the data in it (the tables), and an index filegroup has the indexes. If we restored the index filegroup after making changes to the primary (data) filegroup, the indexes would be garbage: they would point at records that may not even exist in the primary filegroup, or vice versa, we might have records in the primary filegroup without matching entries in the index filegroup.

Scenario B: load balancing types of tables. The primary filegroup has our OrderLineItems table, and a secondary filegroup has our Orders table. If we restored the secondary filegroup from a full backup without all the matching transactions, we might have OrderLineItems rows with no matching Orders rows.

To eliminate these risks, SQL Server won't let you pluck a filegroup out of a full backup unless you have the matching log to bring it up to speed with the rest of your database. You might have a perfect design and say, "I know for sure nothing changed, believe me!", but SQL Server won't take your word for it. You either have to use the disaster recovery scenario we talked about earlier (restoring the primary filegroup first, followed by the secondary filegroups), or else you have to have the complete log chain to bring your secondary filegroup up to speed with the rest of the database.

About the Author

Brent Ozar is the Editor-in-Chief at SQLServerPedia.com and a SQL Server domain expert with Quest Software. Brent has a decade of broad IT experience, having performed systems administration and project management before moving into database administration. In his current role, Brent specializes in performance tuning, disaster recovery and automating SQL Server management. Previously, Brent spent two years at Southern Wine & Spirits, a Miami-based wine and spirits distributor. Brent has experience conducting training sessions, has written several technical articles, and blogs prolifically at http://www.BrentOzar.com. His online presence includes:
Email: brent.ozar@quest.com
Blog: http://www.brentozar.com/
Twitter: http://twitter.com/brento/

LiteSpeed for SQL Server
Restore your filegroups fast with LiteSpeed for SQL Server. Download a free 30-day trial at http://www.quest.com/litespeed-for-sql-server/

Killing Sessions with SSIS


We all know about the query from hell: the one executed by a departmental user that pulls the entire contents of the cube down into Excel and brings your server to its knees at the worst possible moment. What can you do about it? You can set a general timeout on all queries run against Analysis Services using the ServerTimeout property in msmdsrv.ini, but then you have to accept that every query could time out after, say, two minutes. Alternatively, you can wait for users to report that the server is really slow, then look at what's running and kill sessions manually. Neither option is satisfactory. The ideal solution would be to kill sessions automatically while applying rules such as:
- If the query has run for more than 30 seconds and the user is a departmental user, kill it
- If the query has run for more than five minutes and the user is the CEO, send the DBA an email
- If the DBA role is running a long query, do nothing
What better way to implement this logic than in a SQL Server Integration Services (SSIS) package? Here is a proof of concept:
1. First, you need a way of finding the sessions you want to kill. You can find a list of currently executing commands by using the following AS2008 DMV query: select * from $system.discover_commands. You may need more information to decide whether to kill a session, so it can also be useful to run two further DMV queries to learn more about sessions and connections: select * from $system.discover_sessions and select * from $system.discover_connections.
2. Run each of these three queries in an OLE DB Source in your SSIS data flow and join the result sets. You can then implement the kill/no-kill logic in a Conditional Split, using an expression such as (COMMAND_ELAPSED_TIME_MS > 30000) && ([SESSION_USER_NAME] != "MyPC\\ChrisWebb"), and store the SPIDs of the sessions you want to kill in a Recordset destination.
3. With the resulting recordset stored in a variable, loop over it in your control flow and use an Analysis Services Execute DDL task to run the XMLA Cancel command against each SPID, similar to the sketch below.
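The XMLA in the original appeared as a screenshot; a representative Cancel command looks like this (the SPID value would come from the recordset at run time):

<Cancel xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <SPID>12345</SPID>
</Cancel>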

4. The last step is to schedule the package to run frequently, perhaps every 30 to 60 seconds, using SQL Server Agent. It is very easy to do.

Of course, you could add a lot more functionality to this basic package. For example, you could send an e-mail to each user whose session is killed explaining what has happened, or kill a session only if other users are running queries at the same time.

About the Author

This wiki article was adapted from a blog post by Chris Webb. Chris is an independent consultant specializing in SQL Server Analysis Services cube design, tuning and troubleshooting, and the MDX query language. He's a co-author of MDX Solutions with Microsoft SQL Server 2005 Analysis Services and Hyperion Essbase and a regular speaker at user groups and conferences in the UK and Europe. He can be contacted via his company web site, Crossjoin.co.uk. Learn more at:
Blog: http://cwebbbi.spaces.live.com/

Moving SQL Server Logins Between Servers

The Challenge

A very common challenge for a SQL Server DBA is moving SQL Server logins between servers. When you recreate a SQL Server login (not a Windows login), it gets a new security ID (SID) by default, even though it has the same user name and password. Issues then arise when you restore a database from another server: you can't access the database; if you try to create the user entry in the database, you get an error saying it already exists; yet the user doesn't show up when you list the users in the database. Here is an example. First, create a database and a login, and add the user to the database:
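The original code appeared as screenshots; a representative version, with the database, login and password invented for illustration:

CREATE DATABASE TestDb;
GO
CREATE LOGIN TestLogin WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
USE TestDb;
CREATE USER TestLogin FOR LOGIN TestLogin;
GO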

Next, detach the database, and then drop and recreate the login:
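Continuing the hypothetical example:

USE master;
GO
EXEC sp_detach_db 'TestDb';
GO
DROP LOGIN TestLogin;
GO
-- Recreating the login generates a brand-new SID:
CREATE LOGIN TestLogin WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO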

If we reattach the database, we now have a new login whose SID is different from the SID stored for the user in the database, even though the names match. This is similar to what happens when you restore a database on another server and recreate the login there. If we then try to use the login to access the database, or try to create the user, neither will work.

The Standard Approach

The standard resolution for this has been to use sp_change_users_login. It has an option to list any mismatched logins and database users: those with the same names but different SIDs. It then offers an option to fix the mismatch, which it does by updating the SID stored in the database user to match the login:
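For example, with the hypothetical names used above:

-- List database users whose SIDs do not match any login:
EXEC sp_change_users_login 'Report';
-- Re-map the database user to the login of the same name:
EXEC sp_change_users_login 'Auto_Fix', 'TestLogin';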

In Service Pack 2 of SQL Server 2005, new syntax was introduced to deal with this:
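A sketch with the same hypothetical names:

ALTER USER TestLogin WITH LOGIN = TestLogin;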

Issues with the Standard Approach

There are problems with this standard approach: it fixes the symptom temporarily or, at worst, propagates it to other servers. It isn't the database user's SID that needs fixing; it's the login's SID. If the login's SID were correct, there would be no problem with copying the databases around. Here are a couple of common scenarios:
- A database is restored from another server (or a reinstalled server)
- The logins that use the database need to be recreated

A Workaround

A workaround for this problem is to specify the SID value when creating the login in T-SQL; it is an optional parameter. If you provide the same value the login has on the other server, the problem never arises. So instead of executing sp_change_users_login or ALTER USER as in the previous example, we could have done the following:
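A sketch, using the hypothetical names from earlier; the SID shown is a placeholder for the value you capture on the source server:

-- On the source server, capture the login's SID:
SELECT name, sid FROM sys.sql_logins WHERE name = 'TestLogin';

-- On the target server, recreate the login with that SID:
CREATE LOGIN TestLogin
    WITH PASSWORD = 'Str0ngP@ssw0rd!',
         SID = 0x91AE942D66D14A94B975B633BE37BBCB; -- placeholder: use the captured value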

The upside of this approach is that it's a permanent fix: the next time you restore the database, you won't have to fix it again.

About the Author

This wiki article was adapted from a blog post by Greg Low. Greg is an internationally recognized consultant, developer and trainer. He has been working in development since 1978 and holds a Ph.D. in computer science and a host of Microsoft certifications. Greg is the country lead for Solid Quality, a SQL Server MVP, and one of only three Microsoft regional directors for Australia. Greg also hosts the popular SQL Down Under podcast (www.sqldownunder.com), organizes the SQL Down Under Code Camp, and co-organizes CodeCampOz. He is a board member of PASS (the Professional Association for SQL Server). He speaks regularly at SQL Server events around the world. Learn more at:
E-mail: glow@solidq.com
Twitter: http://twitter.com/greglow
Web: http://www.sqldownunder.com

SQLServerPedia.com
Need a quick tutorial on backup and recovery or performance tuning? Receive priceless information through SQLServerPedia's video tutorials to help you get started with DBA tasks. Visit www.sqlserverpedia.com


Configuring Database Files for Optimal Performance

Breaking Up SQL Server Databases Into Multiple Files

SQL Server databases consist of a few file types:
- MDF: the first data file (a database has exactly one)
- NDF: additional data files (more can be added as needed)
- LDF: the log files (a database can have any number; all carry the .ldf suffix)
For performance tuning, DBAs may want to add data or log files. This article addresses when to add data files, when to add log files, and when to leave well enough alone.

Configuring Databases Smaller Than 100 GB

Generally speaking, it doesn't make sense to break small databases up into multiple files until you can conclusively prove that there's an I/O bottleneck that would be solved by dividing the database files up.

Configuring Partitioned Databases

SQL Server 2005 introduced partitioning: splitting tables and indexes up into multiple partitions, with different sets of data going into different filegroups. For example, a data warehouse's 500-million-row sales table might be partitioned by year in order to keep the most recent data on faster, more expensive drives; or it might be partitioned by state in order to facilitate faster data loads, with data loaded by section of the country. Partitioning is outside the scope of this article, but it affects file configuration because each partition is usually stored in its own filegroup. For more information about how to configure partitioned databases, read the Partitioning articles on SQLServerPedia.com.

TempDB Database Configuration

Ideally, have one TempDB data file per physical core: for a server with eight cores, that would be one .mdf file and seven .ndf files. TempDB log files do not need the same treatment; stick with the standard single TempDB log file, because log files are filled sequentially, not round-robin. The TempDB files should not be on the system (C:) drive: if TempDB grows out of control, perhaps due to a large sort query, the system can run out of hard drive space and fail to start. The script below moves TempDB from its current location to a folder on the T: drive; change the drive letter and folder location to suit your system. The script uses only a 1 GB file size because of an odd behavior in SQL Server: it checks the current file location for free space instead of the new one. If you specify a 100 GB TempDB data file on the T: drive (which has 110 GB free), SQL Server checks the current location (C:) for 100 GB of free space, and if that space doesn't exist, the script fails. Therefore, use a small 1 GB size first; then restart SQL Server for the changes to take effect, and alter the file to its full, desired size.
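The script itself was a screenshot in the original. A representative version, assuming the default logical file names (tempdev and templog) and a T:\TempDB folder:

USE master;
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf', SIZE = 1GB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf', SIZE = 1GB);
GO
-- After restarting SQL Server, grow the file to its full, desired size:
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 100GB);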

Stored Procedure Execution

Executing Stored Procedures

Permissions

The permission to execute a stored procedure defaults to its owner. However, you can use the GRANT statement to give other users permission to execute it, as follows:
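For example (the procedure and user names are illustrative):

GRANT EXECUTE ON dbo.usp_GetAuthors TO SomeUser;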

Execution Options

Generally, stored procedures are executed with EXECUTE procedure_name syntax, though if the stored procedure call is the first statement in the batch, the EXECUTE keyword is not required. As an alternative, you can use the sp_executesql system stored procedure to call a user-defined (or system) stored procedure; in fact, sp_executesql can be used to run any statement or batch, not just a stored procedure. Executing SQL statements that are built on the fly is referred to as dynamic SQL, and you can use either sp_executesql or the EXECUTE command to run dynamic SQL statements. The system stored procedure sp_executesql behaves very much like the EXECUTE command, but it offers two advantages:
- Parameters can stay in their native data types. With EXECUTE, you have to pass a string, so everything must be converted to string data types.
- The query optimizer is more likely to reuse existing execution plans when you run the same query with different parameters, because the text of the query does not change with sp_executesql. With EXECUTE, you pass a different string each time.

Example 1

The following shows how to build and run a query string dynamically with sp_executesql:
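The original code was a screenshot; a representative version against the pubs sample database:

DECLARE @sql nvarchar(100);
SET @sql = N'SELECT * FROM dbo.authors';
EXEC sp_executesql @sql;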

Or you can use EXEC syntax to accomplish the same result:
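A sketch of the EXEC equivalent:

DECLARE @sql varchar(100);
SET @sql = 'SELECT * FROM dbo.authors';
EXEC (@sql);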

Either command will return all rows from the authors table, and since no parameter values are involved, both queries perform exactly the same.

Example 2

Now let's consider another situation: we'd like to return sales where the quantity exceeds a particular value. We could do this with the EXEC command as follows:
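A sketch; note the explicit conversion of @qty into the string:

DECLARE @qty smallint, @sql varchar(200);
SET @qty = 30;
SET @sql = 'SELECT * FROM dbo.sales WHERE qty > ' + CONVERT(varchar(10), @qty);
EXEC (@sql);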

On the other hand, sp_executesql does not require explicit conversion of the @qty variable's data type. Instead, you provide the data type as well as the parameter name within the sp_executesql call, as follows:
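A sketch of the parameterized form:

DECLARE @qty smallint;
SET @qty = 30;
EXEC sp_executesql
    N'SELECT * FROM dbo.sales WHERE qty > @qty',
    N'@qty smallint',
    @qty = @qty;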

Although both of the above queries return the same results, the second one is more efficient. Notice that sp_executesql requires the statement parameter to be of a Unicode data type.

Example 3

You can pass system or user-defined functions to EXEC or sp_executesql. For instance, the following statements behave identically, first with sp_executesql and then with EXEC:
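A sketch using a system function:

-- Functions with sp_executesql:
EXEC sp_executesql N'SELECT GETDATE()';
-- Same thing with EXEC:
EXEC ('SELECT GETDATE()');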

Example 4

Some limitations apply to both EXEC and sp_executesql. For instance, a local variable declared in the calling batch is not visible within the string passed to either command. The following example will fail:
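A sketch of the failure:

DECLARE @qty smallint;
SET @qty = 30;
-- Fails: @qty is not declared within the dynamic SQL's own scope
EXEC ('SELECT * FROM dbo.sales WHERE qty > @qty');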
SQL Server will inform you that you must declare the @qty variable before using it. Similarly, any temporary objects created within dynamic SQL exist only for the duration of the dynamic SQL's execution and are not visible outside its scope.

Changing the Database Context

sp_executesql behaves identically to EXEC when it comes to changing the database context: if you change the database context within a stored procedure or batch executed with sp_executesql or EXEC, you will be back in the database where you started as soon as the module finishes executing.

SET Commands

Similarly, SET commands issued within dynamic SQL run by EXEC or sp_executesql do not affect the calling block of code; they are effective only while the dynamic SQL is executing. On the other hand, SET commands used in a batch prior to calling the dynamic SQL do affect how the dynamic SQL executes.

SQLServerPedia.com
Boost your street cred in the SQL Server community by contributing or revising articles. See how by checking out: http://sqlserverpedia.com/wiki/How_To_Help
