
T-SQL Cookbook: Microsoft SQL Server 2012 Enhancements
Tobias Ternstrom, Senior Program Manager Lead

Okay, welcome to this presentation on the T-SQL cookbook and what's new in Microsoft SQL Server 2012. I'm Tobias Ternstrom, and I'm a lead program manager in the SQL Server Engine team, and I'll fill you in on what we've been doing over the last few years for this release.

We'll start by going through a couple of the new query and schema constructs, slide into error handling improvements, new functions that we've been adding, robust metadata discovery, as well as improvements that we've made to Dynamic SQL.

So, first off, we added support for simplified paging. This is a typical web use case where you want the first set of rows to be displayed on the first page, the second set of rows on the next page, and so on. It basically allows you to specify the number of rows to skip in the new OFFSET clause and the number of rows to fetch after the skipped rows in the FETCH NEXT or FETCH FIRST clause. Let's take a quick peek at this. If I just write a simple query, SELECT from the Sales.Customer table in AdventureWorks, we can add a simple WHERE clause so we only want the ones in TerritoryID 1. And now I'd like to page over this. So, I'll say, okay, OFFSET 10 ROWS, which means skip 10 rows. If I try to execute this now, it will fail, and this is because you have to specify an ORDER BY clause. And this is mainly because paging doesn't really make sense if you don't have an order for the rows. We don't enforce that the ORDER BY clause actually has to make sense, so you can do something like ORDER BY TerritoryID, but obviously then your pages would not be strictly ordered. So, if you get the next page, depending on what query plan we pick, you may not actually get the second page. In order to make sure that you're actually moving over something that makes more sense, you would want to order by something that at least includes a unique column. So, I could say, hey, sort by StoreID, and then I add AccountNumber or maybe CustomerID, which is unique. So, I skip the first 10 rows, and then I say, okay, now fetch me the next 10 rows only, and I get the second page. Obviously I could use expressions or variables here. So, I could declare @Skip and give it a value of 20, and now I can say this is the number of rows you should be skipping, and the same works for the fetch.
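Pieced together, the demo query looks something like this; a minimal sketch against the AdventureWorks Sales.Customer table:

    DECLARE @Skip INT = 20, @PageSize INT = 10;

    SELECT CustomerID, AccountNumber
    FROM Sales.Customer
    WHERE TerritoryID = 1
    ORDER BY StoreID, CustomerID        -- include a unique column so pages are stable
    OFFSET @Skip ROWS
    FETCH NEXT @PageSize ROWS ONLY;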

Note that this is not the same thing as a cursor. So, if there are inserts, updates, deletes happening in between your queries, you're not guaranteed to look at the same set of data.

Secondly, we also added support for UTF-16 in SQL Server. And this is basically an extension for us to support a wider set of Unicode characters. Before, the NVARCHAR and NCHAR types in SQL Server supported UCS-2, which means every character is encoded using 2 bytes. Now with the extensions that have been happening to Unicode, this doesn't actually encompass all possible characters. So, there have been additions known as supplementary characters, encoded as surrogate pairs, which include things like ancient scripts, music notation, math symbols, and so on. If you tried to use these in SQL Server before SQL Server 2012, we would happily store them for you and retrieve them, there would be no problems; we would just look at them as two separate characters. The problem occurs when you try to use our string functions on them. So, now we correctly handle these characters. And I have one of these characters here. If we just try and pass it to the server, you'll see everything looks okay. So, we didn't do anything to the character. But now if I try to say, okay, give me the first character in this string, let's say we do this and pretend it's an H, we get something weird back, because we gave you the first two bytes of that character. Now, if I instead collate this and use a collation that supports this, so this is Finnish_Swedish, just because I happen to be from Sweden, from the _100 collation set that we created in SQL Server 2008, CI for case insensitive, AI for accent insensitive, and then I say SC for supplementary characters, and now you will see we handle this correctly. And obviously these SC collations you can use on columns, databases and so on. So, if you're interested in doing string manipulation on top of these types of characters, you should make sure to use a supplementary character collation.
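Here is a small sketch of that demo; it assumes the Finnish_Swedish_100_CI_AI_SC collation described above and uses the musical G clef (U+1D11E), a character encoded as a surrogate pair:

    DECLARE @s NVARCHAR(10) = N'𝄞HELLO';

    SELECT LEFT(@s, 1) AS FirstCharUcs2;       -- returns only half of the surrogate pair
    SELECT LEFT(@s COLLATE Finnish_Swedish_100_CI_AI_SC, 1) AS FirstCharSc;  -- the whole character
    SELECT LEN(@s) AS LenUcs2,                 -- 7: counts the pair as two characters
           LEN(@s COLLATE Finnish_Swedish_100_CI_AI_SC) AS LenSc;  -- 6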

Next we added support for sequence generators, and this is something that's been in the ANSI standard for a while now. It's something that you will find in other database software, and it's similar to what we had in the IDENTITY property on a column. The major difference here is that the IDENTITY property is always tied to a specific column in one table, you can't have more than one of these columns in the same table, and you can't share identity values across tables. So, if you want a central generator of sequence numbers, you can instead create a sequence generator object and use it to populate your tables. Basically it separates the number generation part from the storage. And here we have an example of creating a simple sequence. We have a set of properties that you can specify. So, here we say, hey, generate integers and please start at 10,000, and increment by one. And you can obviously say increment by a negative number if you wanted to decrement. And then you use the NEXT VALUE FOR function to get the next value from the sequence generator. In this example you can see that we're now generating numbers for both employees and contractors using the same sequence. So, now you know that employee ID and contractor ID are actually unique across both of these tables.
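A sketch of that example; the Employee and Contractor tables here are hypothetical stand-ins for the ones on the slide:

    CREATE SEQUENCE dbo.PersonSeq
        AS INT
        START WITH 10000
        INCREMENT BY 1;

    CREATE TABLE dbo.Employee   (EmployeeID   INT PRIMARY KEY, Name NVARCHAR(50));
    CREATE TABLE dbo.Contractor (ContractorID INT PRIMARY KEY, Name NVARCHAR(50));

    -- One generator feeds both tables, so the IDs are unique across them
    INSERT INTO dbo.Employee   VALUES (NEXT VALUE FOR dbo.PersonSeq, N'Alice');
    INSERT INTO dbo.Contractor VALUES (NEXT VALUE FOR dbo.PersonSeq, N'Bob');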

And this is just what properties we support. So, you can specify the type, and we support all integer types, as well as user-defined integer types, up to decimal(38,0). You can specify again the starting point, the increment, what the minimum and maximum values for the sequence are, and whether or not it should cycle. So, if it hits the end of the sequence, depending on whether it's an ascending or descending sequence, what should it do; should it give you an error message since you reached the end, or should it cycle back to the max or min value? The cache option basically specifies what type of performance you're looking for. Once we generate a sequence number it's been used, and if you roll back you don't get it back, for example. The interesting thing is what happens if you shut down the service: will you actually get holes in your sequence or not? So, when you restart again, will you still have the values in the sequence generator that you didn't use, or will they be lost? If you specify a cache size of 100, it means that every 100 values we persist to disk, and we obviously guarantee that we don't generate duplicates across this. But it also means that up to 100 values can be lost in case of an unclean shutdown of the server. By default a cache size is used. We don't actually publish what the default is because we want to be able to change it if necessary, but by default caching is on. So, here I can go ahead and create the sequence s1, and then we can try and use it. And you can see that the default type seems to be bigint, a 64-bit integer, and we start at the beginning of the domain for the type, so at -9,223,372,036,854,775,808, right? And obviously I can use it in a query as well. So, let's say we query the customer table, and now we keep generating new values.
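A sketch of that demo, plus a second, hypothetical sequence with the properties spelled out:

    -- The demo's defaults: bigint, starting at the minimum of the type
    CREATE SEQUENCE dbo.s1;
    SELECT NEXT VALUE FOR dbo.s1;                                  -- -9223372036854775808
    SELECT NEXT VALUE FOR dbo.s1, CustomerID FROM Sales.Customer;  -- a new value per row

    -- Every property made explicit (a separate, hypothetical sequence)
    CREATE SEQUENCE dbo.s2
        AS INT
        START WITH 10000
        INCREMENT BY -1    -- descending sequences work too
        MINVALUE 1
        MAXVALUE 10000
        CYCLE              -- wrap around instead of raising an error at the boundary
        CACHE 100;         -- up to 100 unused values may be lost on an unclean shutdown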

I can drop my sequence and create it again, saying now START WITH 1. And now we can see that it doesn't start at the beginning of the domain of the type anymore. NEXT VALUE FOR is a new type of function for SQL Server. It's a function with side effects, basically changing the state of the server or the database every time we execute it. We only allow it in places where we can guarantee that if you ran this query, independent of the query plan, you would consume the same number of values. So, in a SELECT statement you can only use it in the column list. If I try to say WHERE this is greater than 2, now how many times will we actually execute the function? It's unclear. If we add another filter here or another predicate, now how many times will we execute it? So, if you try to do this, we'll fail the statement and say, hey, this is not allowed in these places of the query. But you can use it in the column list, you can use it in an UPDATE, and you can use it in the VALUES of an INSERT or in an INSERT with SELECT. You can obviously not use it in a DELETE.
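Roughly what that part of the demo looks like:

    DROP SEQUENCE dbo.s1;
    CREATE SEQUENCE dbo.s1 START WITH 1;

    -- Allowed: the column list of a SELECT
    SELECT NEXT VALUE FOR dbo.s1 AS SeqVal, CustomerID
    FROM Sales.Customer;

    -- Not allowed: a predicate; this statement fails
    -- SELECT CustomerID FROM Sales.Customer
    -- WHERE NEXT VALUE FOR dbo.s1 > 2;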

We also provide a stored procedure for you to generate a range of sequence values, and that's the sp_sequence_get_range stored procedure. So, if you do sp_sequence_get_range and specify the s1 sequence, you can say give me 1,000 values. And then we return through output parameters: this is the first value in the range that you got, this is the last value, this is the number of times you cycled, and these are the properties of the sequence. In this way you can use this to either populate a sequence in another database on another server, in order to have them generate unique values across the servers, or you could use it, for example, to populate a state machine that inserts unique values into a text file before you import it into the database. We also support using NEXT VALUE FOR inside of default constraints. So, if you want the comfort or ease of use of identity, where you just specify it for a column and don't have to think about it when you do your inserts, you can now create a sequence and reference the NEXT VALUE FOR function in a default constraint for the column.
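A sketch of both: reserving a range through the output parameters, and an identity-style default constraint (the Widget table is hypothetical):

    DECLARE @first SQL_VARIANT, @last SQL_VARIANT, @cycles INT;

    EXEC sys.sp_sequence_get_range
         @sequence_name     = N'dbo.s1',
         @range_size        = 1000,
         @range_first_value = @first  OUTPUT,
         @range_last_value  = @last   OUTPUT,
         @range_cycle_count = @cycles OUTPUT;

    SELECT @first AS FirstValue, @last AS LastValue, @cycles AS TimesCycled;

    -- Identity-like comfort: the column fills itself in on INSERT
    CREATE TABLE dbo.Widget
    (
        WidgetID INT NOT NULL
            CONSTRAINT DF_Widget_ID DEFAULT (NEXT VALUE FOR dbo.s1),
        Name NVARCHAR(50) NOT NULL
    );
    INSERT INTO dbo.Widget (Name) VALUES (N'Sprocket');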

We still obviously support identity. Identity is also part of the ANSI standard for SQL. So, there's no reason to go and replace identity with sequence generators unless you want this behavior of being able to, one, retrieve values without actually inserting them, or, two, have uniqueness across tables.

We also did a lot of work in extending our support for window functions, and the functionality we've added is also part of the ANSI SQL standard, so it's SQL:2008 compliant. By the way, the query paging I showed earlier is also part of the ANSI SQL standard. That's why we went with the syntax of OFFSET and FETCH NEXT or FIRST. Window aggregates were actually introduced in SQL Server 2005, where the OVER clause had the partition clause and the order by clause for analytical functions. Now we're also extending it to support the frame clause, where you can specify, within the partition in the query, which rows should actually be included in the calculation. We're also adding a few analytic functions, the cumulative distribution (CUME_DIST) and PERCENT_RANK, as well as inverse distribution functions like the continuous percentile (PERCENTILE_CONT) and the discrete percentile (PERCENTILE_DISC). And basically with the continuous percentile you can get the median of a query. You can say give me the .5 continuous percentile, and that returns you the median. We added LAG and LEAD functions, so that you can ask, hey, what is the value in the previous row for this column, and FIRST_VALUE and LAST_VALUE, what is the first value within this window and what is the last value.
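A small sketch of a few of these against AdventureWorks:

    SELECT SalesOrderID, SalesOrderDetailID, UnitPrice,
           PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY UnitPrice)
               OVER (PARTITION BY SalesOrderID)                             AS MedianUnitPrice,
           LAG(UnitPrice) OVER (ORDER BY SalesOrderID, SalesOrderDetailID) AS PrevUnitPrice,
           FIRST_VALUE(UnitPrice) OVER (PARTITION BY SalesOrderID
                                        ORDER BY SalesOrderDetailID)       AS FirstInOrder
    FROM Sales.SalesOrderDetail;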

So, let's take a closer look at this frame clause. Let's say I have a query now, and I'll just use orders as an example. So, FROM Sales.SalesOrderHeader, and I'll join this with Sales.SalesOrderDetail. This gives me all order rows basically. And I'll go ahead and say, okay, please give me this sorted by OrderDate and SalesOrderID. And then I'd like to have, let's just go with UnitPrice as a simple example, and SalesOrderDetailID. So, what if I would now like to know something more about the aggregate over this without removing the details from the query? Traditionally, if I want to use an aggregate function like SUM or AVG, I'd have to use a GROUP BY clause. Starting with SQL Server 2005 we supported things like this, where you could say, okay, give me the SUM of UnitPrice OVER (), and that's the total sum for the entire query. TotalSum sounds a bit odd, so let's call it QuerySum. So, this is the full sum for the query, and the window I'm looking at is the full query, including my WHERE expressions and whatnot. What I could also say is I want to partition this, say PARTITION BY, let's say, SalesOrderID, so by order. So, let's add the SalesOrderID here. Now you can see that for this order, this is the particular row's unit price and this is the full price for the partition. So, let's call that PartitionSum. And this is the full sum for the whole query. So, this you could do in SQL Server 2005.
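That pre-2012 part of the demo looks roughly like this:

    SELECT h.OrderDate, d.SalesOrderID, d.SalesOrderDetailID, d.UnitPrice,
           SUM(d.UnitPrice) OVER ()                            AS QuerySum,
           SUM(d.UnitPrice) OVER (PARTITION BY d.SalesOrderID) AS PartitionSum
    FROM Sales.SalesOrderHeader AS h
    JOIN Sales.SalesOrderDetail AS d
        ON d.SalesOrderID = h.SalesOrderID
    ORDER BY h.OrderDate, d.SalesOrderID;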

What if I now wanted to know, for example, the cumulative sum for these order rows? This makes more sense on a transaction kind of table, but let's work with this table for now. So, when I look at this row the cumulative sum would be 2,024. Here I would like to include both of these rows, so it's 4,000-something, for these three rows it's 6,000-something, and so on. What you can do now is frame the partition. I say, okay, I'm still looking over this partition, but within the partition I want to say ROWS BETWEEN UNBOUNDED PRECEDING, so start from the beginning of the window, AND CURRENT ROW. The previous example was basically saying ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. And obviously this doesn't make much sense if I don't know the order. So, I should say PARTITION BY this, and I need to add an ORDER BY, let's do SalesOrderDetailID. Obviously, with ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, if you don't have an order, there is no meaning to this. There we go. So, now we get exactly what I was asking for: 2,000, 4,000, 6,000, 8,000, 10,000 and so on. And you can see once we get to the next order we reset again and start adding, and at the next order we reset again. What if I wanted to do this across the whole query? Then I could just remove the PARTITION BY clause and call it QueryFramedSum, and again we get the cumulative sum, but now it doesn't reset when we go past each order. So, this introduces a whole new way of working with aggregates in detail queries, and it's extremely powerful when you want to do interesting calculations over data. And we support all of the standard aggregates. In SQL Server 2012 we don't yet support user-defined aggregates with the frame clause. We have full support for row framing, which is based on physical rows that you specify with UNBOUNDED or X PRECEDING and X FOLLOWING; range framing is based on the values instead of the row counts.
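Putting the framed versions together:

    SELECT d.SalesOrderID, d.SalesOrderDetailID, d.UnitPrice,
           SUM(d.UnitPrice) OVER (PARTITION BY d.SalesOrderID
                                  ORDER BY d.SalesOrderDetailID
                                  ROWS BETWEEN UNBOUNDED PRECEDING
                                           AND CURRENT ROW)    AS PartitionFramedSum,
           SUM(d.UnitPrice) OVER (ORDER BY d.SalesOrderID, d.SalesOrderDetailID
                                  ROWS BETWEEN UNBOUNDED PRECEDING
                                           AND CURRENT ROW)    AS QueryFramedSum
    FROM Sales.SalesOrderDetail AS d
    ORDER BY d.SalesOrderID, d.SalesOrderDetailID;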

So, we do support both ROWS and RANGE, but for RANGE we only support the absolute boundaries. You can say UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING, and you can use CURRENT ROW, but for RANGE you cannot specify values, so you can't say, hey, X months back, for example.

And these are all of the options that we currently support in SQL Server 2012, so all of the rows functionality as well as range with only the absolute boundaries.

Okay, so let's take a closer look at improved error handling in SQL Server 2012.

So, in earlier versions of SQL Server we added support for basically try and catch blocks in SQL Server 2005. And even before that, the way you raised errors was using a command known as RAISERROR, which is actually spelled slightly differently than you might expect; I look at it as IntelliSense of the '80s. So, it's not RAISEERROR, it's RAISERROR, with one E dropped; we actually call it "raise roar". So, I say, hey, raise this error, provide this number, and off you go, we get the error message back. The problem we have with RAISERROR is that depending on the parameters you give it, it behaves differently. And especially, it behaves differently depending on how you're nested in your current scope.
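A sketch of the behavior being described; at severity 16, RAISERROR outside a TRY block sends the message and keeps going, while inside a TRY block it transfers control to CATCH:

    -- Outside a TRY block: the error message goes to the client and the batch continues
    RAISERROR (N'Something happened.', 16, 1);
    SELECT 1;   -- still runs

    -- Inside a TRY block: severity 11 and up moves you to the CATCH block
    BEGIN TRY
        RAISERROR (N'Something happened.', 16, 1);
        SELECT 1;   -- never reached
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS Num, ERROR_MESSAGE() AS Msg;  -- no simple re-raise before 2012
    END CATCH;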

So, let's say I do something like this, I just put SELECT 1 after this guy and execute. As you can see, we execute both of these. So, the RAISERROR basically sent a message to the client with the error message, but it continued executing. Okay, that's all fine; that's the functionality. But what happens if I put it in a try block? This could have been inside a stored procedure, so the person who wrote the stored procedure was fine with this behavior, and then someone else puts the stored procedure execution inside a try block, and now you can see that it changes. Clearly it didn't execute SELECT 1, because the result never came to me. Presumably the RAISERROR happened and it moved to the catch block, and that's why we're not getting any messages back. If I modify this interesting first integer, the severity, and say 10 instead, different things happen. So, RAISERROR, depending on this first integer, either sends a message or may move you to the catch block, depending on whether you are in a try block or not. The other problem we have is that if you want to re-raise the error from the catch block, we have no built-in functionality for this. You can use the ERROR_NUMBER, ERROR_MESSAGE and so on functions to grab the individual properties of the error, but you can't actually simply re-raise it. So, in 2012 what we introduced was the THROW statement. With THROW you specify which error number you want to throw. We have no correlation to sys.messages here anymore. If you want to use sys.messages that's fine, and sp_addmessage and so on is still supported, but if you want to grab that message and use it, you have to grab it from sys.messages first using FORMATMESSAGE. So, I say, okay, this is the error number, 50000, and the first integer that specified the severity level is gone; we always use severity level 16. But we still have the second integer, which is basically a state, one byte that you can send, and you can correlate that with, for example, an enumeration on the client side. And now I get this behavior, which is the same as before, but if I do this you can see it never executes SELECT 1, because THROW always aborts the current batch. And within the catch block I can say THROW with no parameters, and this will just re-raise what I caught. So, it greatly simplifies error handling in SQL Server.
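Here is a minimal sketch of THROW and the parameterless re-raise:

    BEGIN TRY
        THROW 50000, N'Something happened.', 1;  -- number, message, state; severity is always 16
        SELECT 1;                                -- never reached: THROW aborts the batch
    END TRY
    BEGIN CATCH
        -- log here if you like, then re-raise the original error unchanged
        THROW;
    END CATCH;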

So, let's have a look at a few scalar functions that we added to make life a little easier. These are common requests that we get from customers.

So, first, we have my personal favorite: TRY_CONVERT and TRY_CAST. As you may know, CONVERT and CAST are basically analogous; CAST is just the ANSI standard way of converting between types. The only difference between CAST and TRY_CAST is that TRY_CAST returns null if the conversion fails. So, I can now try something like this: I want to, for example, get the rows where AccountNumber is convertible to an integer. If I try to do this with CAST AS INT, I'll obviously get a conversion error. If I do TRY_CAST instead, I will get all of the rows where the conversion actually succeeded. And as you can see, we had no rows that actually succeeded.
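A sketch of the demo query:

    -- CAST raises a conversion error on the first bad value; TRY_CAST returns NULL instead
    SELECT AccountNumber
    FROM Sales.Customer
    WHERE TRY_CAST(AccountNumber AS INT) IS NOT NULL;  -- only rows that convert cleanly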

So, you can use it for finding data, you can use it for scrubbing data, you can have a CASE expression, for example, where you say, hey, if the TRY_CAST to integer works do this, if the TRY_CAST to decimal works do this, and so on. So, very powerful, very simple functionality. The FORMAT function basically takes the functionality we have in .NET, formatting base types into strings using either a picture string or a standard format identifier plus a culture. So, I can say, hey, FORMAT, select from Sales.SalesOrderHeader, and I want to format OrderDate in the long date format in American culture. And now we get the long date format in American culture. I can also specify, hey, I'd like it in, let's say, Swedish culture, and then we have this. You can also use the picture strings. So, you can say, hey, I want the date, and I actually want it in my own format: year first, then colon-colon, then month, then colon-colon, then day, and then I get it in my own format here. And this is exactly the same functionality that you have in the .NET Framework; we actually leverage the .NET Framework behind the scenes here, so you'll have the same experience. We added PARSE, which is basically the inverse of FORMAT, right? You take the string and say, hey, PARSE this string using this culture into whatever data type we want to get to, such as integer, date, or decimal. TRY_PARSE again is analogous to PARSE, but if the PARSE fails we return null. We also added support for IIF, which is basically similar to what you have in some of the Office products: if the Boolean expression returns true we return the second argument, and if it returns false we return the third argument. CHOOSE is basically like an index into an array. You specify first which argument you want to return, and then you specify the arguments. So, if I say index one, we will return the first argument. And CONCAT is just a concatenation of a list of arguments. It's not an aggregate, so it's not like you say SELECT CONCAT over a column, GROUP BY something, and get a concatenated list back. It's a static list of values. So, I can say CONCAT this with this with this, and it basically treats nulls as empty strings. It also handles arguments of different types for you, simply doing the conversions, since it knows the target is always a string.
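A sketch exercising each of these:

    SELECT TOP (3)
           FORMAT(OrderDate, 'D', 'en-US')   AS UsLongDate,
           FORMAT(OrderDate, 'D', 'sv-SE')   AS SwedishLongDate,
           FORMAT(OrderDate, 'yyyy::MM::dd') AS PictureFormat
    FROM Sales.SalesOrderHeader;

    SELECT PARSE(N'16 October 2012' AS DATE USING 'en-US')  AS Parsed,
           TRY_PARSE(N'not a date' AS DATE USING 'en-US')   AS ParsedOrNull,  -- NULL, no error
           IIF(2 > 1, N'yes', N'no')                        AS IifResult,     -- 'yes'
           CHOOSE(2, N'first', N'second', N'third')         AS ChooseResult,  -- 'second'
           CONCAT(N'a', NULL, 1, N'b')                      AS ConcatResult;  -- 'a1b'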

We also added support for a few date-related functions. One long-time ask is EOMONTH, which basically returns the end of the month for a particular date. So, I can ask, okay, what is EOMONTH for today, and it's apparently the last of February, the 29th, 2012 being a leap year. And I can also ask, hey, what is the end of the next month, and that's apparently the 31st of March. We put time into looking at an EOTIME function, but it's apparently hard to figure out when the end of time is actually going to be. We also have DATEFROMPARTS, TIMEFROMPARTS, and so on. These are basically simple constructor functions to help you easily construct a new date or time value based on integer inputs. So, I can say, okay, SELECT DATEFROMPARTS, and then I say I would like 2012, and I would like it to be October 16th, which happens to be my wife's birthday, and we get it back in the date data type. So, very practical; you don't have to concatenate strings anymore to create the date value that you want.
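For example:

    SELECT EOMONTH(SYSDATETIME())      AS EndOfThisMonth,
           EOMONTH(SYSDATETIME(), 1)   AS EndOfNextMonth,   -- optional month offset
           DATEFROMPARTS(2012, 10, 16) AS ConstructedDate;  -- year, month, day as integers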

Okay, next let's have a look at Robust Metadata Discovery in SQL Server 2012.

What is this really about? Well, it's basically about figuring out: if I were to execute a certain batch, what would it return to me? Let's first just test this functionality. So, I'll do sp_describe_first_result_set, and then I'll pass it a simple query, SELECT * FROM Sales.Customer. And then, since I have execution plans turned on, I'll turn those off, and now you can see that we get back a list of the columns that you would get back if you were to execute this batch. The interesting thing here, why you would need this: an example would be if you're creating some sort of report-designer kind of tool; you don't necessarily want to execute the query that the end user typed in, because it might be a long-running query. You might just want the metadata for the query. And that's when you use this stored procedure. It's available through ODBC, OLE DB, and SqlClient, which will return the metadata using those APIs, or you can call it directly.
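Calling it directly looks like this:

    -- Describe the shape of a batch without executing it
    EXEC sys.sp_describe_first_result_set
         @tsql = N'SELECT * FROM Sales.Customer';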

The interesting thing here is also that we handle multi-statement batches correctly. So, what if I do something like IF (1 = 2) SELECT * FROM Sales.Customer, ELSE SELECT * FROM Sales.SalesOrderHeader? In this case when we execute we obviously get either one, depending on the result of the expression, but these obviously have totally different metadata. The way we used to discover metadata in the past was with the SET FMTONLY option. So, I say SET FMTONLY ON, which basically puts the engine into a mode where it ignores all control flow and just enters all IF, ELSE, WHILE, et cetera, branches once. And in this case you can see that we get both of these back. The way most customers use it is to just grab the first result set and go with that metadata. But this is where the robust part comes in. That is obviously not robust, as on some executions you may get the second result set instead. The other problem is you can obviously trick this, right, because if you have something like IF (1 = 2), you know this is never true, but it's actually true in FMTONLY land, right? So, you could say something like, hey, if this happens, actually turn FMTONLY off and go do something. That's also part of why it's not a robust solution. So, now if I put this batch into this call, we will actually complain and say, hey, we found two possible first result set statements, SELECT * FROM Sales.Customer and SELECT * FROM Sales.SalesOrderHeader, and they're not actually compatible. So, now I find the problem, and if I want them to be compatible I could say something like, hey, here I return CustomerID, and in this guy I return SalesOrderID. These are both integers, and I could maybe add, I don't know what else I have there, as an example. Now we see that these are compatible, and the compatibility is fairly strict. It's not like in a UNION, for example, where we convert to a certain type and go by type precedence. Here instead the types have to be absolutely the same. The only time that we allow differences in type is if you have variations in length of variable types, so VARCHAR, NVARCHAR, VARBINARY; then we will just pick the larger length. And the other thing is if nullability differs, one is nullable and one is not nullable; you can see we return nullability here, and then we'll just say it's nullable. And as you can see here, if the name differs, we will just say we don't know the name. But if I say, okay, this one AS ID and this one AS ID, now they're compatible and we'll actually say, oh, the name is ID. So, this is very, very useful if you want to create something like a report designer, for example, where based on a query you want to automatically render a form. Now you can just pass us the query, and we'll give you the shape of the query back without actually executing it.
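The fixed-up, compatible version of the batch from the demo would look something like this:

    -- Both branches are analyzed; once their shapes match, we get one description back
    EXEC sys.sp_describe_first_result_set @tsql = N'
        IF (1 = 2)
            SELECT CustomerID AS ID FROM Sales.Customer
        ELSE
            SELECT SalesOrderID AS ID FROM Sales.SalesOrderHeader';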

So, as you saw in my demo, FMTONLY is clearly a broken API, and this is why we're also deprecating it starting from this release. So, if you're using FMTONLY, you should start working your way towards the new APIs. As always with deprecation, we will not remove FMTONLY in the next release; we remove features over a number of releases before we actually take them out of the product.

So, what you can do is use sp_describe_first_result_set. You can also use two DMFs, sys.dm_exec_describe_first_result_set and sys.dm_exec_describe_first_result_set_for_object, which you can query. The last one, the for-object one, is very interesting, because you could then select from, for example, sys.sql_modules and OUTER APPLY sys.dm_exec_describe_first_result_set_for_object, passing the stored procedure's ID, and now you'll get the result set for each of your stored procedures returned. So, this is great if you want to generate, for example, documentation.
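Roughly like this:

    -- One description row per column, for every module in the database
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
           OBJECT_NAME(m.object_id)        AS ModuleName,
           r.column_ordinal, r.name, r.system_type_name, r.is_nullable
    FROM sys.sql_modules AS m
    OUTER APPLY sys.dm_exec_describe_first_result_set_for_object(m.object_id, 0) AS r;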

We also have a very interesting procedure, or API, which is sp_describe_undeclared_parameters. We also refer to this as the Jedi mind trick. So, let's try that one out. That's the other part of the puzzle, right? Someone has now put a query into your tool, and obviously the tool wants the parameters that appear in the query to surface in the tool. So, let's say they write something like, hey, WHERE CustomerID = @Customer, or let's call it @X. Now the question is, first, are there any parameters in here, so find the parameters, and secondly, what is actually the type of each parameter? If I execute this, we'll return to you, hey, we found a parameter in here that's not declared, it's called @X, and we suggest that you use an integer as the type. What do we mean by suggest? Well, in this case it seems quite clear; we're comparing it directly to one column. What if we do CustomerID plus TerritoryID? Ah, we still think it's an integer. Okay, what if we divide it by TerritoryID? Still an integer. Okay, yeah, integer division returns integer, right? What if we multiply it by 2.0? Now there are two different types involved here. So, we'll say, hah, for this we suggest numeric(13,1). So, depending on your expression we will always suggest a type. And this is very practical. And this is why we also say suggest: it doesn't mean that this is the intended type of the person who wrote the query, they may have had some other intention, but it gives you a good starting point for which type to use. When we say, hey, we suggest integer, the end user can say, no, no, I actually meant decimal(38,0), for example.
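The call looks something like this; the exact expression is adapted from the demo:

    EXEC sys.sp_describe_undeclared_parameters
         @tsql = N'SELECT * FROM Sales.Customer
                   WHERE CustomerID = @X * 2.0';
    -- Returns one row for @X with a suggested type, numeric in the demo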

This is the guarantee that we provide, and you can tell it was clearly a developer in the SQL engine who wrote this.

What we're saying is basically: if we analyze a batch with sp_describe_first_result_set and we give you some metadata back, the guarantee is that when you actually execute this batch, as long as no schema in the database was changed, we will return a result set compatible with the metadata that we gave you, barring the nullability and the names that I mentioned, or we will give you an error back, or we will give you nothing back. And this is to cover the fact that errors can obviously happen during query execution, or there may be a THROW statement somewhere. It's also a common pattern to have something like IF expression RETURN, and we don't want to fail those queries. So, that's the guarantee: you'll get the result set we told you you'd get, nothing, or an error message.

The last thing I wanted to cover is improvements that we made to Dynamic SQL, and this is also an interesting one.

So, before, we had the EXECUTE statement, and the EXECUTE statement had one argument that you could provide in the WITH clause: you could say WITH RECOMPILE to force SQL Server not to use a plan that was compiled and put in the plan cache. What we now support, both for executing stored procedures and remote procedures as well as Dynamic SQL, is that you can specify the RESULT SETS clause. EXECUTE ... WITH RESULT SETS UNDEFINED is basically the old behavior, which means you don't have to tell me what the result set is; we'll just return whatever happens. But now you can also specify RESULT SETS NONE, which means if the stored procedure or Dynamic SQL tries to return something, we will fail the batch and say no, the contract was that no results were to be returned. And we can try this here. So, EXEC, and let's just provide a query here. Let's use this guy. This is the batch. And I say WITH RESULT SETS NONE. And now you can see we specified there should be no result sets, and the query tried to return something, so we failed the execution. If I just remove that, you see we get the result set we expected. You can also specify exactly which result set you want, either the shape inline, like you see at the bottom of the page here, or you can specify AS OBJECT, where the contract is that the result set has to look exactly like a particular table, view, or table-valued function, or AS TYPE, like a particular table type. You can also specify that a FOR XML result is being returned. And you can specify multiple result sets. This doesn't mean it's one of them; it means that this is the shape of the first result set, and this is the shape of the second one. So, let's try this, RESULT SETS. So, these are the RESULT SETS, and the first one I say should be called CustomerID, and it should be a BIGINT, and it should not allow NULL. And now you can see that we're returning it as CustomerID even though I renamed it to ID in the query.
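A sketch of that last bit:

    -- The contract: exactly one result set with this shape
    EXEC (N'SELECT CustomerID AS ID FROM Sales.Customer')
    WITH RESULT SETS
    (
        (CustomerID BIGINT NOT NULL)  -- renames and casts the column; a NULL would raise an error
    );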

So, here this is not the same as in metadata discovery. Here we're looking at this as a cast. So, you're saying we should try to cast it to BIGINT, we should rename the column to CustomerID, and we should not allow nulls, meaning if we find a null we will actually error out. If I say something like NVARCHAR(100) here, we'll do the same thing; now we're converting it to NVARCHAR and actually returning a string. If I then say there is a second result set, say I would like AccountNumber in a separate result set, and I try to execute this, we'll get an error message after the first result set, because the contract was that there would be two result sets, one with CustomerID and one with AccountNumber, and you only returned one. So, I add another query returning AccountNumber, and now it works out: we get the second result set back as well. If I add a third query, we'll go ahead and fail it, because the contract again was exactly two result sets with this particular shape.
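The two-result-set version from the demo, sketched out:

    -- Two result sets promised, two delivered; a third query in the batch would fail it
    EXEC (N'SELECT CustomerID FROM Sales.Customer;
            SELECT AccountNumber FROM Sales.Customer;')
    WITH RESULT SETS
    (
        (CustomerID    BIGINT        NOT NULL),
        (AccountNumber NVARCHAR(100) NOT NULL)
    );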

For more information about certification and training on SQL Server 2012, please visit the Microsoft Learning site. I hope this was interesting for you and that you got an insight into the additions we made to the programmability space in SQL Server 2012. Please download the product and try it out. I know you won't be disappointed. Thanks.
