Exploring AlwaysOn Availability Group Template in Azure

This is one of those posts that has been long overdue. I was really excited when Microsoft announced the release of an AlwaysOn AG template for the Azure Portal back in August 2014.

I decided to explore this AG template soon after the launch, and this post will explain how easy it is to deploy a 2-node AlwaysOn AG in Azure. Trust me, it’s just a matter of a few clicks.

Sit back and relax, we are now going to deploy an AG in Azure. Here are the steps:

After you log in to the Azure portal, you can access the Marketplace.


You will be surprised to see the number of options that are available in the Marketplace now.


Search for AlwaysOn (I really hope MSFT will correct this; it should rather be AlwaysOn AG) in the search box.



Choose SQL Server 2014 AlwaysOn and proceed by clicking Create.



Enter details for the required fields and also select the details needed for the SQL Server settings. For the purpose of this post, I will be choosing default settings for the domain controller, virtual network, storage, etc.


Note: Ignore the warning messages in the picture above. The warning for Resource Group is shown because I already have an RG with the same name.

That’s it; now click Create.

It takes around 40–45 minutes for the whole process to complete, and once it was done, I had the below set of virtual machines.



2 machines for the domain controllers, 1 for the file share witness and 2 for the AG nodes.

Here is a view of the Resource Group and its resources :


If you log in to one of the SQL nodes and open up SSMS, you can see that the AG is fully configured and it’s ready for use.


Here is the dashboard view and the quorum info :



Conclusion :

Templates are interesting, and this is an excellent way of deploying AGs at large scale. I’m pretty sure that MSFT will keep working to optimize this and will add more features to it. Looking forward to it.

Thanks for reading, and keep watching this space for more.

This one enhancement makes AlwaysOn Availability Groups better in SQL Server 2014!

I keep getting questions on what is improved with AlwaysOn Availability Groups (AG) in SQL Server 2014 other than the increased number of secondaries (8 now). In this blog post I would like to talk about one major enhancement around increased readable secondary availability in SQL Server 2014.

In SQL Server 2012 there is a good chance that your read workloads can be killed during network failures. Typically this happens in a multi-subnet environment where you have AG nodes in 2 or more data centers. If a network failure results in a communication failure between the primary and secondary replicas, then the secondaries will go into the resolving state. In the resolving state, the read workloads are basically killed. (Yes, yet another reason why you need to test your AG deployments.)

With SQL Server 2014, there is good news. Read workloads are not impacted during network failures, a primary replica outage, or cluster quorum issues.

Yes, it stays up all the time and keeps serving your read workloads. The replica will be in the resolving state; however, the databases are still available for your read connections/workloads.

Keep in mind: Read-only routing is not yet supported for this, and you cannot use the listener for your reporting workloads. Brent Ozar calls this out in his blog post here.

Let’s now walk through this enhancement:

I will be using a 2-node AG setup, and the data transfer mode (availability mode) is asynchronous.

If I query the DMV sys.dm_hadr_availability_replica_states I can see the replica states.

SELECT role_desc,connected_state_desc  FROM sys.dm_hadr_availability_replica_states

role_desc               connected_state_desc

Let’s double-check if the data transfer is set as asynchronous:

SELECT  is_primary_replica, synchronization_state_desc FROM sys.dm_hadr_database_replica_states

is_primary_replica       synchronization_state_desc
0                                  SYNCHRONIZING
1                                  SYNCHRONIZED

Everything looks good, and now let’s take down the primary replica.

Note: Don’t try this at home! (I mean production.)

Okay, as this is a test instance I really don’t care, so what I did was shut down the SQL Server service to take the primary down.

If we query the DMV again, we can see

SELECT role_desc,connected_state_desc  FROM sys.dm_hadr_availability_replica_states

role_desc                connected_state_desc

And note, there is no entry for the primary as I have taken it down.

Even though the secondary replica is in the resolving state, read connections just work.

SELECT COUNT (*) AS [TotalRowCount] FROM [dbo].[InsertCount]


The secondary database is in the NOT SYNCHRONIZING, NOT_HEALTHY state; however, it’s ONLINE.

SELECT synchronization_state_desc,synchronization_health_desc,database_state_desc FROM sys.dm_hadr_database_replica_states

synchronization_state_desc         synchronization_health_desc       database_state_desc
NOT SYNCHRONIZING               NOT_HEALTHY                           ONLINE

Here is a view from Management Studio :

AG Resolving


Yes, this is a very good enhancement that will make sure your readable secondaries are highly available, and I’m pretty sure this is one of those enhancements that will push you to deploy AGs on SQL Server 2014 rather than opting for SQL Server 2012.

Thanks for reading and keep watching this space for more !

SELECT INTO Improvements in SQL Server 2014 !

SQL Server 2014 is a great release with high performance features like In Memory OLTP, Updatable Column Store Indexes etc.

These features still remain the top favorites and the most compelling reasons for organizations to do an upgrade. However, SQL Server 2014 also includes small but very relevant and useful enhancements, and this post talks about one of them:

SELECT INTO now runs Parallel !


When I first heard about this feature, my first impression was: What… was it not running in parallel before?

No, SELECT INTO was single-threaded all these years, and with SQL Server 2014 it will now use multiple threads.

Let’s do a quick walk-through of this enhancement.

In this walk-through we will use a table named MasterData which has around 12 million rows.

--Select Count
 SELECT COUNT(*) AS Row_Count FROM [dbo].[MasterData]


Data from this table will be copied over to a different table named MasterData_Copy using SELECT INTO operation.

--Turn on Actual Execution Plan
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]

This operation was completed in 6 seconds (desktop-class machine), and if we examine the execution plan we can observe parallelism.


And the distribution of rows among multiple threads:


Now let’s make this a little more interesting. We will change the database compatibility level to SQL Server 2012 (110) and see what impact that has on the SELECT INTO operation.

--Change Database Compatibility
--Turn on Actual Execution Plan
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]
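The compatibility-level change referred to in the comment above can be sketched like this (the database name is a placeholder, as the walk-through doesn’t name one):

```sql
-- Hypothetical database name; 110 is the SQL Server 2012 compatibility level.
ALTER DATABASE [TestDB] SET COMPATIBILITY_LEVEL = 110;
GO
-- Verify the change took effect.
SELECT name, compatibility_level FROM sys.databases WHERE name = 'TestDB';
```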

This operation was completed in 11 seconds and it generated a serial plan.


And here is the row distribution.


<Update 1/13/2015 2 PM> 

I received a comment from Toni which says “It might be a good idea to set MAXDOP, most likely having 16 threads isn’t the best option”.

I absolutely agree with this: a well-tested MAXDOP setting will definitely bring better results, and this is something you have to keep in mind when queries go parallel.
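As a sketch, the degree of parallelism can be capped per query with a hint (the value 4 is just an illustration; test what suits your hardware):

```sql
-- Cap this SELECT INTO at 4 threads instead of letting it use all schedulers.
SELECT *
INTO [dbo].[MasterData_Copy]
FROM [dbo].[MasterData]
OPTION (MAXDOP 4);
```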



If you are using SELECT INTO a lot in your environment, then I would highly recommend you compare and see what improvements you can gain out of SQL Server 2014.


Let’s talk about something simple today. You might already know this, but if you haven’t seen or noted this before, then this post will definitely help you.

I was working on a requirement wherein I had to simulate a workload that would do massive inserts into a table which had a computed column. I tested my T-SQL script to create the workload via SSMS, and it worked like a gem.

Yay, all good and that’s the end of this post!  No, not really.

Things got a little interesting when I used SQLCMD as a test case to call the same T-SQL. I got an error which said:

Msg 1934, Level 16, State 1, Server <Name>, Line 4
INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.
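For context, the error is easy to reproduce. Here is a minimal sketch with a hypothetical table (the original workload table isn’t shown in this post):

```sql
-- A computed column with an index on it requires QUOTED_IDENTIFIER ON
-- in any session that inserts into the table.
CREATE TABLE dbo.Orders
(
    Qty   INT,
    Price MONEY,
    Total AS (Qty * Price) PERSISTED
);
CREATE INDEX IX_Orders_Total ON dbo.Orders (Total);
GO
SET QUOTED_IDENTIFIER OFF;
INSERT INTO dbo.Orders (Qty, Price) VALUES (2, 10.00); -- fails with Msg 1934
```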



That’s interesting because my commands worked just great when I used SSMS earlier! What’s wrong with SQLCMD now?

I quickly realized that this has something to do with query execution options, and cross-checked the SSMS options.



So SET QUOTED_IDENTIFIER is ON for SSMS connections; that’s the reason why my commands were running fine earlier.

I quickly checked the Books Online article for sqlcmd and found this. [Well, this is a very good article written for sqlcmd by the technical writers at Microsoft.]


So, adding -I to sqlcmd as a query execution option fixed my problem.
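As a sketch (server, database, and script names are hypothetical), the call looks like this:

```
rem -I turns SET QUOTED_IDENTIFIER ON for the session, matching the SSMS default.
sqlcmd -S MyServer -d MyDatabase -I -i InsertWorkload.sql
```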



Little tips and tricks like these always surprise me when I’m dealing with SQL Server, and they help me learn something or other daily.

Thanks for reading and keep watching this space for more. Yep, lot of SQL2014 goodness coming in next few days.

Data Collector in SQLServer 2014 – What’s New !

Today I noticed a couple of changes (neat changes!) for Data Collector in SQL Server 2014. This post is a detailed walk-through of setting up Data Collector in SQL 2014. In the walk-through, if there is a new feature or change which is specific to SQL 2014, then I will highlight it.

As soon as you right-click Data Collection and choose Tasks, you can see 3 options (this is a change in SQL 2014) –

  • Configure Management Data Warehouse
  • Configure Data Collection
  • Cleanup Data Collectors (This is a change in SQL2014)

Let’s talk about each option in detail now.

Option 1

The first option will create and configure a Management Data Warehouse.


We will proceed by clicking Next, and it will take you to the screen which helps you create/choose the MDW database. I have already created a blank database, so I will select that db in this walk-through and proceed.


Next up is granting permissions to users. The reader role is for reports, the writer role is for uploading data, and admin has all of these rights.


Final step is to finish the operation.



Option 2

The Configure Data Collection option helps you start data collection and store the collected data in the MDW.


This step lets you choose the MDW db, a directory to store the data before it’s uploaded to the MDW, and the collection sets. There are 2 collection sets –

  • System Data Collection Sets
  • Transaction Performance Collection Sets (This is new in SQL2014)

The above selections will create 2 new data collection sets called Stored Procedure Usage Analysis and Table Usage Analysis, along with the 3 regular data collection sets which were available in the previous versions.


Complete the wizard by clicking Finish.



Now that we have configured an MDW and started data collection, let’s see what happens in the cache folder C:\MDW_Temp.

There you go, a lot of CACHE files out there, and they are getting loaded/cleared very fast.
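You can also confirm from T-SQL which collection sets are running; a quick sketch using the data collector catalog views in msdb:

```sql
-- List all collection sets and whether each one is currently running.
SELECT name, is_running
FROM msdb.dbo.syscollector_collection_sets
ORDER BY name;
```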


 Option 3 – (This is new in SQL2014)

Cleanup Data Collectors is an option which is enabled after you enable data collection, and this wizard will help you clean up the collection sets.




Now that the first phase of configuring Data Collector is over, we will drill down a little more.

If you expand the Data Collection tree, you can see 2 new data collection sets (this is new in SQL 2014) –

  • Stored Procedure Usage Analysis
  • Table Usage Analysis



These collection sets are integrated with Data Collector for a purpose: the data they collect will provide you with valuable insights on tables and stored procedures which are good candidates to migrate to In-Memory OLTP (Hekaton).

I would recommend you read a great blog post by Benjamin Nevarez (B/T) on this very topic.

The regular data collection sets which were there in the previous versions are also available, under System Data Collection Sets.



Let’s have a quick look at the reports which Data Collector offers. These are pretty cool out-of-the-box basic reports, and they will help you understand some level of trending.

Query Stats History Report –

This report will give you details of the top 10 T-SQL queries, and you have the option to rank queries on the basis of CPU, Duration, Total I/O, Physical Reads, or Logical Writes. There is also an option to do playback.




You also have an option to drill down to particular queries to get more details.



Server Activity History Report –

This is one of my favorite reports. It tells you about CPU, memory, disk, and network usage trends. The report also gives valuable insights about wait stats and server activity. Again, you have the option to drill down into some of the graphs.


Disk Usage Summary Report –

This is a great source for understanding growth trends.

Conclusion

Data Collector is a great feature, and with In-Memory OLTP being the buzzword, the data it collects can provide you with valuable insights.

Thanks for reading and keep watching this space for more! (Next up are some cool things about Windows Azure (oops!) Microsoft Azure!)

SQL Server 2014 Install – Step by Step

The hot cake for the day is the SQL Server 2014 RTM bits. It’s generally available starting today, and I downloaded a fresh copy to start testing it.

I will be writing a separate post on the features I love about SQL Server 2014, but here is the step-by-step install process for a standalone install.

There are no changes to the GUI-based install process when you compare with SQL Server 2012, but one thing I’m pretty sure of is that the installer is sleeker and faster.

Step 1 – The landing page looks pretty similar to the earlier versions, and you can choose the Installation tab to get things started.

SQL2014 RTM Install1

Step 2 – Choose the stand-alone install option, and it will take you to the product key page.

SQL2014 RTM Install2

Step 3 – This screen is where you need to accept the license terms. There is also an option to opt in to the Customer Experience Improvement Program and Error Reporting.

SQL2014 RTM Install3

Steps 4–6 – These are the steps where Global Rules and Product Updates are checked, and setup files are extracted and installed. If the checks are good, then the install proceeds to the next step automatically.

SQL2014 RTM Install4

Step 7 – Install rules are checked for this step, and I have highlighted a check which was interesting to me.

SQL2014 RTM Install5

Step 8 – This step provides you with feature installation options. In this install I will discuss the first option, i.e., SQL Server Feature Installation.

SQL2014 RTM Install6

Step 9 – This step is where you select the features. I’m choosing only the ones which I need for my testing.

SQL2014 RTM Install7

Step 10 – This step lets you decide if you want to go with Default Instance or a Named Instance.

SQL2014 RTM Install8

Step 11 – This step lets you select the account for the services. I’m choosing all defaults as that’s good for my test lab.

SQL2014 RTM Install9

Step 12 – This step lets you choose the Authentication mode, Data directories, and File Stream options.

SQL2014 RTM Install10 SQL2014 RTM Install11 SQL2014 RTM Install12

Step 13 – Feature Configuration Rules step. As mentioned earlier if the checks are fine, setup automatically proceeds to the next step.

SQL2014 RTM Install13

Step 14 – This is the final step of the installation, and you can pick the ConfigurationFile.ini from the path

C:\Program Files\Microsoft SQL Server\120\Setup Bootstrap\Log\Date_Time\ConfigurationFile.ini

SQL2014 RTM Install14
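As a side note, that ConfigurationFile.ini can drive an unattended install on another box. A sketch (the .ini path here is a placeholder):

```
rem Run from the SQL Server media folder, pointing at the saved configuration file.
Setup.exe /ConfigurationFile="C:\Temp\ConfigurationFile.ini" /IAcceptSQLServerLicenseTerms
```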


If everything goes smoothly, your install will succeed.

SQL2014 RTM Install15

I’m really excited about some really good features in SQL Server 2014, and I will be writing about them over the next couple of days.

Thanks for reading and keep watching this space for more !


Memory Optimization Advisor – SQL Server 2014 CTP2

SQL Server 2014 CTP2 comes with an inbuilt tool called Memory Optimization Advisor which will help you migrate your normal (disk-based) tables to memory-optimized tables.

Where can I find this tool?

All you need to do is right click the table you want to migrate, and choose the option Memory Optimization Advisor.


The tool launches with a detailed description of what it’s capable of.


I decided to play around with this tool, and here are my observations –

Note – This is still CTP, so things can change during RTM/GA phase.

I started off with a normal table named Employee which has 3 columns and 3 rows of data in it. Nothing big, pretty simple.

Launched the advisor, and the initial checks were all green. [Be sure to carefully analyze the checks, they are all interesting.]


The wizard also has the ability to export a report (who doesn’t like a report these days!).

I clicked Next to proceed, and the wizard gave me some information about the limitations of memory-optimized objects, and a link which explains the limitations in detail.


I liked these warnings because they tell me well in advance about the limitations, so that I will be more careful about what I’m up to. (Everything has a cost associated with it!)

Next up is some interesting stuff. The wizard asks me to select the options for memory-optimized objects. I have the option to specify the memory-optimized filegroup, its name, and the file path.

I also have the option to rename the original table, copy data from the original table to the memory-optimized table, and a checkbox to indicate if the table needs to be moved with no data durability (the default being both schema and data durability).

I decided to go with all defaults as this was a test case.


One of the other cool options which the above wizard window gave is this value –


I presume this value is the estimated memory size that will be needed when the table is moved as a memory-optimized object. I might not be right at this point, but I will update this post in case this is not true.

The next screen in the wizard talks about primary key and index creation. I decided to make column ID the primary key, with a NONCLUSTERED HASH index and a bucket count of 1024.
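Those selections translate into DDL along these lines; this is only a sketch, where the column names other than ID and the filegroup details are assumptions, not the wizard’s actual output:

```sql
-- Assumed database name and table shape; the advisor generates the real script.
ALTER DATABASE TestDB
    ADD FILEGROUP TestDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TestDB
    ADD FILE (NAME = 'TestDB_mod', FILENAME = 'C:\Data\TestDB_mod')
    TO FILEGROUP TestDB_mod;
GO
CREATE TABLE dbo.Employee
(
    ID   INT NOT NULL
         PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Name NVARCHAR(100),
    Dept NVARCHAR(100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```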


The final screen provided me a summary of my selections.


Yet another cool feature in the wizard is that it allowed me to script everything before I finalized my selections.

I decided to hit Migrate, and wanted to see how it goes.

Voila, all clean and green!


The wizard was smart enough to rename the old table and create a memory-optimized one for me.



I had scripted out everything before hitting Migrate, and that file looked like this –



This is a very neat, good-to-have tool, and those warnings and pre-checks will definitely help users iron out issues well before they are IN MEMORY!

Thanks for reading and keep watching this space for more !