SQL Server 2016 CTP 3 Setup – UI Improvements


There are some improvements in the setup UI for SQL Server 2016 CTP3, and it's great to see Microsoft making these changes as it receives feedback from customers and early adopters.

My earlier post on setup UI changes for CTP 2 is available here.

The first change is around granting the SQL Engine service account the privilege needed to enable Instant File Initialization (IFI) during setup itself.

This was a surprise when I saw it for the first time:

New Setup

You can notice the check box for the Grant Perform Volume Maintenance Task privilege.

There is an interesting post by Nic Cain (B/T) on this topic, and you can read it here.
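If you want to verify that the grant actually took effect once the instance is up, newer builds expose this through a DMV. A minimal check, assuming a build that includes the instant_file_initialization_enabled column (it was added in later servicing releases):

--Check whether the engine service account can use IFI
 SELECT servicename, instant_file_initialization_enabled
 FROM sys.dm_server_services
 WHERE servicename LIKE 'SQL Server (%';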

The second change is around tempdb configuration, and here are the new options:

New Setup2

The wizard now checks the number of cores on the server and comes with a pre-populated value for the number of tempdb data files.

It also provides options to define the initial size and the auto growth size.

This might not be something power users care about (by power users, I mean people who automate a lot of SQL Server deployments and are already doing this with custom scripts), however it is a real value add for people who do manual SQL Server installations.

Going forward we also don't need to turn on trace flags 1117 and 1118, as that behavior is now the default.
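Once the instance is up, you can sanity-check what setup created for you. A quick look at the tempdb files:

--Review tempdb file count, size, and growth settings
 SELECT name, size * 8 / 1024 AS size_mb, growth, is_percent_growth
 FROM tempdb.sys.database_files;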

Conclusion:

I'm really glad that Microsoft is giving users options to follow some of the long-standing best practices during the install process itself, and this is a welcome change.

Thanks for reading and keep watching this space for more!

Azure Virtual Machine Blog Series – Changing the size of a VM


I'm starting a new blog series on Microsoft Azure Virtual Machines, and my goal is to write blog posts exclusively on Azure VM/SQL Server related topics. Here is the first post –

“Changing the size of a VM”

When creating an Azure VM to host your SQL Server, you have an option to select the pricing tier. This pricing tier determines your VM configuration in terms of CPU cores, memory, number of disks, IOPS, etc.

Azure provides plenty of pricing tier options, and you have the power to create a VM with the smallest configuration or pick a monster like the one below –

AzureVM1

Gone are the days when you had to think twice about moving your SQL Server workload to the cloud because of a lack of compute power. 32 cores, 448 GB of memory, and 64 disks with 80,000 IOPS! It can't get much better than this!

What if I created a VM with a standard pricing tier like DS1 Standard, and after running it for a while realized that it's not meeting my needs and my SQL instance starts freaking out?

AzureVM2

Hmm, bad planning, isn't it? Yeah, it's definitely bad planning, however Azure covers you pretty well.

If I check the VM configuration, I can see its current size is 1 core and 3.5 GB of memory.

AzureVM3

If I go to the portal, choose the Settings option for the VM, and then select Size, I have multiple selection options for the pricing tier –

AzureVM4

If I scroll down through the selection options, I can see that the G series is not available. This means you cannot scale up from a D series to a G series directly. Again, not planning ahead can get you into trouble.

AzureVM5

 

What if I select DS3 Standard? Will it change my VM configuration on the fly? Let's check it out –

On the portal I’m selecting the DS3 option which is 4 cores and 14 GB memory.

Note – This will restart your VM behind the scenes. If you are following this article and trying to change the pricing tier of a critical production server, then stop and plan to do this activity during a planned downtime window.

AzureVM7

Within a few seconds, I got an alert which says –

AzureVM6

 

Neat! Let's check the configuration –

AzureVM8

Indeed the new configuration is applied after a quick reboot.
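You can also verify the new size from inside SQL Server itself with a quick query against sys.dm_os_sys_info:

--Confirm the core count and memory SQL Server now sees
 SELECT cpu_count, physical_memory_kb / 1024 AS physical_memory_mb
 FROM sys.dm_os_sys_info;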

Conclusion – 

The flexibility to change the VM size easily can come in handy sometimes, however carefully analyzing the workload's compute demands and choosing a correctly sized VM is always the recommended approach.

Thanks for reading and keep watching this space for more Azure VM related posts.

SQL Server Data Tools is now a separate installer – Things to know!


Recently I had to develop some SSRS reports, and my desktop machine had SQL Server 2014 installed on it. I had previously used SQL Server Data Tools, so I tried to search for it under Programs. To my surprise, it was missing.

I thought I might have missed adding it while installing SQL 2014, and re-ran the setup to add it again. There came my next surprise –

There is no option to add SQL Server Data Tools!

SSDT1

After some research I got to know that Microsoft moved away from bundling data tools with SQL 2014 (for obvious reasons!), and this link provides the required bits.

Picking the right installer was again a little confusing after browsing the above URL, but eventually I was able to download the right product after carefully checking the description.

SSDT2

Installing the tool was pretty straightforward and here are the instructions –

  1. Run setup.exe (after extracting the installer).
  2. Click on New SQL Server stand-alone installation. SSDT3
  3. Perform a new installation. SSDT4
  4. Choose SQL Server Data Tools – BI for Visual Studio 2013. SSDT5
  5. Install.
  6. Eventually you will have it in the Programs list. SSDT6

Conclusion –

Microsoft's decision to unbundle SSDT from the SQL 2014 installer is acceptable, as this gives them the opportunity to update the tool at a faster pace.

Thanks for reading, and keep watching this space for more!

 

Why I won’t be running SQL Server Standard Edition on Business Critical Servers!


I keep getting questions about what core features are missing in the Standard Edition of SQL Server, and why there is a need to install Enterprise Edition for business critical systems. Here are my top features which are missing in Standard Edition –

Scale limits

Memory – Standard Edition supports only 128 GB of memory.

Memory is relatively cheap now, and I can't think of any reason why someone can't have more memory on business critical servers.

CPU – Limited to the lesser of 4 sockets or 16 cores.

High Availability

AlwaysOn Failover Cluster Instances – Possible, but only 2 nodes are supported.

AlwaysOn Availability Groups*.

*AGs are the future of the HA/DR strategy within SQL Server, and because of great demand Microsoft will be allowing AGs in the Standard Edition of SQL 2016, however with support for only 2 nodes. Just 2 nodes is not sufficient for a solid HA/DR architecture.

Online page and file restore.

Page restores can come in handy if you have to deal with corruption in a very large database where only a few pages are affected.

Online indexing.

You don't really want to lock down the table while doing index maintenance, right? Online index operations are key to maintaining the uptime of the database.

Online schema change.

Hot Add Memory and CPU.

Database Snapshots.

Scalability and performance

Table and index partitioning.

Database sizes are increasing like never before, and 3-5 TB databases are very common these days. Maintaining these databases without a partitioning strategy can be a very bad idea.

Data compression.

One of the most ignored features in SQL Server. Compression can be really helpful to save space without much overhead.

Resource Governor.

IO Resource Governance.

In-Memory OLTP.

In-Memory OLTP might have a lot of restrictions today, however things are going to change soon. I'm pretty sure that there will be multiple use cases once this feature matures.

Memory optimized columnstore indexes.

Columnstore indexes dramatically improve DW workload performance, and this feature will get more and more enhancements in the future.

Security

Transparent database encryption.
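If you are assessing whether an existing database could live on Standard Edition, there is a handy DMV which lists the Enterprise-only features a database is already using. Run this in the database you are evaluating:

--Any rows returned are edition-specific features in use,
--which would block a move to Standard Edition
 SELECT feature_name, feature_id
 FROM sys.dm_db_persisted_sku_features;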

Conclusion – 

Business critical systems need to be treated as critical, as there is a potential risk of losing revenue if these systems are not highly available and uptime requirements are not met. Performance is also a key factor, and SQL Server Standard Edition is not the right edition for these requirements.

Installing SQL Server 2016 CTP2 – UI Changes


SQL Server 2016 CTP2 public build was released by Microsoft yesterday, and if you haven't downloaded it yet, then drop everything (no, not a production DB!) and go get it from here.

This post is based on the UI changes/additions which I observed while installing this specific build. It's going to be a very short post, as there are no major changes.

The first addition or change was around the Feature Selection page –

2016CTP2

 

You can notice a new instance feature called PolyBase Query Service for External Data. 

This feature enables truly integrated querying across Hadoop data and SQL Server data using standard T-SQL statements.

The second change or addition was around the Database Engine Configuration page.

2016CTP22

This one is beautiful; I was pretty excited when I saw it for the first time. As the description says, the default is 8 or the number of cores, whichever is lower.

My test machine had 2 cores and it configured 2 files for me.

2016CTP23

However, the growth is still 10% for these files! You may want to change that.
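A percentage growth on tempdb files is rarely what you want. Here is a sketch of switching to a fixed growth increment, assuming the default logical file names used by setup (tempdev, temp2 – check sys.master_files if yours differ):

--Switch tempdb data files from 10% growth to a fixed increment
 ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 64MB);
 ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, FILEGROWTH = 64MB);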

2016CTP24

Conclusion –

Those are the quick UI changes which I noticed during the install process.

Thanks for reading and keep watching this space for more! (Hint: lots of good SQL Server 2016 stuff coming your way.)

Full Domain Trust and Natively Compiled Stored Procedures – In Memory OLTP


Recently one of my developers brought a specific problem to my attention. He was trying to create a natively compiled stored procedure (with EXECUTE AS OWNER), and was getting an error –

ERROR:

Could not obtain information about Windows NT group/user ‘Domain\UserName’, error code 0x5.

My first reaction was to check the builds and ensure that the user was running the latest CU (not yet SP1!). The user was indeed running the latest SQL 2014 build.

I decided to repro this issue at my end, and I was able to create/compile the procedure without any errors.

My next troubleshooting direction was to check user permissions, and understand more about the environment where the user is running into this error.

The user in this particular scenario was using his domain credentials to create the SPROC. The account was something like HQ\Developer, where HQ is the domain and Developer is the user name.

SQL Engine Services was running under the account LAB\SQL, where LAB is the domain and SQL is the SQL service account.
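For context, the procedure in question looked roughly like this. This is a minimal sketch, assuming a hypothetical memory-optimized table dbo.OrdersInMemory; the Msg 15404 error fires at CREATE time, when SQL Server tries to impersonate the Windows owner:

 CREATE PROCEDURE dbo.usp_GetOrder @OrderId INT
 WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
 AS
 BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
 --dbo.OrdersInMemory is a hypothetical memory-optimized table
 SELECT OrderId, Quantity FROM dbo.OrdersInMemory WHERE OrderId = @OrderId
 END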

I read more about natively compiled stored procedures, and focused especially on this MSDN article, Creating Natively Compiled Stored Procedures.

This portion of the article was my focus area (underlining the main point) –

“Regarding EXECUTE AS and Windows logins, an error can occur because of the impersonation done through EXECUTE AS. If a user account uses Windows Authentication, there must be full trust between the service account used for the SQL Server instance and the domain of the Windows login. If there is not full trust, the following error message is returned when creating a natively compiled stored procedure: Msg 15404, Could not obtain information about Windows NT group/user ‘username’, error code 0x5.”

In my case there was a one-way trust between HQ and LAB; LAB didn't trust HQ. Given that, the error message the user got was expected.

There are multiple workarounds to this problem, and they are listed in the article too –

  • Use an account from the same domain as the Windows user for the SQL Server service.
  • If SQL Server is using a machine account such as Network Service or Local System, the machine must be trusted by the domain containing the Windows user.
  • Use SQL Server Authentication.

I decided to leverage a SQL authentication account, and the user was able to create the procedure with that specific account.

Conclusion – 

In-Memory OLTP is a very exciting feature, and there are a lot of dependencies which you should be aware of before deciding to deploy it in production.

Thanks for reading and keep watching this space for more !

SQL Server 2016 Announced, and it stretches to the Cloud!


Last week during Ignite, Microsoft CEO Satya Nadella announced the next version of SQL Server (until then it was referred to as SQL vNext), and it's SQL Server 2016.

Satya started the Ignite keynote, and he looked really pumped when he announced SQL 2016.

“SQL 2016 perhaps is the biggest breakthrough you have ever seen in database technology”

2016_1

 

He was focusing on the Stretch Database capabilities, and he called it an infinite database where you have the capability to pick a single table and stretch it to Azure without any code changes.

The keyword here is “without any code changes”.

SQL Server 2016 is packed with some really exciting features, and below is my top favorite list (yes, ranked from 1 to 3) –

  1. Operational Analytics – In Microsoft SQL Server 2016, users can run analytic queries on in-memory and disk-based OLTP tables with minimal impact on business critical OLTP workloads, requiring no application changes. The concept is based on combining In-Memory OLTP with the in-memory columnstore. This indeed is game changing. The idea of running analytic queries on OLTP tables might be a strict ‘No’ today, however as SQL Server evolves we can expect the thought process to change.
  2. Always Encrypted –  

We have had Transparent Data Encryption (TDE) all these years, and it's been doing a great job. Then why is there a need for something like Always Encrypted?

TDE encrypts data at rest. As soon as the data is in motion, it's vulnerable. A classic example is a man-in-the-middle attack.

Always Encrypted protects data at rest and in motion. This feature lets applications encrypt and decrypt critical data automatically, and it happens seamlessly in ADO.NET.

The master key resides with the application, and no application changes are required.
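For a sense of what this looks like, here is a sketch of a column definition using the syntax the feature eventually exposed (CEK1 is a hypothetical column encryption key, created beforehand from a client-held master key):

--Sketch: an Always Encrypted column. Deterministic encryption
--requires a BIN2 collation for character data.
 CREATE TABLE dbo.Patients
 (
 PatientId INT PRIMARY KEY,
 SSN CHAR(11) COLLATE Latin1_General_BIN2
 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
 ENCRYPTION_TYPE = DETERMINISTIC,
 ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
 )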

3. Stretch Database –

We all have hot and cold data in our databases, and in the majority of cases this data lives in the same database and same table, without any specific partitioning.

Years-old data which might come in useful only for audit purposes will be utilizing costly storage space and adding extra maintenance overhead.

SQL Server 2016 will bring in a new concept called Stretch Database, wherein you have the ability to pick a specific table and stretch it to Azure.

If an application queries for the historical data, it will be retrieved from Azure.

Storage in Azure is cheap, and the price will keep going down as long as the “cloud war” between Microsoft, Amazon and Google continues.

With concepts like Always Encrypted, the data which resides in Azure or is in motion to Azure is secure.
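To make the idea concrete, here is a rough sketch of enabling stretch using the syntax later builds shipped with (dbo.OrderHistory is hypothetical, and the database-level setup of credentials and the Azure target is omitted):

--Instance level: allow stretch
 EXEC sp_configure 'remote data archive', 1;
 RECONFIGURE;
--Table level: start migrating a hypothetical history table to Azure
 ALTER TABLE dbo.OrderHistory
 SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));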

Apart from these 3 features, I'm really looking forward to the Query Store and the round-robin load balancing capabilities for secondaries in AlwaysOn Availability Groups.

There will be plenty of In-Memory OLTP enhancements, and it will cover a larger T-SQL surface area. More on this will follow shortly.

Conclusion – 

Yes, SQL Server 2016 is a good release with a lot of nice features and enhancements.

I'm pretty happy with the vision of the Data Platform Group within Microsoft, and I'm really looking forward to writing more about these features in the upcoming months.

Thanks for reading and keep watching this space for more !

Exploring AlwaysOn Availability Group Template in Azure


This is one of those posts which has been due for a long time. I was really excited when Microsoft announced the release of an AlwaysOn AG template for the Azure portal back in August 2014.

I had decided to explore this AG template soon after the launch, and this post will explain how easy it is to deploy a 2-node AlwaysOn AG in Azure. Trust me, it's just a matter of a few clicks.

Sit back and relax, we are now going to deploy an AG in Azure. Here are the steps:

After you log in to the Azure portal, you can access the Marketplace –

AGTemplate1

You will be surprised to see the number of options which are available in the Marketplace now.

AGTemplate2

Search for AlwaysOn (I really hope MSFT will correct this; it should rather be AlwaysOn AG) in the search box.

AGTemplate3

 

Choose SQL Server 2014 AlwaysOn and proceed by clicking Create.

AGTemplate4

 

Enter details for the required fields, and also select the details needed for the SQL Server settings. For the purpose of this post, I will be choosing default settings for the domain controller, virtual network, storage, etc.

AG6

Note: Ignore the warning messages in the picture above. The warning for Resource Group is shown because I already have an RG with the same name.

That's it, and you click Create.

It takes around 40-45 minutes for the whole process to complete, and once it completed, I had the below set of virtual machines.

AG5

 

2 machines for the domain controllers, 1 for the file share witness, and 2 for the AG nodes.

Here is a view of the Resource Group and its resources :

AG4

If you log in to one of the SQL nodes and open up SSMS, you can see that the AG is fully configured and ready for use.

AG1

Here is the dashboard view and the quorum info :

AG2

AGQ
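If you prefer T-SQL over the dashboard, a quick query against the AG catalog views and DMVs confirms the same picture:

--Replicas, roles, and connection state for the template-created AG
 SELECT ag.name AS ag_name, ar.replica_server_name, ars.role_desc, ars.connected_state_desc
 FROM sys.availability_groups ag
 JOIN sys.availability_replicas ar ON ag.group_id = ar.group_id
 JOIN sys.dm_hadr_availability_replica_states ars ON ar.replica_id = ars.replica_id;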

Conclusion:

Templates are interesting, and this is an excellent way of deploying AGs at large scale. I'm pretty sure that MSFT will keep optimizing this and will add more and more features to it. Looking forward to it.

Thanks for reading, and keep watching this space for more.

This one enhancement makes AlwaysOn Availability Group better in SQL Server 2014 !


I keep getting questions on what has improved, other than the increased number of secondaries (8 now), with AlwaysOn Availability Groups (AG) in SQL 2014. In this blog post I would like to talk about one major enhancement: the increased availability of readable secondaries in SQL Server 2014.

In SQL Server 2012 there is a good chance that your read workloads can be killed during network failures. Typically this happens in a multi-subnet environment where you have AG nodes in 2 or more data centers. If a network failure results in a communication failure between the primary and secondary replicas, the secondaries go into the resolving state. In the resolving state, read workloads are basically killed. (Yes, yet another reason why you need to test your AG deployments.)

With SQL Server 2014 there is good news: read workloads are not impacted during network failures, primary replica outages, or cluster quorum issues.

Yes, the secondary stays up all the time and will keep serving your read workloads. The replica will be in the resolving state, however the databases are still available for your read connections/workloads.

Keep in mind: read-only routing is not yet supported for this, and you cannot use the listener for your reporting workloads. Brent Ozar calls this out in his blog post here.

Let's now walk through this enhancement:

I will be using a 2-node AG setup, and the data transfer mode (availability mode) is asynchronous.

If I query the DMV sys.dm_hadr_availability_replica_states, I can see the replica states.

SELECT role_desc, connected_state_desc FROM sys.dm_hadr_availability_replica_states

role_desc               connected_state_desc
PRIMARY              CONNECTED
SECONDARY        CONNECTED

Let's double-check that the data transfer is set to asynchronous:

SELECT is_primary_replica, synchronization_state_desc FROM sys.dm_hadr_database_replica_states

is_primary_replica       synchronization_state_desc
0                                  SYNCHRONIZING
1                                  SYNCHRONIZED

Everything looks good, and now let's take down the primary replica.

Note: Don't try this at home! (I mean production.)

Okay, as this is a test instance I really don't care, so I shut down the SQL services to take the primary down.

If we query the DMV again, we can see –

SELECT role_desc, connected_state_desc FROM sys.dm_hadr_availability_replica_states

role_desc                connected_state_desc
RESOLVING           DISCONNECTED

And note, there is no entry for the primary as I have taken it down.

Even though the secondary replica is in the resolving state, read connections just work:

USE DemoAG
GO
SELECT COUNT (*) AS [TotalRowCount] FROM [dbo].[InsertCount]

TotalRowCount
6886

The secondary database is in the NOT SYNCHRONIZING, NOT_HEALTHY state, however it's ONLINE.

SELECT synchronization_state_desc, synchronization_health_desc, database_state_desc FROM sys.dm_hadr_database_replica_states

synchronization_state_desc         synchronization_health_desc       database_state_desc
NOT SYNCHRONIZING               NOT_HEALTHY                           ONLINE

Here is a view from Management Studio :

AG Resolving

Conclusion:

Yes, this is a very good enhancement which will make sure your readable secondaries are highly available, and I'm pretty sure this is one of those enhancements that will push you to deploy AGs on SQL 2014 rather than opting for SQL 2012.

Thanks for reading and keep watching this space for more !

SELECT INTO Improvements in SQL Server 2014 !


SQL Server 2014 is a great release with high performance features like In-Memory OLTP, updatable columnstore indexes, etc.

These features remain the top favorites and the most compelling reasons for organizations to upgrade. However, SQL Server 2014 also includes small but very relevant and useful enhancements, and this post talks about one of them:

SELECT INTO now runs in parallel!


When I first heard about this feature, my first impression was: What… was it not running in parallel before?

No, SELECT INTO was single-threaded all these years; with SQL Server 2014 it now uses multiple threads.

Let’s do a quick walk-through of this enhancement.

In this walk-through we will use a table named MasterData which has around 12 million rows.

--Select Count
 SELECT COUNT(*) AS Row_Count FROM [dbo].[MasterData]

Row_Count
11,999,880

Data from this table will be copied over to a different table named MasterData_Copy using SELECT INTO operation.

--Capture timing (and turn on the Actual Execution Plan in SSMS)
 SET STATISTICS TIME ON
 GO
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]

This operation completed in 6 seconds (desktop class machine), and if we examine the execution plan we can observe parallelism.

SELECTINTO1

And the distribution of rows among multiple threads:

SELECTINTO2

Now let's make this a little more interesting. We will change the database compatibility level to SQL 2012 (110) and see what impact that has on the SELECT INTO operation.

--Change Database Compatibility
 ALTER DATABASE SELECTINTO_Demo SET COMPATIBILITY_LEVEL = 110
--Drop the copy from the previous run so SELECT INTO can recreate it
 DROP TABLE [dbo].[MasterData_Copy]
--Capture timing (and turn on the Actual Execution Plan in SSMS)
 SET STATISTICS TIME ON
 GO
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]

This operation completed in 11 seconds, and it generated a serial plan.

SELECTINTO3

And here is the row distribution.

SELECTINTO4

<Update 1/13/2015 2 PM> 

I received a comment from Toni which says “It might be a good idea to set MAXDOP, most likely having 16 threads isn’t the best option”.

I absolutely agree with this; a well-tested MAXDOP setting will definitely bring better results, and this is something which you have to keep in mind when queries go parallel.
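If you want to cap the parallelism for a one-off copy rather than tuning the instance-wide setting, a query-level hint works too. A minimal sketch against the same source table (MasterData_Copy2 is just a fresh target name):

--Limit this SELECT INTO to 4 threads
 SELECT * INTO [dbo].[MasterData_Copy2] FROM [dbo].[MasterData]
 OPTION (MAXDOP 4)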

</Update>

Conclusion:

If you are using SELECT INTO a lot in your environment, then I would highly recommend comparing to see what improvements you can gain out of SQL Server 2014.