Why you should think about upgrading to SQL Server 2016!


SQL Server 2005 is out of support, and you might be (or rather, you should be) planning an enterprise-wide upgrade of SQL Server. Are you thinking of upgrading to SQL Server 2014? If yes, then wait! SQL Server 2016 is packed with great features, and above all, there are plenty of enhancements to existing ones. Here is a quick walk-through of my 6 favorite features, and I will explain why SQL Server 2016 is a great choice for your next platform upgrade (when it's released (RTM)):

1. In-Memory OLTP Improvements

As you already know, In-Memory OLTP, also known as 'Hekaton' and 'In-Memory Optimization', is Microsoft's latest in-memory processing technology. It is optimized for Online Transaction Processing (OLTP), is integrated into SQL Server's Database Engine, and can be used in the exact same manner as any other Database Engine component.

In-Memory OLTP originally shipped with SQL Server 2014, and it mainly features two new constructs: memory-optimized tables and natively compiled stored procedures.

SQL Server 2016 provides an improved In-Memory OLTP engine, and it removes many restrictions that existed in SQL 2014 (phew! I was waiting for this to happen).

In-Memory OLTP SQL 2016 Improvements

| Feature/Limit | SQL Server 2014 | SQL Server 2016 |
| --- | --- | --- |
| Maximum size of durable table | 256 GB | 2 TB |
| LOB (varbinary(max), [n]varchar(max)) | Not supported | Supported |
| Transparent Data Encryption (TDE) | Not supported | Supported |
| ALTER PROCEDURE / sp_recompile | Not supported | Supported (fully online) |
| ALTER TABLE | Not supported (DROP / re-CREATE) | Partially supported* |
| DML triggers | Not supported | Partially supported (AFTER, natively compiled) |

| Feature/Limit | SQL Server 2014 | SQL Server 2016 |
| --- | --- | --- |
| Indexes on NULLable columns | Not supported | Supported |
| Foreign Keys | Not supported | Supported |
| Check/Unique Constraints | Not supported | Supported |
| Parallelism | Not supported | Supported |
| OUTER JOIN, OR, NOT, UNION [ALL], DISTINCT, EXISTS, IN | Not supported | Supported |
| SSMS Table Designer | Not supported | Supported |

The above 2 tables should help you understand why SQL Server 2016 should be your platform of choice if you are planning to deploy In-Memory OLTP for your critical workloads. I would still say In-Memory OLTP should be your last resort for fixing issues like latch contention or heavy blocking, as there are other techniques, like hash partitioning with a computed column (to reduce contention) or Read Committed Snapshot Isolation (RCSI) (to avoid blocking). Those two carry their own overheads, so evaluate all the available options and pick the right solution.
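If you have not played with the programming surface yet, here is a minimal sketch of a durable memory-optimized table and a natively compiled stored procedure, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup (all object names here are illustrative):

-- Durable memory-optimized table; the hash index needs an estimated
-- row count to size its bucket count
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure; compiled to machine code at CREATE time
CREATE PROCEDURE dbo.usp_AddCart @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CustomerId, CreatedDate)
    VALUES (@CustomerId, SYSUTCDATETIME());
END;
GO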

2. Columnstore Indexes

SQL Server 2016 introduces real-time operational analytics: the ability to run both analytics and OLTP workloads against the same database tables at the same time (yes, to an extent, and we are not talking about replacing SSAS cubes).

Benefits of Columnstore Indexes

A columnstore index can provide a very high level of data compression, typically 10x, which significantly reduces your data warehouse storage cost. Plus, for analytics they offer an order of magnitude better performance than a B-tree index. They are the preferred storage format for data warehousing and analytics workloads. Starting with SQL Server 2016, you can also use columnstore indexes for real-time analytics on your operational workload.

Recommended use cases:

  • Use a clustered columnstore index to store fact tables and large dimension tables for data warehousing workloads. This improves query performance and data compression by up to 10x.
  • Use a nonclustered columnstore index to perform analysis in real time on an OLTP workload.

You might have many analytics queries today that you would never run directly against an OLTP database, mainly because they might impact OLTP workload performance. With operational analytics you have an opportunity to test and ensure those analytics queries can indeed be run against an OLTP database, or you can even consider leveraging AlwaysOn AG secondary replicas powered by a nonclustered columnstore index to offload your analytics workloads.
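As a sketch, both flavors are a single DDL statement (table and column names here are illustrative):

-- Data warehouse: the clustered columnstore becomes the table's storage format
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
GO

-- Operational analytics: an updatable nonclustered columnstore on an OLTP table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders_Analytics
    ON dbo.Orders (OrderDate, ProductId, Quantity, UnitPrice);
GO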

3. Enhanced AlwaysOn Availability Groups

The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and one to eight sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.

AlwaysOn AG SQL 2016 Improvements

  • Failover on database health
  • Distributed Transaction Coordinator (DTC) support
  • 3 synchronous replicas
  • Optimized log transport
  • Load balancing across readable secondary replicas

The first enhancement is self-explanatory: a failover can now be triggered based on the health of the database itself. That can come in handy in some situations.

DTC support is promising. In the past I have worked with some 3rd-party apps that relied heavily on DTC, and it's good to see that SQL 2016 supports it.
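Both of these enhancements surface as plain DDL. Here is a hedged sketch of an AG created with the new options; the server, endpoint, and database names are all illustrative, and note that DTC_SUPPORT has to be declared when the group is created:

CREATE AVAILABILITY GROUP [AG_Sales]
WITH (DB_FAILOVER = ON,       -- fail over on database health, not just instance health
      DTC_SUPPORT = PER_DB)   -- new in SQL 2016
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://SQLNODE1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://SQLNODE2.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC);
GO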

Optimized log transport is my favorite enhancement. On a highly concurrent OLTP system, there is always a chance for the secondaries to fall behind and jeopardize the RTO/RPO service level agreements. With the optimized log transport and parallel redo threads, this overhead is reduced.

4. Native JSON Support

SQL Server 2016 natively supports JSON. With native support you have the ability to format and export data as a JSON string, load JSON text into tables, extract values from JSON text, index properties in JSON text stored in columns, and so on.

The first thing to be aware of is that built-in JSON support is not the same as a native JSON type. In SQL Server 2016, JSON is represented as the NVARCHAR type.

JSON Use cases

  • You can accept JSON, easily parse it, and store it as relational data
  • You can easily export relational data as JSON
  • You can correlate relational and non-relational data
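A quick sketch of those three moves; the JSON shape and names are illustrative, and OPENJSON requires database compatibility level 130:

-- Export rows as JSON
SELECT TOP (5) name, object_id
FROM sys.objects
FOR JSON AUTO;
GO

-- Shred JSON text into rows, then extract a single scalar
DECLARE @json NVARCHAR(MAX) =
    N'[{"OrderId":1,"Customer":"Contoso","Total":59.99}]';

SELECT OrderId, Customer, Total
FROM OPENJSON(@json)
WITH (OrderId INT, Customer NVARCHAR(50), Total DECIMAL(10,2));

SELECT JSON_VALUE(@json, '$[0].Customer') AS FirstCustomer;
GO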

5. Always Encrypted

Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (e.g. U.S. social security numbers), stored in SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access).

Always Encrypted makes encryption transparent to applications. An Always Encrypted-enabled driver installed on the client computer achieves this by automatically encrypting and decrypting sensitive data in the client application. The driver encrypts the data in sensitive columns before passing the data to the Database Engine, and automatically rewrites queries so that the semantics to the application are preserved. Similarly, the driver transparently decrypts data, stored in encrypted database columns, contained in query results.

Data is vulnerable both at rest and in motion. Features like TDE (Transparent Data Encryption) protect data at rest, but until SQL 2016 there was no engine feature that kept the data protected once it left the database. Attacks like a man-in-the-middle attack can compromise data in motion, but with SQL 2016 and Always Encrypted the sensitive columns remain encrypted all the way to the client driver, so we now have a solution to this problem.
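At the table level it looks like the sketch below, assuming the column master key and the column encryption key (CEK_Auto1 here) have already been provisioned, for example through the SSMS wizard:

CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic encryption allows equality lookups;
    -- character columns must use a BIN2 collation
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
GO

The client then connects with an Always Encrypted-enabled driver (for example, Column Encryption Setting=Enabled in an ADO.NET connection string) and works with plain values; the encryption happens in the driver.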

6. Temporal tables

Temporal tables, also known as system-versioned tables, allow SQL Server to automatically keep a history of the data in a table. Temporal tables were introduced in the ANSI SQL:2011 standard and are now available in SQL Server 2016.

A system-versioned table allows you to query updated and deleted data, while a normal table can only return the current data. For example, if you update a column value from 5 to 10, you can only retrieve the value 10 from a normal table. A temporal table also allows you to retrieve the old value 5. This is accomplished by keeping a history table, which stores the old data together with a start and end date indicating when each record was active.

A classic use case for temporal tables is auditing. With temporal tables you can find out what values a specific entity has had over its entire lifetime.

Another use case is quick recovery when you or someone else deletes a row from a table (yes, the same old DELETE without a WHERE clause!). With temporal tables, you can retrieve the deleted row from the history table and insert it back into the main table.
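Here is a minimal sketch of a system-versioned table and a time-travel query (names are illustrative):

CREATE TABLE dbo.Product
(
    ProductId INT NOT NULL PRIMARY KEY,
    Price     DECIMAL(10,2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));
GO

-- What did this product's price look like on January 1st?
SELECT ProductId, Price
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2016-01-01T00:00:00'
WHERE ProductId = 1;
GO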

So, you might be curious: what is the difference between temporal tables and CDC (Change Data Capture)? Here is the explanation:

CDC is usually used to keep changes for a short period of time and feed them to external consumers such as an ETL process. Temporal tables can keep historic data for very long periods, and are usually used for auditing/legal purposes and time-travel queries.

Conclusion:

These 6 features should convince you to consider upgrading to SQL 2016 when it's released. There are many other exciting features, like PolyBase and a completely overhauled SSRS, and we will talk more about those in upcoming posts.

Thanks for reading and keep watching this space for more!

Curious case of SYS.MESSAGES


A strange error struck me last week, and this post is all about that.

What do you normally expect when you run SELECT * FROM sys.messages? The query should return a row for each message_id and language_id combination of the error messages in the system, for both system-defined and user-defined messages.
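For example, to look at just the English messages:

-- One row per message per language; 1033 is English
SELECT message_id, language_id, severity, text
FROM sys.messages
WHERE language_id = 1033;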

Rather than returning rows, the system generated an error for me:

Msg 18058, Level 17, State 0, Line 1
Failed to load format string for error 362, language id 1033. Operating system error: 317(The system cannot find message text for message number 0x%1 in the message file for %2.). Check that the resource file matches SQL Server executable, and resource file in localized directory matches the file under English directory. Also check memory usage.

As always, my first instinct was to check what version of SQL Server I was running. It was SQL 2008 R2 SP3.

Build : Microsoft SQL Server 2008 R2 (SP3) – 10.50.6000.34 (X64)

I knew we had already hit some issues during our SP3 patching, and the specific portion of the error message that said "Check that the resource file matches SQL Server executable, and resource file in localized directory matches the file under English directory" added to my suspicion that the SP3 upgrade process might have messed up some files.

A quick search on the MSDN forums revealed info about the file sqlevn70.rll in the installation folder (MSSQL\Binn\Resources\1033\sqlevn70.rll), which might not have been updated when the SP3 update was applied, and this info opened new doors.

Some folks had success by replacing this file with a copy from another server on the same build. I tried the same thing: I renamed the old file and copied the new one in.

The picture below highlights the size/date difference between the two files:

[Image: size/date comparison of the old and new sqlevn70.rll files]

After replacing the file, I ran the same query, SELECT * FROM sys.messages, and got the same error. I deliberately tried this before restarting anything, to see whether the change would be picked up on the fly or whether a service restart was needed. A restart was indeed needed, and the issue was resolved after it.

Conclusion:

The same error might be hitting your servers too, and I'm still working with Microsoft to see if there are any additional (critical) implications of the installer not updating the files in some specific cases. I will update this post when I have more info in this regard.

Thanks for reading and keep watching this space for more.

Encrypting AlwaysOn Availability Group Transport - Some thoughts!


Transport security for AlwaysOn Availability Groups is an interesting topic, and a debatable one any day.

Transport security involves encryption of the packets exchanged between the servers involved, and for an AlwaysOn AG this encryption is configured on the Hadr_endpoint.

By default, traffic is encrypted using the AES standard. If you are interested in knowing more about AES, please refer here.

Here is the code to create the Hadr_endpoint:

USE [master]
GO

/****** Object: Endpoint [Hadr_endpoint] Script Date: 1/27/2016 3:25:01 PM ******/
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATA_MIRRORING (ROLE = ALL,
                        AUTHENTICATION = WINDOWS NEGOTIATE,
                        ENCRYPTION = REQUIRED ALGORITHM AES)
GO

This means that, by default, the endpoint ensures data is encrypted when it is sent over the wire, and it can only connect to endpoints that also use encryption.

You also have an option to turn off encryption. Yes, you heard that right: you can disable encryption by using the ALTER ENDPOINT command.

If you look closely at the ALTER command for the endpoint, there are 3 encryption options:

[ , ENCRYPTION = { DISABLED | {{SUPPORTED | REQUIRED } [ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] } ]
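So turning it off is a one-liner. A sketch (remember that both ends of the conversation must end up with compatible encryption settings):

ALTER ENDPOINT [Hadr_endpoint]
    FOR DATA_MIRRORING (ENCRYPTION = DISABLED);
GO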

Disabling encryption on the endpoint is where I have some thoughts.

Think about this scenario:

What if you have a very well-secured network infrastructure, and there is no way something bad like packet sniffing can happen? In that case, is there a need for encryption at all?

Yes, as stated earlier this is a debatable topic and the best answer to this question will be “It depends”.

The right approach is to understand the overhead AES encryption adds to the data transfer (AES encryption is usually combined with padding) and to analyze whether there are any real benefits to disabling it.

There might be some edge cases where disabling encryption in a controlled fashion yields benefits.

This is definitely food for thought, and I'm really interested to hear your thoughts on it.

Thanks for reading, and keep watching this space for more!

SQL Server 2016 CTP 3 Setup – UI Improvements


There are some improvements to the setup UI in SQL Server 2016 CTP3, and it's great to see Microsoft making these changes as it receives feedback from customers and early adopters.

My earlier post on setup UI changes for CTP 2 is available here.

The first change is around granting the SQL Server Engine service account the privilege necessary to enable IFI (Instant File Initialization) during setup itself.

This was a surprise when I saw it for the first time:

[Image: setup wizard showing the new Grant Perform Volume Maintenance Task check box]

Notice the check box for the Grant Perform Volume Maintenance Task privilege.

There is an interesting post by Nic Cain(B/T) on this topic and you can read it here.

The second change is around tempdb configuration, and here are the new options:

[Image: the new TempDB configuration tab in the setup wizard]

The wizard now checks the number of cores on the server and pre-populates the number of tempdb data files accordingly.

It also provides options to define the initial size and the autogrowth increment.

This might not be something the power users care about (by power users, I mean people who automate a lot of SQL Server deployments and already handle this with custom scripts); however, it is a real value add for people who do manual SQL Server installations.

Going forward, we also don't need to turn on trace flags 1117 and 1118, as their behavior is now the default.
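A quick sanity check you can run after setup completes, to confirm the tempdb layout the wizard created:

-- Number, location, and size of the tempdb data files
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';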

Conclusion:

I'm really glad that Microsoft is giving users options to follow some of the long-standing best practices during the install process itself, and this is a welcome change.

Thanks for reading and keep watching this space for more!

Azure Virtual Machine Blog Series – Changing the size of a VM


I'm starting a new blog series on Microsoft Azure Virtual Machines, and my goal is to write posts exclusively on Azure VM/SQL Server related topics. Here is the first post:

“Changing the size of a VM”

When creating an Azure VM to host your SQL Server, you have an option to select the pricing tier. The pricing tier determines your VM configuration in terms of CPU cores, memory, number of disks, IOPS, etc.

Azure provides plenty of pricing tier options, and you have the power to create a VM with the smallest configuration or pick a monster like the one below:

[Image: Azure VM pricing tier showing a 32-core, 448 GB option]

Gone are the days when you had to think twice about moving your SQL Server workload to the cloud because of a lack of compute power. 32 cores, 448 GB of memory, and 64 disks with 80,000 IOPS! It can't get better than this!

What if I create a VM with a standard pricing tier like DS1 Standard and, after running it for a while, realize that it's not meeting my needs and my SQL instance starts freaking out?

[Image: the DS1 Standard pricing tier selection]

Hmm, bad planning, isn't it? Yeah, it's definitely bad planning; however, Azure covers you pretty well.

If I check the VM configuration, I can see it currently has 1 core and 3.5 GB of memory.

[Image: current VM size showing 1 core and 3.5 GB of memory]

If I go to the portal, choose the Settings option for the VM, and then select Size, I have multiple pricing tier options to pick from:

[Image: the size selection blade in the portal]

If I scroll down the selection options, I can see that the G series is not available. This means you cannot scale up from a D series to a G series directly. Again, not planning ahead can get you into trouble.

[Image: scrolled size list with no G series sizes offered]

What if I select DS3 Standard? Will it change my VM configuration on the fly? Let's check it out.

On the portal, I'm selecting the DS3 option, which is 4 cores and 14 GB of memory.

Note: this will restart your VM behind the scenes. If you are following this article and trying to change the pricing tier of a critical production server, then stop, and plan to do this activity during a planned downtime.

[Image: selecting the DS3 Standard size]

Within a few seconds, I got an alert that says:

[Image: portal alert confirming the resize completed]

Neat! Let's check the configuration:

[Image: updated VM size showing 4 cores and 14 GB of memory]

Indeed the new configuration is applied after a quick reboot.

Conclusion – 

The flexibility to change the VM size easily can come in handy sometimes; however, carefully analyzing the workload's compute demands and choosing a correctly sized VM up front is always the recommended approach.

Thanks for reading and keep watching this space for more Azure VM related posts.

SQL Server Data Tools is now a separate installer – Things to know!


Recently I had to develop some SSRS reports, and my desktop machine had SQL Server 2014 installed on it. I had previously used SQL Server Data Tools, so I searched for it under Programs. To my surprise, it was missing.

I thought I might have missed adding it while installing SQL 2014 and re-ran the setup to add it. There came my next surprise:

There is no option to add SQL Server Data Tools!

[Image: SQL Server 2014 setup feature list with no SQL Server Data Tools option]

After some research, I learned that Microsoft moved away from bundling Data Tools with SQL 2014 (for obvious reasons!), and this link provides the required bits.

Picking the right installer from that page was again a little confusing, but eventually I was able to download the right product after carefully checking the descriptions.

[Image: the SSDT-BI for Visual Studio 2013 download selection]

Installing the tool was pretty straightforward, and here are the instructions:

  1. Run setup.exe (after extracting the installer).
  2. Click on "New SQL Server stand-alone installation". [Image: SSDT3]
  3. Perform a new installation. [Image: SSDT4]
  4. Choose "SQL Server Data Tools – BI for Visual Studio 2013". [Image: SSDT5]
  5. Install.
  6. Eventually you will have it in the Programs list. [Image: SSDT6]

Conclusion –

Microsoft's decision to unbundle SSDT from the SQL 2014 installer is understandable, as it gives them the opportunity to update the tool at a faster pace.

Thanks for reading, and keep watching this space for more!

Why I won’t be running SQL Server Standard Edition on Business Critical Servers!


I keep getting questions about which core features are missing in the Standard edition of SQL Server, and why there is a need to install Enterprise edition for business-critical systems. Here are my top features that are missing in Standard edition:

Scale limits

Memory – Standard only supports 128 GB memory.

Memory is relatively cheap now, and I can't think of any reason why someone can't have more memory on business-critical servers.

CPU cores – limited to the lesser of 4 sockets or 16 cores.

High Availability

AlwaysOn Failover Cluster Instances – Possible but only 2 nodes supported.

AlwaysOn Availability Groups*.

*AGs are the future of the HA/DR strategy within SQL Server, and because of great demand Microsoft will be allowing AGs in the Standard edition of SQL 2016; however, they will only support two replicas. Just two replicas is not sufficient for a solid HA/DR architecture.

Online page and file restore.

Page restores can come in handy if you have to deal with corruption in a very large database where only a few pages are affected.
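For reference, a hedged sketch of what a page-level restore looks like; the database name, page ID, and backup paths are illustrative, and in practice you would follow with the full log backup chain plus a tail-log backup:

-- Restore just the damaged page from the last full backup
RESTORE DATABASE [SalesDB]
    PAGE = '1:57'
    FROM DISK = N'D:\Backup\SalesDB_full.bak'
    WITH NORECOVERY;

-- Apply log backups (and finally the tail-log backup) to bring the page current
RESTORE LOG [SalesDB]
    FROM DISK = N'D:\Backup\SalesDB_taillog.trn'
    WITH RECOVERY;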

Online indexing.

You don't really want to lock the table while doing index maintenance, right? Online index operations are key to maintaining database uptime.
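A sketch with illustrative names; the ONLINE option is exactly what Standard edition lacks:

-- Rebuild an index while the table stays available for reads and writes
ALTER INDEX IX_Orders_CustomerId
    ON dbo.Orders
    REBUILD WITH (ONLINE = ON);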

Online schema change.

Hot Add Memory and CPU.

Database Snapshots.

Scalability and performance

Table and index partitioning.

Database sizes are increasing like never before, and 3-5 TB databases are very common these days. Maintaining these databases without a partitioning strategy can be a very bad idea.

Data compression.

One of the most ignored features in SQL Server. Compression can be really helpful for saving space without much overhead.
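Enabling it is a one-time rebuild, and you can estimate the savings first (the table name here is illustrative):

-- Estimate how much space PAGE compression would save
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'SalesHistory',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Then enable it
ALTER TABLE dbo.SalesHistory
    REBUILD WITH (DATA_COMPRESSION = PAGE);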

Resource Governor.

IO Resource Governance.

In-Memory OLTP.

In-Memory OLTP might have a lot of restrictions today; however, things are going to change soon. I'm pretty sure there will be multiple use cases once this feature matures.

Memory optimized columnstore indexes.

Columnstore indexes dramatically improve DW workload performance, and this feature will get more and more enhancements in the future.

Security

Transparent database encryption.

Conclusion – 

Business-critical systems need to be treated as critical: there is a potential risk of losing revenue if these systems are not highly available and uptime requirements are not met. Performance is also a key factor, and SQL Server Standard edition is not the right edition for these requirements.