Amazon RDS Custom for SQL Server – A new beginning


Imagine getting SA (sysadmin) access to your RDS SQL Server instance! Well, that was not possible in the past, but during AWS re:Invent 2021, Amazon RDS Custom for SQL Server was announced, and that changes the game; it's a new beginning. With RDS SQL Server, AWS manages the hardware and routine tasks like OS installation, OS patching, DB patching, HA, and scaling, and helps you focus on database and application optimization. These routine tasks are what's called undifferentiated heavy lifting, and they often don't add value to your business. By offloading these tasks to AWS, you get a chance to spend your time on the things that matter to your business: optimizing queries, working closely with developers to understand their goals, modeling the data, and helping them choose the right purpose-built engine. This is the value add of RDS SQL Server.

RDS SQL Server has been innovating at a rapid pace, and a lot of features were added recently (like SSIS, SSRS, and SSAS) to close the gap and offer an experience close to running SQL Server on EC2 or on premises; however, there were still gaps. As a user, you never had access to the host operating system, you didn't have SA access to the instance, you couldn't run CLR, and anything that needed elevated privileges was a no-go. This is what is changing with RDS Custom for SQL Server. With RDS Custom, there is a notion of shared responsibility in play now. You are now allowed to make custom changes to the database, get access to the host, and do things that were never possible before.
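To give a flavor of what this unlocks, here is a minimal sketch of the kind of elevated action that RDS Custom now permits and traditional RDS never did: enabling CLR integration, which requires sysadmin rights.

--Enable CLR integration; this needs sysadmin, which RDS Custom now grants
 EXEC sp_configure 'clr enabled', 1
 RECONFIGURE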

Support Perimeter is a new concept within RDS Custom: it monitors for and alerts on invalid configurations resulting from your custom changes. The focus of the support perimeter is the infrastructure, the operating system, and the database. If your changes are within the allowed perimeter, you are good. The support perimeter exists to ensure that your changes won't break RDS automation. RDS Custom still offers you the ability to resume automation, and RDS will monitor, back up, and manage the instance for you as usual.

Let’s take a quick look at spinning up RDS Custom for SQL Server –

Walkthrough

First things first, there are some prerequisites that need to be in place before you create an RDS Custom for SQL Server instance. They are:

IAM instance profile

KMS key

VPC endpoints

The instance profile role allows the RDS Custom instance to talk to many other AWS services like S3, CloudWatch Logs, KMS, Secrets Manager, and SSM. These privileges are needed for the ongoing automation. Below is a quick snapshot of the policy summary:

By default, RDS Custom instances are required to have encryption at rest (yes, security is job zero!), and the KMS key is required to launch an RDS Custom instance.

VPC endpoints facilitate outbound connections to services like S3, SSM, and CloudWatch when an RDS Custom instance is created as private. Below is a quick snapshot of all the required VPC endpoints:

To help ease the creation of all these prerequisites, you can leverage the CloudFormation template which is available in the documentation.

Once the prerequisites are taken care of, you can head to the RDS console to create a database instance. Under the create database section, select Standard create, select SQL Server, and select Amazon RDS Custom.

You can select the usual settings like edition, version, instance identifier, master username/password, instance class, storage, VPC, subnet group, public access (yes/no), VPC security group, AZ, and database port. The new changes are in the RDS Custom security section, where you select the instance profile that was created in the prerequisites section and the KMS key. Below is a quick snapshot for reference:

At the time of instance creation, you have to choose full automation, and you have the ability to pause the automation after the database instance is created to apply your custom changes. The automation mode determines whether RDS Custom provides monitoring and instance recovery for your instance. If you want to make changes to the host, you can pause RDS Custom automation for a specific period. The pause ensures that your customizations don't interfere with RDS Custom automation.

Select additional configurations like backups and the maintenance window as usual, and proceed to create the database.

It will take a while for the instance/database to be created; however, while the creation process is going on, you can see that a new EC2 instance has been created in your account. Below is a quick snapshot for reference:

This is a change in behavior with RDS Custom for SQL Server. When you create a traditional RDS SQL Server database, the instance is created in an AWS account that is managed by the RDS team and is abstracted away from you; you just get an endpoint that you connect to. Having an EC2 instance in your own account means you can now RDP to the box and take custom actions.

The quickest way to connect to the instance is to leverage Session Manager. Select the EC2 instance, select Connect, and choose Session Manager.

And for the very first time, I’m logged into the box that runs RDS SQL Server! Isn’t that cool?

Like I stated at the beginning, the admin user is now SA for the instance (a quick check below):
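If you want to confirm it yourself after connecting (assuming you logged in with the master user created during provisioning), a one-liner does it:

--Returns 1 when the current login is a member of the sysadmin role
 SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin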

Conclusion

With RDS Custom for SQL Server, the door is pretty much open, and you have a lot of flexibility now to migrate more workloads to a managed platform. Take RDS Custom for SQL Server for a quick spin and do let me know your experience with it. I will be testing the HA capabilities, making some custom changes, and simulating some on-premises HA behaviors soon, and will follow up with a new post to talk about those results.

Keep watching this space for more! #BuildON

SQL Server 2016 CTP 3 Setup – UI Improvements


There are some improvements in the setup UI for SQL Server 2016 CTP3, and it's great to see Microsoft making these changes as they receive feedback from multiple customers and early adopters.

My earlier post on setup UI changes for CTP 2 is available here.

The 1st change is around granting the SQL Server engine service account the privilege necessary to enable IFI (Instant File Initialization) during the setup itself.

This was a surprise when I saw it for the 1st time:

New Setup

You can notice the check box for the Grant Perform Volume Maintenance Task privilege.
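If you want to double-check later that IFI is actually in effect, sys.dm_server_services exposes a column for it in newer builds (SQL Server 2016 SP1 onward; on CTP3 you would instead verify the Perform Volume Maintenance Tasks user right in the Local Security Policy):

--instant_file_initialization_enabled is available from SQL Server 2016 SP1
 SELECT servicename, instant_file_initialization_enabled
 FROM sys.dm_server_services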

There is an interesting post by Nic Cain (B/T) on this topic, and you can read it here.

The 2nd change is around tempdb configuration, and here are the new options:

New Setup2

The wizard will now check the number of cores on the server and come up with a pre-populated value for the number of tempdb data files.

It also provides options to define the initial size and the autogrowth size.

This might not be something the power users care about (when I say power users, I mean people who automate a lot of SQL Server deployments and are already doing this with custom scripts); however, this is a real value add for people who do manual SQL Server installations.

Going forward we also don't need to turn on trace flags 1117 and 1118, as their behavior is taken care of by default (you can verify the resulting tempdb layout with the quick query below).
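A quick way to confirm what the wizard actually created on the freshly installed instance:

--List tempdb files with their initial size and growth settings
 SELECT name, type_desc, size * 8 / 1024 AS size_mb, growth, is_percent_growth
 FROM tempdb.sys.database_files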

Conclusion:

I’m really glad that Microsoft is giving options for users to follow some of the long-standing best practices during the install process itself, and this is a welcome change.

Thanks for reading and keep watching this space for more!

Azure Virtual Machine Blog Series – Changing the size of a VM


I’m starting a new blog series on Microsoft Azure Virtual Machines, and my goal is to write blog posts exclusively on Azure VM/SQL Server related topics. Here is the 1st post –

“Changing the size of a VM”

When creating an Azure VM to host your SQL Server, you have an option to select the pricing tier. This pricing tier determines your VM configuration in terms of CPU cores, memory, number of disks, IOPS, etc.

Azure provides plenty of pricing tier options, and you have the power to create a VM with the least amount of configuration or pick a monster like the one below –

AzureVM1

Gone are the days when you had to think twice about moving your SQL Server workload to the cloud because of the lack of compute power. 32 cores, 448 GB of memory, 64 disks with 80,000 IOPS! Can’t get better than this!

What if I created a VM with a standard pricing tier like DS1 Standard, and after running it for a while realized that it’s not meeting my needs and my SQL instance starts freaking out?

AzureVM2

Hmm, bad planning, isn’t it? Yeah, it’s definitely bad planning; however, Azure has you covered pretty well.

If I check the VM configuration, I can see its current configuration is 1 core and 3.5 GB of memory. AzureVM3

If I go to the portal, choose the Settings option for the VM, and then select Size, I have multiple pricing tier options to choose from –

AzureVM4

If I scroll down the selection options, I can see that the G series is not available. This means that you cannot scale up from a D series to a G series directly. Again, not planning ahead can give you trouble.

AzureVM5

 

What if I select DS3 Standard? Will it change my VM configuration on the fly? Let’s check it out –

On the portal I’m selecting the DS3 option which is 4 cores and 14 GB memory.

Note – This will restart your VM behind the scenes. If you are following this article and trying to change the pricing tier of a critical production server, then stop and plan to do this activity during a planned downtime.

AzureVM7

Within a few seconds, I got an alert which says –

AzureVM6

 

Neat! Let’s check the configuration –

AzureVM8

Indeed, the new configuration is applied after a quick reboot.

Conclusion – 

The flexibility to change the VM size easily can come in handy sometimes; however, carefully analyzing the workload’s compute demands and choosing a correctly sized VM up front is always the recommended approach.

Thanks for reading and keep watching this space for more Azure VM related posts.

Why I won’t be running SQL Server Standard Edition on Business Critical Servers!


I keep getting questions about which core features are missing in the Standard Edition of SQL Server, and why there is a need to install Enterprise Edition for business critical systems. Here are my top features which are missing in Standard Edition –

Scale limits

Memory – Standard only supports 128 GB memory.

Memory is relatively cheap now, and I can’t think of any reason why someone wouldn’t have more memory on business critical servers.

CPU cores – limited to the lesser of 4 sockets or 16 cores. (A quick way to check which edition you are running follows.)
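Before auditing a server against this list, it helps to check what you are actually running:

--Check the edition and version of the current instance
 SELECT SERVERPROPERTY('Edition') AS edition,
 SERVERPROPERTY('ProductVersion') AS product_version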

High Availability

AlwaysOn Failover Cluster Instances – possible, but only 2 nodes are supported.

AlwaysOn Availability Groups*.

*AGs are the future of the HA/DR strategy within SQL Server, and because of great demand, Microsoft will be allowing AGs in the Standard Edition of SQL Server 2016; however, it will only support 2 nodes. Support for just 2 nodes is not sufficient for a solid HA/DR architecture.

Online page and file restore.

Page restores can come in handy if you have to deal with corruption in a very large database where only a few pages are corrupt.

Online indexing.

You don’t really want to lock the table while doing index maintenance, right? Online index operations are key to maintaining the uptime of the database (a quick sketch below).
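For reference, this is the Enterprise-only flavor of a rebuild; the table and index names here are hypothetical:

--Rebuild an index without taking long-term table locks (Enterprise only)
 ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
 REBUILD WITH (ONLINE = ON)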

Online schema change.

Hot Add Memory and CPU.

Database Snapshots.

Scalability and performance

Table and index partitioning.

Database sizes are growing like never before, and 3–5 TB databases are very common these days. Maintaining these databases without a partitioning strategy can be a very bad idea (see the sketch below).
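A minimal sketch of what such a strategy looks like; the names and boundary dates are hypothetical:

--Monthly range partitioning on a datetime column
 CREATE PARTITION FUNCTION pf_Monthly (datetime)
 AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01')
 CREATE PARTITION SCHEME ps_Monthly
 AS PARTITION pf_Monthly ALL TO ([PRIMARY])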

Data compression.

One of the most ignored features in SQL Server. Compression can be really helpful to save some space without much overhead (a quick sketch below).
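A quick sketch with a hypothetical table name; it’s worth running the built-in estimate before rebuilding:

--Estimate the savings, then rebuild the table with page compression
 EXEC sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'PAGE'
 ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE)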

Resource Governor.

IO Resource Governance.

In-Memory OLTP.

In-Memory OLTP might have a lot of restrictions today; however, things are going to change soon. I’m pretty sure there will be multiple use cases once this feature is stable.

Memory optimized columnstore indexes.

Columnstore indexes dramatically improve DW workload performance, and this feature will get more and more enhancements in the future.

Security

Transparent Data Encryption (TDE).

Conclusion – 

Business critical systems need to be treated as critical, as there is a potential risk of losing revenue if these systems are not highly available and uptime requirements are not met. Performance is also a key factor, and SQL Server Standard Edition is not the right edition for these requirements.

SQL Server 2016 Announced, and it stretches to the Cloud!


Last week during Ignite, Microsoft CEO Satya Nadella announced the next version of SQL Server (until then it was referred to as SQL vNext), and it’s SQL Server 2016.

Satya started the Ignite keynote, and he looked really pumped when he announced SQL 2016.

“SQL 2016 perhaps is the biggest breakthrough you have ever seen in database technology”

2016_1

 

He was focusing on the Stretch Database capabilities, and he called it an infinite database where you have the capability to pick a single table and stretch it to Azure without any code changes.

The keyword here is “without any code changes”.

SQL Server 2016 is packed with some really exciting features, and below is my top favorite list (yes, ranked from 1 to 3) –

  1. Operational Analytics – In Microsoft SQL Server 2016, users can run analytic queries on in-memory and disk-based OLTP tables with minimal impact on business critical OLTP workloads, requiring no application changes. The concept is based on leveraging In-Memory OLTP + the in-memory columnstore. This indeed is game changing. The idea of running analytic queries on OLTP tables might be a strict ‘No’ now; however, as SQL Server evolves, we can expect the thought process to change.
  2. Always Encrypted –  

We have had Transparent Data Encryption (TDE) all these years, and it’s been doing a great job. So why is there a need for something like Always Encrypted?

TDE encrypts data at rest. As soon as the data is in motion, it’s vulnerable; a classic example is a man-in-the-middle attack.

Always Encrypted protects data at rest and in motion. This feature lets applications encrypt and decrypt critical data automatically, and it happens seamlessly in ADO.NET.

The master key resides with the application, and no application changes are required (a rough sketch of the column syntax below).
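Going by what has been shown publicly so far, the column definition looks roughly like the sketch below; the column encryption key name is hypothetical, and the pre-release syntax may still change:

--Column-level encryption declared in the table definition (deterministic
--encryption requires a BIN2 collation; CEK_Auto1 is an assumed key name)
 CREATE TABLE dbo.Customers (
 CustomerID INT IDENTITY PRIMARY KEY,
 SSN CHAR(11) COLLATE Latin1_General_BIN2
 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
 ENCRYPTION_TYPE = DETERMINISTIC,
 ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
 )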

3.   Stretch Database – 

We all have hot and cold data in our databases, and in the majority of cases this data will be in the same database and the same table, without any specific partitions.

Years-old data which might be useful only for audit purposes will be utilizing costly storage space and adding extra maintenance overhead.

SQL Server 2016 will bring in a new concept called Stretch Database, wherein you have the ability to pick a specific table and stretch it to Azure.

If an application queries for the historical data, it will be retrieved from Azure (a rough sketch of how you would turn this on is below).
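Per the pre-release material, enabling it looks roughly like this; the table name is hypothetical, the database must first be linked to an Azure server, and the exact syntax may well change before RTM:

--Allow the instance to stretch data to Azure
 EXEC sp_configure 'remote data archive', 1
 RECONFIGURE
--Stretch an individual table (hypothetical name; syntax subject to change)
 ALTER TABLE dbo.OrderHistory
 SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND))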

Storage in Azure is cheap and the price will always go down as long as the “Cloud War” is happening between Microsoft, Amazon and Google.

With concepts like Always Encrypted, the data that resides in Azure, or is in motion to it, stays secure.

Apart from these 3 features, I’m really looking forward to the Query Store and the round-robin load balancing capabilities for secondaries in AlwaysOn Availability Groups.

There will be plenty of In-Memory OLTP enhancements, and it will cover a larger T-SQL surface area. More on this will follow shortly.

Conclusion – 

Yes, SQL Server 2016 is a good release with a lot of nice features and enhancements.

I’m pretty happy with the vision of the Data Platform Group within Microsoft, and I’m really looking forward to writing more about these features in the upcoming months.

Thanks for reading and keep watching this space for more!

SELECT INTO Improvements in SQL Server 2014!


SQL Server 2014 is a great release with high performance features like In-Memory OLTP, updatable columnstore indexes, etc.

These features still remain the top favorites and the most compelling reasons for organizations to upgrade. However, SQL Server 2014 also includes very small but very relevant and useful enhancements, and this post talks about one of them:

SELECT INTO now runs in parallel!

MP900386077

When I first heard about this feature, my first impression was: What… was it not running in parallel before?

No, SELECT INTO was single-threaded all these years, and with SQL Server 2014 it will now use multiple threads.

Let’s do a quick walk-through of this enhancement.

In this walk-through we will use a table named MasterData which has around 12 million rows.

--Select Count
 SELECT COUNT(*) AS Row_Count FROM [dbo].[MasterData]

Row_Count
11,999,880

Data from this table will be copied over to a different table named MasterData_Copy using SELECT INTO operation.

--Measure elapsed time (also turn on the Actual Execution Plan in SSMS)
 SET STATISTICS TIME ON
 GO
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]

This operation was completed in 6 seconds (desktop-class machine), and if we examine the execution plan we can observe parallelism.

SELECTINTO1

And the distribution of rows among multiple threads:

SELECTINTO2

Now let’s make this a little more interesting. We will change the database compatibility level to SQL Server 2012 (110) and see what impact that has on the SELECT INTO operation.

--Change the database compatibility level to SQL Server 2012 (110)
 ALTER DATABASE SELECTINTO_Demo SET COMPATIBILITY_LEVEL = 110
--Drop the copy created by the previous run so SELECT INTO can recreate it
 DROP TABLE [dbo].[MasterData_Copy]
 GO
 SET STATISTICS TIME ON
 GO
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]

This operation was completed in 11 seconds and it generated a serial plan.

SELECTINTO3

And here is the row distribution.

SELECTINTO4

<Update 1/13/2015 2 PM> 

I received a comment from Toni which says “It might be a good idea to set MAXDOP, most likely having 16 threads isn’t the best option”.

I absolutely agree with this, and a well-tested MAXDOP setting will definitely bring better results; this is something you have to keep in mind when queries go parallel (a quick sketch below).
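For example, you could cap this particular statement with a query hint instead of changing the instance-wide setting; 4 is just an illustrative value that you would tune for your own hardware:

--Cap the parallel SELECT INTO at 4 threads via a query hint
 SELECT * INTO [dbo].[MasterData_Copy] FROM [dbo].[MasterData]
 OPTION (MAXDOP 4)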

</Update>

Conclusion:

If you are using SELECT INTO a lot in your environment, then I would highly recommend comparing and seeing what improvements you can gain out of SQL Server 2014.

#SQLPASS #Summit14 Day 2 Keynote Highlights!


This was one of those days I was waiting for. You might be wondering what’s special. Yes, this day is very special because Rimma Nehme gave a keynote on Cloud Databases 101.

Rimma Nehme is one of those super smart people who work for the Microsoft Gray Systems Lab along with the very special Dr. David DeWitt.

Day 2 started with a talk by Adam Jorgensen, PASS Executive Vice President of Finance.
Adam talked about how PASS operates, where the money comes from, and how it’s used. The key thing to note is that PASS is in a good financial state with a healthy rainy-day fund.

IMG_3247

Adam on stage!

Thomas LaRock was back on stage on day 2, and he started by thanking outgoing board member Sri Sridharan. Kudos to Sri for all his efforts. He also introduced the new board member, Grant Fritchey.

Next up was Denise McInerney, VP of Marketing, and she talked about her experience of PASS and the career-changing decisions she took. Denise mentioned how PASS changes people’s careers.

She thanked the volunteers for their hard work, and the winner of the PASSion award was announced: Andrey Korshikov.

Denise confirmed that Summit will be back in Seattle Oct 27–30, 2015, and that’s great news!

The crowd was eagerly waiting for the keynote to start, and Denise requested Rimma to take over the stage.
Rimma talked a little about herself and cracked some jokes about Dr. David DeWitt. She added that she is a great fan of the SQL Server/PASS community.

IMG_3249

Rimma Nehme on stage giving Day 2 Keynote on Cloud Database 101

The agenda of the talk was:
  1. Why cloud?
  2. What is a cloud DB?
  3. How are they built?
  4. What is my role as a DBA?
When people like Dr. David DeWitt or Rimma talk, they come with great, simple slides with lots of information on them. They also tend to use real-life analogies to explain things, which is really nice.
Here is an abstract of what Rimma talked about:
Why cloud computing?
Why cloud? (Also known as the 5th utility!)
Cloud is special because of its characteristics:
  • On-demand self-service.
  • Location-transparent resource pooling.
  • Ubiquitous network access.
  • Rapid elasticity.
  • Measured service (pay per use).
History of the cloud:
1960s (supercomputers/mainframes), 1999 (Salesforce), 2002 (AWS), 2006 (EC2), 2008–present (Azure, Google Apps).
Where the cloud lives: data centers (of course!), and the move to next-generation data centers.
Data center efficiency is measured in terms of PUE (Power Usage Effectiveness):
 IMG_3253
PUE = Total Facility Power / IT Equipment Power.
Traditional PUE 2.0 vs. modular PUE 1.15 (cooling 0%).
Microsoft data centers:

Over 100 data centers in more than 40 countries, and more than 1 million servers.

IMG_3254
What is a Cloud DB
 
Platform as a Service, Infrastructure as a Service, Software as a Service.
Pizza analogy of cloud:
IMG_3256
What kinds of cloud databases are out there: we have managed RDBMS, managed NoSQL, cloud-only DBaaS, Analytics as a Service, and object stores.
IMG_3257
How they are built
 
Why virtualization: it puts under-utilized resources to use, with the drawback that direct access to the resources is lost.
Multi-tenancy: 4 common approaches – private OS, private process/DB, private schema, shared schema.
IMG_3261
Inside Azure SQL DB:
The concept is that each account has one or more servers, each server has one or more databases, and each database has one or more objects.
IMG_3264
Everything designed with HA in mind.
Layers of abstraction: client layer, services layer, platform layer, infrastructure layer.
Network topology:
IMG_3265
My role as a DBA
 
Last but not least, Rimma talked about the role of the DBA in the cloud.
IMG_3267

Rimma wrapped up the Cloud 101 talk by requesting Dr. DeWitt to join her on stage, and there was a standing ovation from the crowd.

IMG_3269

 Conclusion: 

The #SQLFamily is expecting Rimma to be back next year to give a more in-depth talk on the cloud, and we are looking forward to it.

Thanks for reading, and keep watching this space for more!

Monitoring Memory Usage of Memory Optimized Objects – SQL Server 2014


Starting with SQL Server 2014, monitoring the memory usage of memory-optimized objects is super important to ensure that your instance doesn’t run out of memory and land in really bad situations.

One way of ensuring that memory-optimized objects won’t utilize more than a certain amount of memory is by setting up Resource Governor (a quick sketch below). This gives a great level of control when you have multiple databases running on the same instance and you don’t want memory-optimized tables eating up the whole memory.
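A minimal sketch of that setup; the pool and database names are hypothetical, and the binding takes effect only after the database is taken offline and brought back online:

--Create a pool that caps memory, then bind the In-Memory OLTP database to it
 CREATE RESOURCE POOL InMemOLTP_Pool WITH (MAX_MEMORY_PERCENT = 40)
 ALTER RESOURCE GOVERNOR RECONFIGURE
 GO
 EXEC sp_xtp_bind_db_resource_pool 'InMemOLTP_Demo', 'InMemOLTP_Pool'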

Is there an easy way to get a quick overview of the memory usage of memory-optimized tables in SQL Server 2014?

Yes, there is a really good SSMS report just for this purpose. This report utilizes DMVs under the hood and provides you with some valuable information.

InmemOLTP Reports

Let’s do a quick walkthrough of what this report gives us –

InmemOLTP Reports1

At any point in time, I will be most interested in the Table Used Memory counter.

Index Used Memory is based on the bucket count which you specify during nonclustered hash index creation.

The query which the report runs under the hood to give you the usage-related values is –

SELECT t.object_id, t.name, 
ISNULL((SELECT CONVERT(decimal(18,2),(TMS.memory_used_by_table_kb)/1024.00)), 0.00) 
 AS table_used_memory_in_mb,
ISNULL((SELECT CONVERT(decimal(18,2),(TMS.memory_allocated_for_table_kb - TMS.memory_used_by_table_kb)/1024.00)), 0.00) 
 AS table_unused_memory_in_mb,
ISNULL((SELECT CONVERT(decimal(18,2),(TMS.memory_used_by_indexes_kb)/1024.00)), 0.00) 
 AS index_used_memory_in_mb,
ISNULL((SELECT CONVERT(decimal(18,2),(TMS.memory_allocated_for_indexes_kb - TMS.memory_used_by_indexes_kb)/1024.00)), 0.00) 
 AS index_unused_memory_in_mb
FROM sys.tables t JOIN sys.dm_db_xtp_table_memory_stats TMS 
ON (t.object_id = TMS.object_id)

This query leverages the power of a new DMV, sys.dm_db_xtp_table_memory_stats.

Conclusion

SSMS reports are a great way to get a quick overview of what is happening, and I expect more and more powerful reports to be incorporated into SQL Server 2014 in the coming days.

Thanks for reading and keep watching this space for more.

Check out SQL Server 2014 CTP1 and the Windows Server 2012 R2 preview up in the cloud


You must be really busy spinning up VMs to test SQL Server 2014 CTP1 and the Windows Server 2012 R2 preview.

If your on-premises desktop computer/laptop is running low on resources and you don’t want to take on the pain of handling multiple VMs locally, then you have the option to test them out right up in the cloud.

Yes, you can always spin up a Windows Azure VM, do all your testing, and shut it down once you are done with your work to save some money.

Sounds like a deal, right? Yes, it’s pretty easy and fast; it takes less than 10 minutes to provision a Windows Azure VM.

Let’s look at the options we have in Azure for the test VMs –

1. SQL Server 2014 CTP1 Evaluation Edition on Windows Server 2012

This image will set up SQL Server 2014 CTP1 for you on Windows Server 2012. Once the VM is provisioned and in the running state, you can connect to it via RDP and see that SQL Server is ready to test.

VMs

2. Windows Server 2012 R2 Preview

This image will provision a VM for you with the Windows Server 2012 R2 preview. You can spin up multiple VMs to do your hardcore testing with Windows.

VMs1

Conclusion

This is indeed a great way to test the latest builds with minimal effort, and you always have the option to shut down the VMs after use to avoid a heavy bill.

Thanks for reading and keep watching this space for more.

Installing SQL Server 2014 CTP1 – Step by Step


Great news!!! SQL Server 2014 CTP1 is out and the wait is over. Drop everything (except the databases!) and download your fresh copy from here.

SQL2014Install16

Here is the official announcement with some high-level details.

In this post we will do a quick step-by-step walkthrough of the whole install process and see if there are any changes from prior installs.

* Note – The following point from the download page should be noted:

Microsoft SQL Server 2014 CTP1 is pre-release software and should be installed only on a clean machine. It does not support upgrade from or side-by-side installations with any previous version of SQL Server, including SQL Server 2012.

Below is the step by step install process –

[Please click on the images for a better resolution]

Step 1 –  

We will run setup.exe to get the regular install screen.

SQL2014Install1

Step 2 – 

We will choose the stand-alone install option, and the wizard will do the initial checks.

SQL2014Install2

Step 3 – 

We will specify the product key and accept the license terms.

SQL2014Install3

SQL2014Install4

Step 4 – 

The wizard will now do some real checks and provide you with a list of checks that have either passed or failed.

SQL2014Install5

Step 5 –

Once the check phase is completed, we will proceed with the feature selection.

SQL2014Install6

SQL2014Install7

The wizard will do a couple more checks as we proceed.

SQL2014Install

Step 6 – 

In this step we will do the instance configuration, server configuration, and database engine configuration.

SQL2014Install8

SQL2014Install9

SQL2014Install10

SQL2014Install11

SQL2014Install12

Step 7  – 

The wizard will do one final round of checks before doing the install.

SQL2014Install13

And finally we are ready to do the install.

SQL2014Install14

It’s always a pleasure to get the below set of messages once the wizard completes its work.

SQL2014Install15
That’s it, and SQL Server 2014 CTP1 is all yours.
/* Microsoft SQL Server 2014 (CTP1) – 11.0.9120.5 (X64)
Jun 10 2013 20:09:10 
Copyright (c) Microsoft Corporation
Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor) */

Conclusion

The SQL Server 2014 installation wizard doesn’t have any major changes, and everything looks just the same as in the earlier version.

Thanks for reading and keep watching this space for some real fun with SQL2014.