Database Backup Encryption – SQL Server 2014 CTP2


SQL Server 2014 CTP2 (all new, shiny shiny!) availability was announced yesterday at the #SQLPASS Summit.


Native backup encryption is one of the key features announced with this release, and I’m sure this feature will be widely used.

Native backup encryption encrypts the data while the backup is being created, so you end up with an encrypted backup file. This is one of those capabilities which was previously available only through 3rd-party tools.

Now backup encryption is out of the box for SQL Server 2014!

So, what are the pre-requisites to get started with this feature –

1. You will need either the Standard, Enterprise or BI edition of SQL Server 2014.

2. You will need to have a certificate or an asymmetric key.

3. You will need to choose the required encryption algorithm.

Now that we know the pre-requisites, let’s try to create an encrypted backup and follow the process one by one.

First we will create a master key –

-- Creates a database master key. 
-- The key is encrypted using the password "Pa55word"
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa55word';
GO

Followed by creating a certificate –

--Creates a certificate.
USE master
GO
CREATE CERTIFICATE BackupCertificate
 WITH SUBJECT = 'Backup Encryption Certificate';
GO

Once all the key/certificate requirements are taken care of, we will proceed to create an encrypted backup file by specifying the certificate and an encryption algorithm. The syntax is pretty simple –

--Create an encrypted backup file.
BACKUP DATABASE RockStar TO DISK =
'C:\SQL2014CTP2Backup\RockStar_Encrypted.bak'
WITH COMPRESSION,
 ENCRYPTION 
 (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCertificate);

That’s it! Now we have an encrypted backup file, and the result after running the above statement is –

Warning: The certificate used for encrypting the database encryption key has not been backed up. You should immediately back up the certificate and the private key associated with the certificate. If the certificate ever becomes unavailable or if you must restore or attach the database on another server, you must have backups of both the certificate and the private key or you will not be able to open the database.
Processed 296 pages for database ‘RockStar’, file ‘RockStar’ on file 1.
Processed 2 pages for database ‘RockStar’, file ‘RockStar_log’ on file 1.
BACKUP DATABASE successfully processed 298 pages in 0.045 seconds (51.573 MB/sec).

 

You might have noticed the warning; it tells you that the certificate is not backed up. It’s always recommended to back up the certificate as soon as you create it. [Be safe always!]
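Here’s a hedged sketch of doing exactly that – the folder and the private-key password below are illustrative, not from the demo –

--Back up the certificate and its private key (path/password are placeholders)
USE master;
GO
BACKUP CERTIFICATE BackupCertificate
TO FILE = 'C:\SQL2014CTP2Backup\BackupCertificate.cer'
WITH PRIVATE KEY (
 FILE = 'C:\SQL2014CTP2Backup\BackupCertificate.pvk',
 ENCRYPTION BY PASSWORD = 'Pa55word');
GO

Keep those files somewhere other than the backup drive itself; losing the certificate means losing the ability to restore the encrypted backups.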

Also, the algorithm which I mentioned in the above query, i.e. AES_256, is one among the 4 options available. The other 3 are –

AES 128, AES 192, and Triple DES
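If you later need to check which algorithm a particular backup used, msdb records it. A quick query – assuming the key_algorithm and encryptor_type columns, which were added to backupset in SQL Server 2014 –

--Check the encryption details recorded for recent backups
SELECT TOP 5 database_name, backup_finish_date, key_algorithm, encryptor_type
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;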

That’s a simple and straightforward way of creating an encrypted backup file using T-SQL.

The GUI also offers this ability, and the encryption options are available under the backup options.

[Screenshot: Backup encryption options in the SSMS backup dialog]

Conclusion  – 

Native backup encryption is certainly a good-to-have feature, and I’m looking forward to testing it really well during the next few days.

Thanks for reading and keep watching this space for more.

Backup your Windows Azure SQL Databases – Yes, you should!


Care about RPOs and RTOs? Then you should be backing up your Windows Azure SQL Databases (formerly SQL Azure).


Windows Azure SQL Database is highly available and the data is redundant (three copies of your database are stored elsewhere); however, that doesn’t help you recover from the below situation –

“Hey DBA, I deleted a few records from the database accidentally!!! Can you please recover the database for me?”

You definitely need a backup of the database to recover from this situation.

One of the assumptions I normally hear while talking about Windows Azure SQL Database is that you don’t need to back up your databases because Microsoft takes care of it under the hood! This is wrong, and you should back them up in case you need to tackle situations like the one mentioned above.

You can either do a manual export of your database to the storage account or you can schedule the exports (new update, scroll down for details). This exported copy can be used to do the restores (imports).

The import options are really limited. You cannot do operations like overwriting (replacing) a database, etc. I’m really confident that Azure will reach that point pretty soon.

In this post we will see how the manual export process works and will also see how we can import an exported database back.

When doing this manually it’s always a good idea to get transactionally consistent backup copies. For this purpose we will need to copy the database to the same server or to a different server. In this post we will do a copy to the same server.

So, we have a database called WASDRocks with a table named ChildTable. The table has 2 records, as shown below –

[Screenshot: ChildTable with its 2 records]

We will now do a database copy to the same server using the command –

CREATE DATABASE WASDROCKS_Copy AS COPY OF [WASDROCKS]

There you go, we have the new database ready now which is a transactionally consistent copy.
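For larger databases the copy runs asynchronously and can take a while; a sketch of how you might watch its progress, run against the master database (sys.dm_database_copies is the DMV Azure exposes for this) –

--Monitor the status of an ongoing database copy
SELECT * FROM sys.dm_database_copies

--The copy shows up in sys.databases with state_desc = 'COPYING' until it completes
SELECT name, state_desc FROM sys.databases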

[Screenshot: WASDROCKS_Copy created on the same server]

We will now export WASDROCKS_Copy and keep it safe in our storage account. The Export option is available right below the database selection.

[Screenshot: Export option in the Windows Azure portal]

The storage account needs to be selected along with the container details, and once the credentials are entered correctly (real-time check of passwords!) the .BACPAC will be available.

[Screenshot: Export database settings – storage account, container and credentials]

[Screenshot: Exported .BACPAC file in the storage container]

Great, so now we have a transactionally consistent database backup. We can drop the copy database to avoid additional costs (yes, it’s billed).
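The drop itself is a one-liner –

DROP DATABASE WASDROCKS_Copy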

Now, let’s do some deletes!!! We will delete a record from ChildTable –

[Screenshot: Record deleted from ChildTable]

We can recover this data using the backup which we took earlier. All we need to do is an Import –

[Screenshot: Import option in the Windows Azure portal]

Note – In a real-world situation be very careful about your RPO values. You might have to increase or decrease the number of exported copies to achieve your SLA. More exported copies mean more cost overhead for the storage.

If you try to overwrite the database by giving the same database name, i.e. WASDROCKS in our case, then there will be an error.

[Screenshot: Error while importing over the existing WASDROCKS database]

This clearly demonstrates the import limitation we talked about earlier: you cannot overwrite an existing database.

We will import the backup copy as a different database named WASDROCKS_Recovery.

[Screenshot: Importing the backup as WASDROCKS_Recovery]

There you go, the recovered database is ready for use now.

[Screenshot: WASDROCKS_Recovery database ready for use]

If we connect to the recovered database and check for the table, then we can find the details of the deleted row.

[Screenshot: The deleted row visible in the recovered database]

Yes, this is not the most flexible way to do point-in-time restores, but it works just fine. What really matters is your RPO/RTO numbers, and you can plan your exports according to that need.

Is there a way to schedule exports, so that manual intervention is limited?

Yes, and this is what I really love about the Windows Azure team. They are really aggressive and they keep adding features at a great pace.

Automated Database Exports was released last month; please refer to this blog post by Scott Guthrie for more details.

Keep backing up your databases and do random restores to ensure that they are good.

Thanks for reading and keep watching this space for more!

Page restore – Smart utility within SSMS !


Page level restore is not something new for us; however, page restore via SSMS (GUI) is something new, and it’s a feature available in SQL Server 2012.

Page level restore is a great, flexible option to restore particular pages from a backup when corruption has been identified on one or more pages.

Would you really like to restore a 50GB backup set when the requirement is to replace a single corrupted page? No, you won’t, and this is where page restore comes in.

You can find your corrupted pages and replace them with a restore.

Until SQL Server 2012, page level restore was possible only via T-SQL code; SQL Server 2012 provides this option directly from the UI.

Let’s do a quick demo to understand this feature better.

Demo

1. For the purpose of this demo I will be using a database called Test_Page_Restore, which is an exact replica of the AdventureWorks2012 database (available on CodePlex).

2. I am interested in corrupting an index page of the table HumanResources.Employee.

Let’s find an Index page for this table.

DBCC IND('Test_Page_Restore', 'HumanResources.Employee', 1)

Wait!!! Before I do some serious corruption, let me take a quick backup of the database to disk.

I will be taking a Full Backup directly.

BACKUP DATABASE [Test_Page_Restore] TO DISK= 
'C:\Test_Backups\Test_Page_Restore_BEFCorruption.bak' WITH STATS = 10

3. As I have a backup now, I can afford to do some really interesting corruption!

Note – A backup set doesn’t mean that you are good and have a sound disaster recovery plan. A backup is valid if and only if it can be restored. As I am doing a demo in a controlled environment, I am not worried about backup validity.

I picked page ID 985 and will be corrupting it. I will also need to find the offset of the page to corrupt it, and in my case it will be 985 * 8192 (number of bytes per page) = 8069120.

I am using a hex editor called XVI32 to corrupt the pages, and I learned this technique from Kendra Little (B/T).

Note – You should never ever try these steps in a production environment. They should be done only in a controlled LAB environment.

Before opening the data file in the hex editor I will need to take the database offline.

ALTER DATABASE [Test_Page_Restore] SET OFFLINE

Once the database is offline I can open the data file using the hex editor. Using the above-mentioned offset, I can go to the exact page and corrupt it.


I went to the exact page and corrupted it by entering some letters. Once done, I saved the file directly from the hex editor and closed it.

4. Technically speaking now I have wonderfully corrupted page ID 985.

I can proceed and bring the database online now.

ALTER DATABASE [Test_Page_Restore] SET ONLINE

5. I am pretty sure that if I do a SELECT * from the table, I will get I/O errors.

Let’s test that out now.

SELECT TOP 1000 * 
FROM   [Test_Page_Restore].[HumanResources].[employee]

Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: 
incorrect checksum (expected: 0x77d4e0e1; actual: 0x5cd4e0e1). 
It occurred during a read of page (1:985) in database ID 5 at offset 0x000000007b2000 
in file 'C:\Program Files\Microsoft SQL Server\
MSSQL11.SQL2012A\MSSQL\DATA\Test_Page_Restore.mdf'. 

The error message clearly says that I have a corrupted page, and it matches the one which I corrupted using the hex editor.

Yay!!!

6. As mentioned earlier, starting with SQL Server 2012 we can do a corrupt page restore directly from SSMS. For this I will need to right-click the database, choose the Restore option and then Page.

7. As soon as I select this option I will get the new restore window.

If you look carefully at the Pages section, it has already run DBCC CHECKDB and pulled the corrupted page from the database.

That’s one real cool option to have.

It exactly matched the page which I corrupted earlier.
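Under the hood, pages that fail a checksum during I/O get logged to msdb..suspect_pages, and that table drives the wizard’s page list; you can query it yourself –

--Pages flagged as suspect, with the type of error and how often it was hit
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb..suspect_pages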

The wizard will also let you know from which backup set you can restore the corrupted page back. In my case, I had already taken a backup earlier and it picked it up automatically.

8. I did a restore of the page using the wizard to bring back the corrupted page.
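For reference, the wizard scripts out T-SQL along these lines. A sketch using the page and backup from this demo (the tail-log file name is my assumption) –

--Restore just the corrupted page from the full backup
RESTORE DATABASE [Test_Page_Restore]
PAGE = '1:985'
FROM DISK = 'C:\Test_Backups\Test_Page_Restore_BEFCorruption.bak'
WITH NORECOVERY

--Back up the tail of the log and apply it to bring the page current
BACKUP LOG [Test_Page_Restore]
TO DISK = 'C:\Test_Backups\Test_Page_Restore_Tail.trn'

RESTORE LOG [Test_Page_Restore]
FROM DISK = 'C:\Test_Backups\Test_Page_Restore_Tail.trn'
WITH RECOVERY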

9. Once the restore completed, I checked the data to see if everything was fine.

SELECT TOP 1000 * 
FROM   [Test_Page_Restore].[HumanResources].[employee] 

Great, we are back in action yet again.

I would like to thank Paul Randal (B/T) and Kendra Little (B/T) here, as I learned to work with pages/corruption/repairs by attending their sessions and reading their articles. They are just awesome.

Conclusion

SQL Server 2012 has multiple enhancements, and GUI support for page restores is just one of them.

Thanks for reading.

Partial database availability – A walk through


Partial database availability is an exciting feature, and I decided to write this blog post after observing many doubts related to this feature in the forums.

Let’s assume a situation like the one mentioned below –

We have a database with multiple filegroups, and the data files reside separately in their respective filegroups. Now assume a situation where we have a severe disk failure and the drive holding one of the .ndf files is corrupted!

This will make the database inaccessible. We have multiple options to recover from this situation, and one of them is to restore the database using backup sets. Think of a situation where our database is super large and a restore will take around 30–45 minutes.

Do we really want our database users to wait until we complete the restore? What if we bring a portion of the database online while we work on the recovery part, and bring everything else online slowly?

Wow!!! (Business will just love these ideas as soon as I tell them.) However, this solution is not so simple; it requires a lot of planning and testing, and the application should be able to work without a portion of the data.

Let’s do a demo of this situation and understand how we can achieve partial database availability –

1. We will create a demo database

--Created a Database
CREATE DATABASE TEST_FILEGROUP

2. Create a new file group

--Create a new FileGroup
ALTER DATABASE TEST_FILEGROUP
ADD FILEGROUP ADDITIONAL

3. Add one additional data file to the database

--Add an additional data file to the database
ALTER DATABASE TEST_FILEGROUP
ADD FILE (NAME='NEW_DATA_FILE',
FILENAME='D:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2RD\MSSQL\DATA\NEW_DATA_FILE.ndf')
TO FILEGROUP ADDITIONAL

4. Validate them

sp_helpdb TEST_FILEGROUP

Name
------------------
TEST_FILEGROUP
TEST_FILEGROUP_log
NEW_DATA_FILE

5. Create a table [Employee] on primary file group and insert some data

--Create a table on the primary filegroup and insert some data rows
USE [TEST_FILEGROUP]
CREATE TABLE Employee(ID INT IDENTITY(1000,1), Name VARCHAR(20))

INSERT INTO Employee (Name)
SELECT 'John'
UNION ALL
SELECT 'Tim'
UNION ALL
SELECT 'Tracy'
UNION ALL
SELECT 'Jim'
UNION ALL
SELECT 'Ancy'

6. Create another table [HRRECORDS] on the additional filegroup and insert some data (the insert statement is reconstructed just below).

--Create another table on the additional file group and Insert some data
 USE [TEST_FILEGROUP]
 CREATE TABLE HRRECORDS(ID Int Identity(1000,1),Description Varchar(20))
 ON ADDITIONAL
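The original post doesn’t show the insert itself; reconstructed from the rows we query at the end of this post, it would look like –

USE [TEST_FILEGROUP]
INSERT INTO HRRECORDS (Description)
SELECT 'IT Spec'
UNION ALL
SELECT 'DBA'
UNION ALL
SELECT 'Developer'
UNION ALL
SELECT 'Java Guy'
UNION ALL
SELECT '.NetSpec'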

7. Now we will proceed to take a filegroup backup for the purpose of this demo –

--Take a backup of the ADDITIONAL filegroup
BACKUP DATABASE TEST_FILEGROUP
FILEGROUP='ADDITIONAL'
TO DISK='C:\TestBackup\Additional_FileGroup.bak'

Processed 16 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
Processed 5 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
BACKUP DATABASE...FILE=<name> successfully processed 21 pages in 0.318 seconds (0.495 MB/sec).

Now comes the really interesting part of this demo. We are going to simulate an error situation –

We are going to stop the SQL Server engine service and delete the additional data file (the .ndf file). Once the service is stopped we will be able to delete the .ndf file.

Note – This is just for demo purposes and should not be simulated in a real production environment. [A word of caution before the CTO/Manager gives you surprises!]

Once the .ndf file is deleted, start the engine, and you will observe an error straight away if you try to access our demo database.

This was expected and simply means that deleting the .ndf file caused the database to fail. What are we going to do now to bring this database up and running?

Definitely we can restore the backup to bring this up; however, just think about this situation: your database backups are huge, as we might be dealing with a huge database, and users have to wait until the whole backup set is restored.

Do we have an RTO of around 45 mins – 1 hr? Do we really need to wait for the whole restore to complete to fix an issue with another filegroup before users can connect to the database and access tables residing in the primary filegroup?

The short and sweet answer to this question is – NO. Starting with SQL Server 2005, the database can be made available to users as soon as the primary filegroup is up and running.

Now let’s go back to our situation where the database is offline because of a corrupted/missing .ndf file.

The users compromised (this should actually be part of the DR strategy and should not be decided at the last minute) and agreed that they can work without the HRRECORDS table, the one residing in the additional filegroup which just failed. The users just need the Employee table to continue their work.

Wow!!! Now we can breathe some fresh air.

8. We can achieve this by taking the additional data file offline.

Note – If we take this file offline, we can bring it back only using a file/filegroup backup or a regular full database backup.

--Taking the additional file offline
ALTER DATABASE TEST_FILEGROUP
MODIFY FILE (NAME='NEW_DATA_FILE', OFFLINE);

We will need to recycle the service once again for the changes to take effect, and we can verify the change by checking the sys.database_files catalog view –

SELECT name, state_desc FROM sys.database_files

Name                         state_desc
TEST_FILEGROUP               ONLINE
TEST_FILEGROUP_log           ONLINE
NEW_DATA_FILE                OFFLINE

9. Now that the file is offline, the database is accessible.

Our query against the Employee table returns the rows –

SELECT TOP 100 [ID]
 ,[Name]
 FROM [TEST_FILEGROUP].[dbo].[Employee]

ID    Name
1000  John
1001  Tim
1002  Tracy
1003  Jim
1004  Ancy

If we attempt to query the table on the additional data file, we get an error –

SELECT TOP 100 [ID]
 ,[Description]
 FROM [TEST_FILEGROUP].[dbo].[HRRECORDS]

Msg 8653, Level 16, State 1, Line 1
The query processor is unable to produce a plan for the table or view ‘HRRECORDS’ because the table resides in a filegroup which is not online.

This is partial database availability, where the database is online without some of its tables, and we have achieved this using filegroups/files.

Now, how can we bring this .ndf file back from the backup? Here is the process –

1. We will restore the filegroup from the backup which we took earlier.

RESTORE DATABASE TEST_FILEGROUP
FILEGROUP = 'ADDITIONAL' FROM DISK = 'C:\TestBackup\Additional_FileGroup.bak' WITH RECOVERY

Oops, we are missing something here –

/*Msg 3159, Level 16, State 1, Line 1
The tail of the log for the database “TEST_FILEGROUP” has not been backed up. Use BACKUP LOG WITH NORECOVERY to backup the log if it contains work you do not want to lose. Use the WITH REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of the log.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally. */

What does the message say? It says that the tail of the log has not been backed up, and that needs to be done before we restore the filegroup.

2. So let’s go ahead and back up the tail of the log.

BACKUP LOG TEST_FILEGROUP TO DISK='C:\TestBackup\Tail.trn' WITH NORECOVERY
/*Processed 10 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
BACKUP LOG successfully processed 10 pages in 0.223 seconds (0.345 MB/sec).*/

3. Let’s try to restore the filegroup now.

RESTORE DATABASE TEST_FILEGROUP
FILEGROUP = 'ADDITIONAL' FROM DISK = 'C:\TestBackup\Additional_FileGroup.bak' WITH RECOVERY

I specifically used RECOVERY here in the restore command to show the message and demonstrate why the tail-log backup is needed.

As soon as we run the above command, we get another message –

Processed 16 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
Processed 5 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
The roll forward start point is now at log sequence number (LSN) 21000000019100001. 
Additional roll forward past LSN 21000000024300001 is required to complete the restore sequence.
This RESTORE statement successfully performed some actions, 
but the database could not be brought online because one or more RESTORE steps are needed. 
Previous messages indicate reasons why recovery cannot occur at this point.
RESTORE DATABASE ... FILE=<name> successfully processed 21 pages in 0.214 seconds (0.736 MB/sec).

4. Finally, we will bring the database up by restoring the tail-log backup.

RESTORE LOG TEST_FILEGROUP
FROM DISK = 'C:\TestBackup\Tail.trn' WITH RECOVERY

Processed 0 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP' on file 1.
Processed 0 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
Processed 7 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
RESTORE LOG successfully processed 7 pages in 0.117 seconds (0.463 MB/sec).

Now our database is completely available with both tables, and as a test case we can query the HRRECORDS table to validate the data –

SELECT TOP 100 [ID]
 ,[Description]
 FROM [TEST_FILEGROUP].[dbo].[HRRECORDS]

ID    Description
1000  IT Spec
1001  DBA
1002  Developer
1003  Java Guy
1004  .NetSpec

Conclusion – Partial database availability is very useful for huge databases, where secondary filegroups store historical data and the primary filegroup is critical for the business.

Backup and restore of files/filegroups is a very interesting topic, and I will simulate this feature in SQL 2012 to see if there are any changes and come up with more details.

I would love to hear about your experience dealing with filegroups. Thanks for reading.



BACKUP LOG WITH TRUNCATE_ONLY!!! Thank heaven it got discontinued from SQL Server 2008 and above


Today I witnessed yet another user dealing with BACKUP LOG [database] WITH TRUNCATE_ONLY.

This is one of the features which got discontinued from SQL Server 2008 onwards, and it was one of the best decisions Microsoft made after SQL community experts gave valid reasons for discontinuing it.

I decided to write this blog post to explain why this feature is evil, and to urge SQL Server 2000 (yes, you heard it right! there are many companies which still use it) and SQL Server 2005 users not to use it.

When you do a TRUNCATE_ONLY it simply means that the log backup chain is broken and you are losing your ability to do a point-in-time restore.

This is a worst-case situation if you are hit with a disaster.

Let’s do a quick demo to demonstrate this –

[Using SQL Server 2005 with SP4 for the demo]

1. I have created a test database named Test_Truncate

CREATE DATABASE [Test_Truncate]

2. Created a table and inserted one record

USE [Test_Truncate]
CREATE TABLE dbo.Test_Truncate (ID INT IDENTITY (100,1), Description VARCHAR (50))
USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Mercedes')

3. Now we will proceed and take a full backup of this database

BACKUP DATABASE Test_Truncate TO DISK='D:\Test_Truncate_1st_Fullbackup.bak'

4. Insert one more record

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('BMW')

5. Now we will proceed and take a transaction log backup

BACKUP LOG Test_Truncate
TO DISK='D:\Test_Truncate_1st_Tranlog.trn'

6. Insert 3 more records

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Mini Cooper')
GO
INSERT INTO dbo.Test_Truncate VALUES('Ford')
GO
INSERT INTO dbo.Test_Truncate VALUES('GMC')

7. Now we will do a BACKUP LOG WITH TRUNCATE_ONLY
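The statement itself, omitted above, is simply –

BACKUP LOG Test_Truncate WITH TRUNCATE_ONLY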

Command(s) completed successfully.

8. Insert 3 more records

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Impala')
GO
INSERT INTO dbo.Test_Truncate VALUES('Cruz')
GO
INSERT INTO dbo.Test_Truncate VALUES('Nissan')

9. Now we try to take a transaction log backup again

BACKUP LOG Test_Truncate
TO DISK='D:\Test_Truncate_2nd_Tranlog.trn'

As soon as we execute this transaction log backup command, we get the error –

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

This means that we truncated the transaction log prior to this backup and broke the log chain.

If you don’t take a full backup immediately after the error and continue with normal log backups, then you don’t have a recovery chain to recover the database in the event of a failure.

I have observed a different behavior in SQL Server 2000 (yes! again I said 2000), where transaction log backups are created after TRUNCATE_ONLY; however, they are not valid for a restore, and you will end up getting an error message like this if you try to restore them after a full backup restore.

Server: Msg 4330, Level 16, State 4, Line 1
The log in this backup set cannot be applied because it is on a recovery path inconsistent with the database.
Server: Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.

SQL Server 2005 is much better in that it doesn’t allow any transaction log backups right after TRUNCATE_ONLY.

Conclusion –

You have two options to deal with log-filling issues –

1. Take regular transaction log backups if you are running in the FULL recovery model.

2. Switch your databases to the SIMPLE recovery model if you don’t need point-in-time recovery.
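For option 2, the switch is a one-liner (using our demo database as the example) –

ALTER DATABASE [Test_Truncate] SET RECOVERY SIMPLE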

Thanks for reading.

Restoring a database with 3 different modes:


Standby mode will leave the DB in restoring mode; writes are not allowed, but users can connect and issue SELECT statements.

RESTORE DATABASE adventureworksLT
FROM DISK='c:\backup\adventureworksLTfull.bak' WITH REPLACE,
STANDBY='c:\backup\adventureworksLTful.stn'

WITH RECOVERY will ensure that no more restores will happen and the DB will be online.

RESTORE DATABASE adventureworksLT
FROM DISK='c:\backup\adventureworksLTfull.bak' WITH RECOVERY

WITH NORECOVERY will ensure more restores can happen, like restoring differentials or transaction logs.

RESTORE DATABASE adventureworksLT
FROM DISK='c:\backup\adventureworksLTfull.bak' WITH NORECOVERY
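A typical restore sequence chains these modes together. A minimal sketch, assuming a log backup named adventureworksLTlog.trn exists alongside the full backup –

--Restore the full backup first, keeping the database in the restoring state
RESTORE DATABASE adventureworksLT
FROM DISK='c:\backup\adventureworksLTfull.bak' WITH NORECOVERY

--Apply the latest log backup and bring the database online
RESTORE LOG adventureworksLT
FROM DISK='c:\backup\adventureworksLTlog.trn' WITH RECOVERY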

Error while restoring a database: ‘Exclusive access could not be obtained because the database is in use’


Solution –
When we restore a database, SQL Server needs exclusive access to it.

The error means that some process is using this database; we can kill the processes which are using it.

Otherwise, use these commands –

USE master;
--Kick out existing connections and take exclusive access of the database
ALTER DATABASE <> SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE <> FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\YOURDB.bak'

--Return the database to multi-user mode once the restore completes
ALTER DATABASE <> SET MULTI_USER;