Central Management Servers to Evaluate Policies – “All about being in control”


Policy Based Management (PBM) was first introduced in SQL Server 2008. This feature helps Database Administrators create policies and ensure that their environment is in compliance with those policies.

Let's take the simple example of recovery model. Using PBM, a DBA has the ability to create a policy that checks the recovery model, so that any database created with a recovery model other than Full can be detected on evaluation, or the setting can even be enforced.
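As a quick illustration (this is just a manual check, not the PBM policy itself), the same recovery model condition can be verified with a simple query against sys.databases:

--Manual check (illustration only): databases whose recovery model is not FULL
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc <> 'FULL';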

What if a DBA is running an environment with 100 production servers and wants to evaluate a policy across all of them?

The short answer is that we can use Central Management Servers to import policies and evaluate them.

Demo

1. Using SSMS, I created a policy called ‘Check_Autoclose_Policy_Enterprise’ which uses the condition ‘Check_AutoClose_Condition_Enterprise’.

You can refer to the BOL topic [here] to understand how to create a policy and a condition.

This simply means that I have set a condition with an expected value of ‘False’ for the AutoClose property, and a policy is created on top of that condition.

If I evaluate that policy against my servers, it will check the condition for each and every database and report any deviations. If AutoClose = True, I am notified.
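For a quick sanity check outside PBM (purely illustrative, not part of the policy), the same AutoClose condition can be verified directly against sys.databases:

--Manual check (illustration only): databases with AutoClose turned on
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;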

2. Now I will proceed and export this policy. The export action creates an XML file, which I will store on the local drive.

3. I will now proceed to use Central Management Servers (available under Registered Servers). For the purpose of this demo I have already added 2 instances of SQL 2012 RC0 to Central Management Servers, as shown below.

 

4. Using the root-level Central Management Server, we can import the policy which I had created.

5. Once the import action is completed, the policy is available on both instances.

6. For the purpose of the demo I had created similar databases on both Instance B and Instance C. The name of the database is Test_PBM.

For Instance B – Autoclose is set as False

For Instance C – Autoclose is set as True

7. Using the root-level Central Management Server, we can now evaluate the policy.

8. Once evaluation is completed, we are provided with a clear explanation of how the evaluation went and which databases were not in compliance with our policy.

The policy was created with the condition that the Auto Close property be False for all databases, and after evaluation we found one deviation: our Test_PBM database residing on Instance C.

 

Conclusion – The power of PBM and Central Management Servers can be combined for great control.

Have you deployed anything like this in your environment? I am interested to know your scenarios.

Thanks for reading.

Installed SQL Server features discovery report


Have you ever faced a situation wherein you had to double-check which features you deployed for your SQL Server environment? Oh! Did I select the Full-Text Search feature during installation?

There are a number of ways you can double-check the features which are available:

1. Check services running under SQL Server Configuration Manager

2. Check the install folders for logs (for example, C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log), which give details about the features.
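For completeness, a quick T-SQL check is shown below; it is illustrative only and covers far fewer details than the discovery report, but SERVERPROPERTY can answer simple questions like the Full-Text one above:

--Illustration only: was Full-Text search installed?
SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled;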

Think about getting a report of all this! Wow, that's a cool thing, isn't it? This is exactly what the ‘Installed SQL Server features discovery report’ wizard does for you.

You need to run SQL Server setup to launch this wizard, and it is available under the Tools section of the setup page which we have seen many times -

We just need to run this option and it will provide us with a very structured report covering all the features which were installed on the server.

I was amazed to learn that a couple of features (for example, LocalDB) were present on my machine.

The contents of the report are actually pulled from the summary.txt file which is created after the install is completed (C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\summary.txt).

So, if you just need the data, it is easier to look at the file rather than running setup.exe. However, the report looks neat and structured.

Thanks for reading.

Running Performance Dashboard Reports for SQL 2008 R2 and SQL 2012 RC 0


Today I decided to leverage the Performance Dashboard reports for my SQL Server 2008 R2 and SQL 2012 RC0 instances.

SQL 2008 R2 RTM

I downloaded the MSI file and ran setup.sql against the SQL 2008 R2 RTM instance. Straight away I was hit by this error:

Msg 207, Level 16, State 1, Procedure usp_Main_GetCPUHistory, Line 6
Invalid column name ‘cpu_ticks_in_ms’.
Msg 15151, Level 16, State 1, Line 1
Cannot find the object ‘usp_Main_GetCPUHistory’, because it does not exist or you do not have permission.

The error occurs because there is no column ‘cpu_ticks_in_ms’ in the DMV sys.dm_os_sys_info, hence I changed the column name in setup.sql to ms_ticks.
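To illustrate the kind of change involved (this is not the exact setup.sql code, just a minimal sketch), the corrected reference reads from ms_ticks instead:

--Minimal sketch: sys.dm_os_sys_info exposes ms_ticks, not cpu_ticks_in_ms
SELECT ms_ticks
FROM sys.dm_os_sys_info;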

Rob Carrol has a blog post which talks about the same issue, and you can read it here.

Once the column was changed, the script ran without any trouble, and the report worked as expected.

SQL 2012 RC 0

The same testing was done for SQL 2012 RC0 and I was amazed by the results. The RDL file worked perfectly for SQL 2012 too, after applying the same change to setup.sql as mentioned above.

Here is a quick preview of the report which came straight out of the SQL 2012 instance.

The Performance Dashboard tool gives you a good amount of information, and you can always build your own customized reports on top of it for better control.

I would like to hear from you if you have built any such solutions for your environment.

Thanks for reading.

Deleted the TUF file!!! Boy, that’s trouble


Just 2 days back I wrote a post on TUF files related to log shipping. You can read the post here.

Today we will see what happens if someone deletes the TUF file accidentally, or if it somehow goes missing.

I tried to simulate this on my test machine which had log shipping configured. Below are the steps which I followed -

1. Deleted the TUF file which was available in the secondary server.

2. The delete operation was successful.

3. Checked the log shipping status and found that the health is ‘Good’.

4. Both primary and secondary databases were synced and had the same set of data, row by row, column by column.

Note - Ideally, deleting the TUF file should also cause issues for the log shipping restores on the secondary; however, my simulation did not encounter that behavior.
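As a side note, the restore health I checked in step 3 can also be confirmed with a quick query against the log shipping monitoring tables in msdb (a sketch, assuming a default log shipping setup):

--Sketch: check what was last restored on the secondary
SELECT secondary_database, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;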

All looks good, and you might be thinking that deleting a TUF file is harmless and not going to hurt much!!!

Now, let's assume that we lost our primary database server due to a memory burn (short circuit) and we are in need of the secondary database.

The RTO and RPO matrix is quite comfortable and we are allowed 30 minutes to bring the secondary database up. A walk in the park, right? The users/jobs/other objects are already taken care of; just the database needs to be brought up.

Let's write this simple one-line T-SQL to bring our database up.

RESTORE DATABASE [XenDevDS] WITH RECOVERY

XenDevDS is my test database on the secondary server; its primary copy was residing on the server which just went for a trip (memory burn!).

As soon as we execute this command with a big smile assuming that the database will be up, we will get this message -

Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
Msg 3441, Level 17, State 1, Line 1
During startup of warm standby database ‘XenDevDS’ (database ID 7), its standby file (‘C:\Program Files\Microsoft SQL Server\MSSQL11.SERVER2012B\MSSQL\DATA\XenDevDS_20120112191505.tuf’) was inaccessible to the RESTORE statement. The operating system error was ’2(The system cannot find the file specified.)’. Diagnose the operating system error, correct the problem, and retry startup.

What does it mean? It simply means: you have done a great job of deleting the TUF file, and now please bring it back.

The TUF file is required for the standby database to recover, and we will not be able to bring the database up without it.

As the simulation was in a very controlled environment, I brought back the TUF file and ran the restore command once again.

RESTORE DATABASE [XenDevDS] WITH RECOVERY

RESTORE DATABASE successfully processed 0 pages in 0.908 seconds (0.000 MB/sec).

The database was recovered and was accepting new connections.

Conclusion – The TUF file is a very important part of recovering a standby database, and we have to educate the server ops team, or anyone responsible for cleaning up files, to make sure it is left untouched.

Do you know of any way to recover a standby database on a log shipping secondary without the TUF file? If yes, then please share your experience in the comments section of this post.

Thanks for reading.

SQL Beautifier


Today I came across some tools that do a smart job of formatting your SQL code. Who doesn't like code that is well formatted and easy to read?

I remember my college days when I used to write many lines of code for my final semester project and manually format the code for printing and ease of understanding.

I would have scored better grades if I had found a tool like this back then!!! There are many tools out there; however, the one which I liked best is -

http://www.dpriver.com/pp/sqlformat.htm?ref=wangz.sqlformat.htm

You can get beautiful, well-formatted code like this:

BEGIN TRAN

DECLARE @endDate DATETIME

SET @endDate = Dateadd(hh, 1, Getdate()) -- 1 hour from now

WHILE Getdate() < @endDate
BEGIN
INSERT INTO [Test]
VALUES      (1)

WAITFOR delay '00:00:02';
END

Hope you all liked this tool and happy coding!

<Update added 12/10/2012>

Today I found another tool which is pretty impressive: http://sqlformat.appspot.com/

</Update>

TUF File – Not a very famous member, but it does its job pretty well!


I have seen various questions related to TUF files, and one of the discussions was interesting; it went something like this -

<Start>

John  - I don’t understand why we need this TUF file in SQL Server, what does it do? I have been looking around for more information, but seems there is no great information around the same.

Kim – Are you talking about .TRN files?

John – No, I am talking about .TUF files. Trust me it’s there!

Kim – Oh, then I am missing something. Let me check that out.

</End of discussion>

So what is this TUF file all about?

I was also not very sure what a TUF file does; however, after some research I was able to understand the concept and decided to write this post.

A TUF file, or Transaction Undo File, is created when log shipping to a server in standby mode. This file holds information about the transactions that were still uncommitted at the time the log backup was taken.

This file is important in the standby mode of log shipping, where you can access the secondary database. In standby mode, database recovery is performed each time a log is restored.

While restoring the log backup, uncommitted transactions are recorded in the undo file and only committed transactions are written to disk, thereby allowing users to read the database. When we restore the next T-log backup, SQL Server fetches the uncommitted transactions from the undo file and checks against the new T-log backup whether they have been committed. If a transaction is committed, it is written to disk; otherwise it stays in the undo file until it gets committed or rolled back.
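To put the above in context, the TUF file is the undo file named in the STANDBY option of the log restore. A restore of this kind (the database name and paths here are just hypothetical examples) looks something like this:

--Hypothetical example: restoring a log backup in standby mode,
--which creates/uses the undo (TUF) file
RESTORE LOG [MyLogShippedDB]
FROM DISK = 'C:\LSCopy\MyLogShippedDB_20120112.trn'
WITH STANDBY = 'C:\LSCopy\MyLogShippedDB.tuf';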

A small graphical representation of the above statement is shown below -

I configured log shipping to test the TUF file and created a scenario like the one below -

1. Created a primary database.

2. Configured log shipping to another Instance within the same box.

3. Backup, Copy and Restore to happen every 15 minutes.

4. Continuously inserted data to the primary database to simulate TUF creation.

5. I was able to find the TUF file created under the same path where I had placed my system database files.

There have been changes to the path where the TUF file can be found. For SQL Server 2008 and above it is available in the data root as mentioned above, while in earlier versions it used to be in the LS_Copy folder.

 

 

Coming up next - What happens when I delete this file? So please stay tuned my friends.

Thanks for reading.

Partial database availability – A walk through


Partial database availability is an exciting feature, and I decided to write this blog post after observing many doubts related to it in the forums.

Let's assume a situation like the one mentioned below -

We have a database with multiple filegroups, and the data files reside separately in their respective filegroups. Now assume that we have a severe disk failure and the drive hosting one of the .ndf files is corrupted!

This will make the database inaccessible. We have multiple options to recover from this situation, and one of them is to restore the database from backup sets. Think of a situation where our database is super large and a restore will take around 30-45 minutes.

Do we really want our database users to wait until we complete the restore? What if we give them a portion of the database online while we work on the recovery part and bring everything online gradually?

Wow!!! (The business will just love this idea as soon as I tell them.) However, this solution is not so simple: it requires a lot of planning and testing, and the application should be able to work without a portion of the data.

Let's do a demo of this situation and understand how we can achieve partial database availability -

1. We will create a demo database

--Created a Database
CREATE DATABASE TEST_FILEGROUP

2. Create a new file group

--Create a new FileGroup
ALTER DATABASE TEST_FILEGROUP
ADD FILEGROUP ADDITIONAL

3. Add one additional data file to the database

--Add an additional data file to the database
ALTER DATABASE TEST_FILEGROUP
ADD FILE (NAME='NEW_DATA_FILE',
FILENAME='D:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2RD\MSSQL\DATA\NEW_DATA_FILE.ndf')
TO FILEGROUP ADDITIONAL

4. Validate them

sp_helpdb TEST_FILEGROUP
Name
------------------
TEST_FILEGROUP
TEST_FILEGROUP_log
NEW_DATA_FILE

5. Create a table [Employee] on primary file group and insert some data

--Create a Table on primary file group and Insert some data rows
 USE [TEST_FILEGROUP]
 CREATE TABLE Employee(ID Int Identity(1000,1),Name Varchar(20))
USE [TEST_FILEGROUP]
 INSERT INTO Employee (Name)
 SELECT 'John'
 UNION ALL
 SELECT 'Tim'
 UNION ALL
 SELECT 'Tracy'
 UNION ALL
 SELECT 'Jim'
 UNION ALL
 SELECT 'Ancy'

6. Create another table [HRRECORDS] on the additional file group and Insert some data

--Create another table on the additional file group and Insert some data
 USE [TEST_FILEGROUP]
 CREATE TABLE HRRECORDS(ID Int Identity(1000,1),Description Varchar(20))
 ON ADDITIONAL
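The insert statement for HRRECORDS is not shown in the script above; based on the rows returned later in this demo, it would have been along these lines:

--Insert some data into HRRECORDS (reconstructed from the demo output)
USE [TEST_FILEGROUP]
INSERT INTO HRRECORDS (Description)
SELECT 'IT Spec'
UNION ALL
SELECT 'DBA'
UNION ALL
SELECT 'Developer'
UNION ALL
SELECT 'Java Guy'
UNION ALL
SELECT '.NetSpec'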

7. Now we will proceed to take a file group backup for the purpose of this demo

--Take a backup of the ADDITIONAL filegroup for the purpose of this demo
 BACKUP DATABASE TEST_FILEGROUP
 FILEGROUP='ADDITIONAL'
 TO DISK='C:\TestBackup\Additional_FileGroup.bak'
Processed 16 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
 Processed 5 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
 BACKUP DATABASE...FILE=<name> successfully processed 21 pages in 0.318 seconds (0.495 MB/sec).

Now for the really interesting part of this demo. We are going to simulate an error situation -

We are going to stop the SQL engine service in order to delete the additional data file (the .ndf file). Once the service is stopped, we will be able to delete the .ndf file.

Note - This is just for demo purposes and should not be simulated in a real production environment. [A word of caution before the CTO/Manager gives you surprises!]

Once the .ndf file is deleted, start the engine and you will observe the below error straight away if you try to access our demo database.

This was expected and simply means that deleting the .ndf file caused the database to fail. What are we going to do now to bring this database up and running?

We can definitely restore the backup to bring this up; however, just think about the situation. Your database backups are huge, as we might be dealing with a huge database, and users would have to wait until the whole backup set is restored.

Do we have an RTO of around 45 minutes to 1 hour? Do we really need to wait for the whole restore to complete, to fix an issue with another filegroup, before users can connect to the database and access tables residing in the primary filegroup?

The short and sweet answer to this question is - NO. Starting with SQL 2005, a database can be made available to users as soon as its primary filegroup is up and running.

Now let's go back to our situation where the database is offline because of a corrupted/missing .ndf file.

The users agreed (this should actually be part of the DR strategy and not be decided at the last minute) that they can work without the HRRECORDS table, the one residing in the additional filegroup which just failed. They just need the Employee table to continue their work.

Wow!!! Now we can breathe some fresh air.

8. We can achieve this by taking the additional data file offline.

Note – If we take this file offline, we can bring it back only by using a file/filegroup backup or a regular full database backup.

--Taking additional file offline
ALTER DATABASE TEST_FILEGROUP
MODIFY FILE (NAME='NEW_DATA_FILE', OFFLINE);

We will need to recycle the service once again for the changes to take effect, and we can verify the change by checking the sys.database_files catalog view:

SELECT name, state_desc FROM sys.database_files

Name                         state_desc
TEST_FILEGROUP               ONLINE
TEST_FILEGROUP_log           ONLINE
NEW_DATA_FILE                OFFLINE

9. Now that the file is offline, the database is accessible.

A query against the Employee table will return results like these:

SELECT TOP 100 [ID]
 ,[Name]
 FROM [TEST_FILEGROUP].[dbo].[Employee]
ID    Name
1000 John
1001 Tim
1002 Tracy
1003 Jim
1004 Ancy

If we attempt to query the table on the additional data file, we will get an error like this:

SELECT TOP 100 [ID]
 ,[Description]
 FROM [TEST_FILEGROUP].[dbo].[HRRECORDS]

Msg 8653, Level 16, State 1, Line 1
The query processor is unable to produce a plan for the table or view ‘HRRECORDS’ because the table resides in a filegroup which is not online.

This is partial database availability, where the database as a whole is available minus some tables, and we have achieved it using filegroups/files.

Now, how can we bring this .ndf file back from the backup? Here is the process -

1. We will restore the filegroup from the backup which we had taken earlier.

RESTORE DATABASE TEST_FILEGROUP
FILEGROUP = 'ADDITIONAL' FROM DISK = 'C:\TestBackup\Additional_FileGroup.bak' WITH RECOVERY

Oops, we are missing something here -

/*Msg 3159, Level 16, State 1, Line 1
The tail of the log for the database “TEST_FILEGROUP” has not been backed up. Use BACKUP LOG WITH NORECOVERY to backup the log if it contains work you do not want to lose. Use the WITH REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of the log.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally. */

What does the message say? It says that the tail of the log has not been backed up, and that needs to be done before we restore the filegroup.

2. So let's go ahead and back up the tail of the log.

BACKUP LOG TEST_FILEGROUP TO DISK='C:\TestBackup\Tail.trn' WITH NORECOVERY
/*Processed 10 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
BACKUP LOG successfully processed 10 pages in 0.223 seconds (0.345 MB/sec).*/

3. Let's try to restore the filegroup now.

RESTORE DATABASE TEST_FILEGROUP
FILEGROUP = 'ADDITIONAL' FROM DISK = 'C:\TestBackup\Additional_FileGroup.bak' WITH RECOVERY

I specifically used RECOVERY here in the restore command to show the error message and demonstrate why the tail-log backup is needed.

As soon as we run the above command, we will get another message:

Processed 16 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
Processed 5 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
The roll forward start point is now at log sequence number (LSN) 21000000019100001. 
Additional roll forward past LSN 21000000024300001 is required to complete the restore sequence.
This RESTORE statement successfully performed some actions, 
but the database could not be brought online because one or more RESTORE steps are needed. 
Previous messages indicate reasons why recovery cannot occur at this point.
RESTORE DATABASE ... FILE=<name> successfully processed 21 pages in 0.214 seconds (0.736 MB/sec).

4. Finally, we will bring the database up by restoring the tail-log backup.

RESTORE DATABASE TEST_FILEGROUP
FROM DISK = 'C:\TestBackup\Tail.trn' WITH RECOVERY
Processed 0 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP' on file 1.
Processed 0 pages for database 'TEST_FILEGROUP', file 'NEW_DATA_FILE' on file 1.
Processed 7 pages for database 'TEST_FILEGROUP', file 'TEST_FILEGROUP_log' on file 1.
RESTORE LOG successfully processed 7 pages in 0.117 seconds (0.463 MB/sec).

Now our database is completely available with both tables, and as a test case we can query the HRRECORDS table to validate the data:

SELECT TOP 100 [ID]
 ,[Description]
 FROM [TEST_FILEGROUP].[dbo].[HRRECORDS]
ID   Description
1000 IT Spec
1001 DBA
1002 Developer
1003 Java Guy
1004 .NetSpec

Conclusion - Partial database availability is very useful for huge databases where secondary filegroups store historical data and the primary filegroup holds data critical for the business.

Backup and restore of files/filegroups is a very interesting topic; I will simulate this feature in SQL 2012 to see if there are any changes and will come up with more details.

I would love to hear your experiences dealing with filegroups. Thanks for reading.



BACKUP LOG WITH TRUNCATE_ONLY!!! Thank heaven it was discontinued from SQL Server 2008 onwards


Today I witnessed yet another user who was dealing with BACKUP LOG [database] WITH TRUNCATE_ONLY.

This is one of the features which was discontinued in SQL Server 2008 and above, and it was one of the best decisions Microsoft has taken, after SQL community experts gave valid reasons for discontinuing it.

I decided to write this blog post to explain why this feature is evil and to urge SQL Server 2000 (yes, you heard it right! There are many companies which still use it) and SQL Server 2005 users not to use it.

When you do a TRUNCATE_ONLY, it simply means that the log backup chain is broken and you are losing your ability to do a point-in-time restore.

That is the worst possible situation to be in if you are hit with a disaster.

Let's do a quick demo to demonstrate this -

[Using SQL Server 2005 with SP4 for the demo]

1. I created a test database named Test_Truncate.

CREATE DATABASE [Test_Truncate]

2. Created a table and inserted one record

USE [Test_Truncate]
CREATE TABLE dbo.Test_Truncate (ID INT IDENTITY (100,1), Description VARCHAR (50))
USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Mercedes')

3.  Now we will proceed and take a backup of this database

BACKUP DATABASE Test_Truncate TO DISK='D:\Test_Truncate_1st_Fullbackup.bak'

4. Insert one more record

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('BMW')

5. Now we will proceed and take a transaction log backup

BACKUP LOG Test_Truncate
TO DISK='D:\Test_Truncate_1st_Tranlog.trn'

6. Insert 3 more records

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Mini Cooper')
GO
INSERT INTO dbo.Test_Truncate VALUES('Ford')
GO
INSERT INTO dbo.Test_Truncate VALUES('GMC')

7. Now we will do a BACKUP LOG WITH TRUNCATE_ONLY.
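The command itself is not captured above; for this SQL Server 2005 demo, the discontinued syntax executed here is:

--Discontinued syntax, shown only because this demo runs on SQL Server 2005
BACKUP LOG Test_Truncate WITH TRUNCATE_ONLY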

Command(s) completed successfully.

8. Insert 3 more records

USE [Test_Truncate]
INSERT INTO dbo.Test_Truncate VALUES('Impala')
GO
INSERT INTO dbo.Test_Truncate VALUES('Cruz')
GO
INSERT INTO dbo.Test_Truncate VALUES('Nissan')

9. Now we try to take a transaction log backup again

BACKUP LOG Test_Truncate
TO DISK='D:\Test_Truncate_2nd_Tranlog.trn'

As soon as we execute this transaction log backup command, we get this error:

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

This means that we truncated the transaction log prior to this backup and broke the log chain.

If you don't take a full backup immediately after the error and then continue with normal log backups, you don't have a valid recovery chain to recover the database in the event of a failure.
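In that situation the fix is to take a fresh full backup right away so that a new log chain can start from it (the backup path here is just an example):

--Start a new backup chain immediately after TRUNCATE_ONLY (example path)
BACKUP DATABASE Test_Truncate TO DISK='D:\Test_Truncate_2nd_Fullbackup.bak'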

I have observed a different behavior in SQL Server 2000 (yes! Again, I said 2000) where transaction log backups are created after TRUNCATE_ONLY; however, they are not valid for a restore, and you will end up getting an error message like this if you try to restore them after a full backup restore.

Server: Msg 4330, Level 16, State 4, Line 1
The log in this backup set cannot be applied because it is on a recovery path inconsistent with the database.
Server: Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.

SQL Server 2005 is much better in that it doesn't allow any transaction log backups right after TRUNCATE_ONLY.

Conclusion -

You have two options to deal with log-filling issues:

1. Take regular transaction log backups if you are running in the FULL recovery model.

2. Switch your databases to the SIMPLE recovery model if you don't need point-in-time recovery.
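For option 2, the switch is a one-liner (the database name below is just the demo database used above):

--Switch to the SIMPLE recovery model if point-in-time recovery is not needed
ALTER DATABASE [Test_Truncate] SET RECOVERY SIMPLE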

Thanks for reading.

SQL Azure Compatibility Assessment Tool


Today I was thrilled to hear about the new tool released by Microsoft yesterday (3rd Jan 2012). The code name for the tool is “SQL Azure Compatibility Assessment Tool”, and it is the right tool to test compatibility if you are planning to move your databases to SQL Azure.

All the details of the release are available at

http://social.technet.microsoft.com/wiki/contents/articles/6246.aspx

I decided to test this wonderful tool, which will, in a matter of minutes, help me understand any compatibility issues if I move my databases to the cloud (SQL Azure).

For the test to be successful you will need a Live ID and a .dacpac.

I already had a Live ID which I have been using for many years, so I decided to go ahead with the process of creating a .dacpac.

How do I prepare a .dacpac - 

1. You will need to download and install SQL Server Data Tools to proceed and create a .dacpac

SQL Server Data Tools requires Visual Studio 2010 with SP1, and as I already had that installed on my test server, the tool installed correctly.

2. Opened the SQL Server Data Tools GUI to create a database project and imported a database which I had already backed up from SQL Server 2005 and restored to SQL Server 2012 RC0.

The next step was to build the project.

After the build was completed, a .dacpac was created. It was available under

\\Path\Visual Studio 2010\Projects\Database2\bin\Debug

Assessment after .dacpac creation

I went ahead and accessed the portal https://assess.sql.azure.com, logged in with my Live ID and chose New Assessment.

This gave me an option to upload the .dacpac which was created earlier. As soon as I uploaded the file, I got the Assessment In Progress screen.

I went ahead and did a refresh to see if the assessment had completed, and found that it was done and the report was ready.

The View button gave me this report, which had 2 sections: Not Supported and Need to fix.

My test database had 2 Not Supported findings, and they can be found here -