SQL Server 2016 CTP 3 Setup – UI Improvements


There are some improvements in the setup UI for SQL Server 2016 CTP3, and it's great to see Microsoft making these changes as it receives feedback from customers and early adopters.

My earlier post on setup UI changes for CTP 2 is available here.

The first change is around granting the SQL Server engine service account the privilege necessary to enable Instant File Initialization (IFI) during setup itself.

This was a surprise when I saw it for the first time:

New Setup

You can notice the check box for granting the Perform Volume Maintenance Tasks privilege.

There is an interesting post by Nic Cain (B/T) on this topic, and you can read it here.
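As a quick aside, on later builds (SQL Server 2016 SP1 and above, if I remember right) you can also verify from T-SQL whether IFI is actually in effect for the engine service account; a minimal sketch, assuming one of those builds:

--instant_file_initialization_enabled is only available on newer builds
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;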

The second change is around tempdb configuration, and here are the new options:

New Setup2

The wizard will now check the number of cores on the server and come up with a pre-populated value for the number of tempdb data files.

It also provides options to define the initial size and the autogrowth size.

This might not be something the power users care about (by power users, I mean people who automate a lot of SQL Server deployments and are already doing this with custom scripts); however, it is a real value add for people who do manual SQL Server installations.

Going forward, we also don't need to turn on trace flags 1117 and 1118, as their behavior is taken care of by default for tempdb.
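If you want to double-check what setup actually produced, here is a minimal sketch that lists the tempdb data files with their sizes and growth settings (size and growth are stored in 8 KB pages, hence the arithmetic):

--List tempdb data files with size and growth converted to MB
SELECT name,
size * 8 / 1024 AS size_mb,
CASE WHEN is_percent_growth = 1
THEN CAST(growth AS varchar(10)) + ' %'
ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
END AS growth_setting
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';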

Conclusion:

I'm really glad that Microsoft is giving users options to follow some of the long-standing best practices during the install process itself, and this is a welcome change.

Thanks for reading and keep watching this space for more!

Setting the Min and Max Memory options on the fly – A word of caution!


Setting the Min and Max Memory options for SQL Server should definitely be encouraged, and no server should be left with the default settings. But here is a quick word of caution around this:

Do you want to change these numbers on the fly, at random, when there is a good amount of load on the production system?

The answer is 'No'. You shouldn't be changing these numbers on the fly for random testing on a production box under a good amount of load.

Changing these numbers using sp_configure and hardening the change with RECONFIGURE, or doing it via the GUI, will clear the plan cache, and that's not something you would like to happen on a busy production box.

Let me show a quick demo to explain this.

Step 1

I will query the Employee table in the AdventureWorks2012 database:

USE [AdventureWorks2012]
GO
SELECT * FROM [HumanResources].[Employee]

Step 2 

The above query will cause an ad hoc plan to be stored in the cache. We can verify this by querying the plan cache:

SELECT st.text, qp.query_plan, cp.cacheobjtype,
cp.objtype, cp.plan_handle
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE cp.objtype = 'Adhoc';

Min Max Cache Relation1

Step 3

I will change the Min Server Memory of the server using sp_configure:

--'min server memory (MB)' is an advanced option; if it is not visible,
--enable 'show advanced options' first and run RECONFIGURE
EXEC sp_configure 'min server memory (MB)', 501;
RECONFIGURE;

As soon as I run this command, I can check the SQL Server error log to see messages like:

SQL Server has encountered 1 occurrence(s) of cachestore flush for the ‘Object Plans’ cachestore (part of plan cache) due to some database maintenance or reconfigure operations.
SQL Server has encountered 1 occurrence(s) of cachestore flush for the ‘SQL Plans’ cachestore (part of plan cache) due to some database maintenance or reconfigure operations.
SQL Server has encountered 1 occurrence(s) of cachestore flush for the ‘Bound Trees’ cachestore (part of plan cache) due to some database maintenance or reconfigure operations.

Step 4

Query the plan cache again to see that it has been cleared.

Min Max Cache Relation2
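If you just want a quick count instead of the full listing, a minimal sketch:

--Count the ad hoc plans currently in cache
SELECT COUNT(*) AS adhoc_plans
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc';

A count at (or near) zero right after the RECONFIGURE confirms the flush.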


Conclusion

Changing instance-level settings needs to be done with care, and where possible these changes should be made during a proper change/maintenance window for production systems.

Thanks for reading and keep watching this space for more.

Leverage Policy Based Management for regular Page Verify = Checksum compliance monitoring


Page Verify = Checksum is one option I like all my servers to be compliant with. By default, all new databases are created with page verify = checksum, as this option is inherited from the model database.

Note – applicable from SQL Server 2005 and above.

SELECT name,page_verify_option_desc FROM sys.databases
WHERE database_id=3

name      page_verify_option_desc
-----     -----------------------
model     CHECKSUM

So far so good. But what will happen if we restore a database from SQL Server 2000, or restore a database that has a page verify option other than checksum?

Let's check that out.

For the purposes of the demo, Server A already has a database named Check_PageVerify whose page verify option is set to NONE (bad idea!).

****Server A****

--Check the page_verify option of the db
SELECT name,page_verify_option_desc FROM sys.databases
WHERE database_id=6

name                page_verify_option_desc
-----------------   -----------------------
Check_PageVerify    NONE

We will back this database up and restore it on Server B.

****Server A****

--Backup the database
BACKUP DATABASE Check_PageVerify
TO DISK='I:\Check_PageVerify.bak'

****Server B****

--RESTORE the database
RESTORE DATABASE Check_PageVerify
FROM DISK='I:\Check_PageVerify.bak'
WITH MOVE 'Check_PageVerify' TO 'D:\Check_PageVerify.mdf',
MOVE 'Check_PageVerify_log' TO 'D:\Check_PageVerify_log.LDF'

Data file and log file on the same drive is another bad idea, but as this is a demo we can ignore it.

On Server B, let's check the page verify option.

****Server B****

--Check the page_verify option of the db
SELECT name,page_verify_option_desc FROM sys.databases
WHERE database_id=6

name                page_verify_option_desc
-----------------   -----------------------
Check_PageVerify    NONE

I started this post by saying Page Verify = Checksum is one option I like all my servers to be compliant with. After this restore, my server is no longer compliant for page verify, and I will never come to know about this miss unless I regularly do compliance checks.
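A one-off check is easy enough with plain T-SQL; here is a minimal sketch that lists every database deviating from checksum:

--Databases whose page verify option is not CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

Running this by hand on every server, every week, is the part that doesn't scale.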

So how do we automate those compliance checks? This is where Policy Based Management (PBM) comes into the picture.
We will need to create a condition first using the wizard.

PBM PageVerify1

I'm using the 'Database Options' facet for this demo and added the expression @PageVerify = Checksum.

Now, lets create a policy for this condition.

PBM PageVerify2

When creating the policy, we will need to choose the condition we created earlier. The evaluation modes available for this policy are of three types:

1. On demand
2. On schedule
3. On change: log only

To know more about PBM concepts, please refer to http://technet.microsoft.com/en-us/library/bb510667.aspx

We are picking On schedule for this demo. This option needs a schedule to be created, so we will create one that runs at 12 AM every Sunday.

PBM PageVerify3

Once the schedule is created, we need to ensure that the policy is enabled.

PBM PageVerify4

The idea is to have the schedule run every Sunday to check whether any databases violate the page verify = checksum rule, so that on Monday morning, when we come to work, we just need to check the policy history for any violations.

For demo purposes, let's trigger the schedule manually.

PBM PageVerify5

This is the job that was created automatically for the schedule we created earlier.

After running the job successfully, all we need to do is check the history of the policy.

PBM PageVerify6

There you go, we got the first violation details for the policy:

PBM PageVerify7

It clearly says that the database Check_PageVerify is not compliant with the policy we created.

If you have multiple databases that are not compliant, you have the option to evaluate the policy manually, choose the ones that are not compliant, and force them into compliance.

PBM PageVerify8

PBM PageVerify9

PBM PageVerify10

PBM PageVerify11

Great, now we have everything under control and all the databases are compliant.
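If you'd rather bring a straggler back into compliance with plain T-SQL instead of forcing it through PBM, it's a one-liner (using the demo database from above):

--Set the recommended page verify option
ALTER DATABASE Check_PageVerify SET PAGE_VERIFY CHECKSUM;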

Conclusion

There are great things we can do with PBM, and if you have a better way to achieve this using PBM (alternate facets, conditions, etc.) or any other method, I'm eager to hear from you.

Keep watching this space for more.

Thanks for reading.

Central Management Servers to Evaluate Policies – “All about being in control”


Policy Based Management, or PBM, was first introduced in SQL Server 2008. This feature helps database administrators create policies and ensure that their environment is in compliance with them.

Let's take the simple example of recovery model. Using PBM, a DBA has the ability to create a policy that checks the recovery model, so that the creation of a database with a recovery model other than Full can be evaluated, or the policy can even be enforced.

What if a DBA is running an environment with 100 production servers and wants to evaluate a policy across all of them?

The short answer is that we can use Central Management Servers to import policies and evaluate them.

Demo

1. Using SSMS, I created a policy called 'Check_Autoclose_Policy_Enterprise' which uses the condition 'Check_AutoClose_Condition_Enterprise'.

You can refer to the BOL topic [here] to understand how to create a policy and a condition.

This policy simply means that I have set a condition with an expected value of 'False' for the AutoClose property, and a policy is created for that condition.

If I evaluate that policy against my servers, it will check the condition for each and every database and report any deviations. If AutoClose = True, then I am notified.

2. Now I will proceed and export this policy. The export action creates an XML file, which I will store on the local drive.

3. I will now proceed to use Central Management Servers (available under Registered Servers). For the purposes of this demo, I have already added 2 instances of SQL Server 2012 RC0 to Central Management Servers, as shown below.


4. Using the root-level Central Management Server, we can import the policy which I had created.

5. Once the import action is completed, the policy is deployed across both instances.

6. For the purposes of the demo, I created two similar databases, named Test_PBM, on both Instance B and Instance C (a T-SQL sketch of this setup follows below).

For Instance B – Autoclose is set as False

For Instance C – Autoclose is set as True
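For reference, a minimal sketch of how that setup could be applied with T-SQL, run against each instance respectively:

--On Instance B: AutoClose off, so the database is compliant
ALTER DATABASE Test_PBM SET AUTO_CLOSE OFF;

--On Instance C: AutoClose on, deliberately violating the policy
ALTER DATABASE Test_PBM SET AUTO_CLOSE ON;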

7. Using the root-level Central Management Server, we can now evaluate the policy.

8. Once evaluation is completed, we are provided with a clear explanation of how the evaluation went and which databases were not in compliance with our policy.

The policy was created with the condition that the Auto Close property be False for all databases, and after evaluation we found one deviation: our Test_PBM database residing on Instance C.


Conclusion – The power of PBM and Central Management Servers can be combined for great control.

Have you deployed anything like this in your environment? I am interested to hear about your scenarios.

Thanks for reading.

Backup feature – COPY_ONLY


All ad-hoc backup requests should be treated as important, as they can harm your recovery plan during an emergency. Starting with SQL Server 2005, we have the option to create a COPY_ONLY backup, which does not alter the differential base LSN (the differential_base_lsn determines which full backup a differential is based on, and therefore the sequence in which a combination of backups must be restored during recovery).

This concept is proved and tested using the scripts below.

--Backup the database AdventureWorksDW

BACKUP DATABASE AdventureWorksDW
TO DISK = 'C:\Backup\AdventureWorksDW_Full.bak'

--Check the differential_base_lsn

SELECT DB_NAME(database_id) AS [DB Name],
differential_base_lsn AS 'Note differential base LSN'
FROM sys.master_files
WHERE database_id = DB_ID('AdventureWorksDW')
AND type_desc = 'ROWS'
GO

/*LSN found was 31000000011300037*/

--Prove that a normal full backup will change the differential_base_lsn

BACKUP DATABASE AdventureWorksDW
TO DISK = 'C:\Backup\AdventureWorksDW_Full.bak'

GO

SELECT DB_NAME(database_id) AS [DB Name],
differential_base_lsn AS 'Note differential base LSN'
FROM sys.master_files
WHERE database_id = DB_ID('AdventureWorksDW')
AND type_desc = 'ROWS'
GO

/*LSN found was 31000000013800037*/

--Now take a Copy-Only backup and we can note that the LSN is not changed

BACKUP DATABASE AdventureWorksDW
TO DISK = 'C:\Backup\AdventureWorksDW_Full.bak'
WITH COPY_ONLY

GO

SELECT DB_NAME(database_id) AS [DB Name],
differential_base_lsn AS 'Note differential base LSN'
FROM sys.master_files
WHERE database_id = DB_ID('AdventureWorksDW')
AND type_desc = 'ROWS'
GO

/*LSN found was 31000000013800037*/
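The same courtesy extends to log backups: assuming the database is in the FULL recovery model, a COPY_ONLY log backup does not truncate the log or break the existing log backup chain. A minimal sketch (hypothetical backup path):

--Out-of-band log backup that leaves the log chain intact
BACKUP LOG AdventureWorksDW
TO DISK = 'C:\Backup\AdventureWorksDW_Log_CopyOnly.trn'
WITH COPY_ONLY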

Best Practice: Restoring the MSDB database – 'Error: Subsystem % could not be loaded'


The paths to the SQL Server Agent subsystem DLLs are stored in the table msdb.dbo.syssubsystems; hence, when you restore the MSDB database, you have to follow the process below to delete the entries and repopulate them:

USE msdb
GO
DELETE FROM msdb.dbo.syssubsystems
EXEC msdb.dbo.sp_verify_subsystems 1
GO
You will need to restart the SQL Server Agent service to be back in business.
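To confirm the table was repopulated correctly, a quick check against msdb:

--Each Agent subsystem should again point to a valid DLL path
SELECT subsystem, subsystem_dll
FROM msdb.dbo.syssubsystems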

Best Practice – Never RDP to a Production Server and Run Tools like SSMS/Visual Studio


Have you got the habit of RDPing to the server, launching tools like SSMS, and running scripts? If your answer is 'YES', then it would be good to avoid doing that and follow the best practice below.

It is a good practice not to RDP to a production server, open SSMS and other tools like Visual Studio, and start working on them there.

Resources like memory and CPU are required to run these tools, and consuming them on a production server is not recommended.

As a best practice, always launch SSMS from your local box to connect to the instance and work on it.