This 64-bit application couldn’t load because your PC doesn’t have a 64-bit processor – Really!


Since last night I have been doing a lot of planning to set up a virtual environment on my laptop. It is a slightly complex one, as I will be dealing with 3 VMs on the laptop: one has to be set up as a domain controller, a Windows cluster has to be set up, and SQL 2012 has to be installed on all the VMs to do some real testing on Availability Groups.

Yes, some real testing on Availability Groups!

After intense planning I decided to install the very first VM using VirtualBox (https://www.virtualbox.org/). I sized the VM reasonably well, giving it 1 GB of memory, 25 GB of HDD space and, of course, CPU.

I mounted the ISO of Windows Server 2008 R2 SP1 and was ready to start the install process, when suddenly an error came up.

I was pretty sure that the laptop I was using had a 64-bit architecture, with an Intel Core i7 processor.

What else could go wrong?

There is one area you need to check first if you run into a similar issue: whether Virtualization Technology has been enabled in your BIOS.

Wow, cool isn't it?

Yes, this has to be checked, and Microsoft Virtual PC has a pretty decent article which will help you find where it is available in the BIOS for different laptop models.

http://www.microsoft.com/windows/virtual-pc/support/configure-bios.aspx

For my laptop this was indeed disabled by default. I enabled it and was back on track.

I hope this quick post will help someone, some day.

Thanks for reading, and keep building virtual machines to test SQL 2012!

TechEd India 2012 on-demand videos – Available now!


Yes! The wait is over, and you can now download the TechEd India 2012 session videos.

http://india.msteched.com/

I wanted to attend the event in person, but couldn't make it this time. Microsoft did a very good job of streaming most of the sessions and keynotes, and overall it was great fun.

Kudos to everyone who worked really hard for TechEd India 2012!

Thanks for reading.

Sit back, Relax and Enjoy! – Unattended uninstall of SQL Server instances


Today I had to remove multiple named instances of SQL 2012 RC0. Suddenly an idea came to my mind: is there a way we can remove multiple named instances all at once?

If you look at the screen below, you can see that there are multiple instances available on the same server, and I needed to remove the first three instances all at once.

If we use the Add/Remove option, the above screen will only allow us to choose one instance at a time for removal. This was indeed time consuming, and I was thinking of better ways to remove the instances all at once.

Is it possible to remove all the required instances at once? Apparently the answer is NO, you cannot remove all the required (selected) instances using the GUI.

I was then looking for ways to speed up the whole uninstall process, and the idea of a command-line uninstall came to my mind. Using the command line we can also make the uninstall unattended. This simply means that a SQL Server instance can be uninstalled in quiet mode, without any user interaction.

The command-line statement below was quickly prepared, and it made the uninstall much easier; beyond kicking off the command for each instance, I didn't have to be involved at all.

You will need to run this command from the setup bootstrap folder of the respective version of SQL Server.

Setup.exe /Action=Uninstall /FEATURES=SQL /INSTANCENAME=SERVER2012B /Q

/FEATURES=SQL will remove the Engine, Replication, Full-Text and Data Quality components, so this option should be used with a lot of care.

/Q runs the setup in silent mode, which is called quiet mode, with no user interaction.
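Since the original goal was to remove several instances in one go, the command can also be wrapped in a simple batch loop. Below is a minimal sketch, assuming a .cmd file saved in the setup bootstrap folder; the instance names are placeholders for the ones you want to remove:

REM remove_instances.cmd - run from the setup bootstrap folder
REM start /wait makes sure each uninstall finishes before the next one begins
FOR %%I IN (INSTANCE1 INSTANCE2 INSTANCE3) DO start /wait Setup.exe /Action=Uninstall /FEATURES=SQL /INSTANCENAME=%%I /Q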

Conclusion


I found the command-line option an easy and flexible solution, and I wouldn't mind leveraging it for non-production environments.

These command-line options can also be used for installation purposes, and I will write about that in the coming days.

Thanks for reading.

Target_recovery_time_in_seconds – Indirect checkpoint explained using extended events


 

Today I was querying the sys.databases system view in SQL 2012 and was curious to find a column named target_recovery_time_in_seconds. I was pretty sure that this column was not available before 2012.

I decided to learn more about this, and a quick search on BOL led me to a different world altogether. This column is part of a new enhancement/feature in SQL 2012 called 'Indirect Checkpoint'.

TARGET_RECOVERY_TIME is a new database-level setting in SQL 2012 (important to note that it is set per database), which overrides the sp_configure option recovery interval for that particular database.

Let's check the sp_configure value of recovery interval on one of my test instances.
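For reference, the value shown below can be pulled with sp_configure; recovery interval is an advanced option, so it has to be made visible first (a minimal sketch):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)';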

name                      minimum  maximum  config_value  run_value
recovery interval (min)   0        32767    0             0

Now let's check the target_recovery_time_in_seconds value for my test databases.

SELECT name, target_recovery_time_in_seconds FROM sys.databases

name                  target_recovery_time_in_seconds
master                0
tempdb                0
model                 0
msdb                  0
Test_Extd_Blocking    0
AdventureWorks2012    0

Let's now change the TARGET_RECOVERY_TIME for the database AdventureWorks2012. [Note - Please do not do this directly on a production database without proper testing!]

ALTER DATABASE [AdventureWorks2012] SET TARGET_RECOVERY_TIME = 10 SECONDS
GO
SELECT name, target_recovery_time_in_seconds FROM sys.databases
WHERE target_recovery_time_in_seconds <> 0

name                 target_recovery_time_in_seconds
AdventureWorks2012   10

What does this mean?

A checkpoint forces all dirty pages for the current database to be written to disk. Dirty pages are data pages that were modified after being brought into the buffer cache, but whose modifications have not yet been written to disk. Checkpoints play an important part in the recovery time of a database.

Before 2012, automatic checkpoints were purely controlled using the recovery interval sp_configure option, and there was no flexibility to control this at the database level. The default value is zero, which means SQL Server manages the interval automatically.

There are some limitations with the checkpoint algorithm before 2012, and they are very clearly explained by Microsoft PFE Parikshit Savjani (B/T) in his blog post, which can be read here. Another blog post which explained the concepts really clearly is from Sourabh Agarwal, yet another PFE (B), and that post can be read here. The major drawbacks of the old algorithm are bursts of massive disk write activity during checkpoints and unpredictable database recovery times.

With the 2012 indirect checkpoint algorithm, a dirty page list is maintained, along with a threshold for the target number of dirty buffers. A background process, the recovery writer, polls the dirty pages against the threshold and flushes them. The whole process is very clearly explained in the above-mentioned blog posts and I do not want to repeat it here. Please refer to both blogs for a detailed explanation with videos.

We can use extended events to check the behavior of checkpoints; you can add the events checkpoint_begin and checkpoint_end as per your requirement.
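As a reference, a bare-bones event session for this could look like the sketch below; the session name is my own choice, and a ring buffer target is used just to keep the example simple:

-- Minimal sketch: capture checkpoint start/end events to a ring buffer
CREATE EVENT SESSION [Watch_Checkpoints] ON SERVER
ADD EVENT sqlserver.checkpoint_begin,
ADD EVENT sqlserver.checkpoint_end
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [Watch_Checkpoints] ON SERVER STATE = START;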

I did a quick check on the behavior of checkpoints for a very small database which has one table. Below are the test results. A simple statement was written to insert 80,000 rows into the table.

USE [Demo_Checkpoints]
GO
INSERT INTO Test VALUES (1000000)
GO 80000

It took 1 minute and 17 seconds to complete the whole insert.

Default checkpoint behavior

The extended events session came up with the results below.

With the default behavior, a checkpoint was applied 5 times.

Behavior after changing the TARGET_RECOVERY_TIME for the database

For the purpose of testing, I changed the TARGET_RECOVERY_TIME for the database to 90 seconds. The event session was then run again.
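The change itself is a one-liner against the test database used above:

ALTER DATABASE [Demo_Checkpoints] SET TARGET_RECOVERY_TIME = 90 SECONDS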

This time the checkpoint was applied 3 times, and I could easily see the change in behavior.

One of the most important things I observed was that the same INSERT query took 1 minute and 28 seconds to complete this time. This clearly shows that changing the default behavior needs a lot of testing and should be done with a lot of caution.

Thanks for reading.

Distribution clean up: distribution job failing with error – Could not remove directory!


Today I came across an error for the Distribution clean up: distribution job in one of the environments.

Message - 
Executed as user: Domain\Sqlagentaccount. Could not remove directory ‘\\MachineName\ReplData\unc\MachineName_Replica_Replica10_PUB\20120403180393\’. Check the security context of xp_cmdshell and close other processes that may be accessing the directory. [SQLSTATE 42000] (Error 20015). The step failed.

[Note - I have altered the error message contents for security reasons]

This job had been running fine, and it suddenly started giving issues.

Troubleshooting steps - 

1. As a first step, I checked whether xp_cmdshell was configured (a quick way to check this is sketched after these steps). It was found that xp_cmdshell was indeed enabled.

2. I started to dig into the job to see what it runs. The job runs a stored procedure:

EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 0, @max_distretention = 72

3. When this is triggered from a job, the Agent account is used, hence I decided to run it from an SSMS query window.

I got the same error message as mentioned above, along with this:

Replication-@rowcount_only parameter must be the value 0,1, or 2. 0=7.0 compatible checksum. 1=only check rowcou: agent distribution@rowcount_only parameter must be the value 0,1, or 2. 0=7.0 compatible checksum. 1=only  scheduled for retry. Could not clean up the distribution transaction tables.

4. I was pretty sure that these error messages were a little misleading, and decided to go ahead and verify the security permissions of the UNC share.

\\MachineName\ReplData

5. Initially I was focusing on the security permissions of the UNC share, and assigned both the Agent account and the Database Engine service account Full Control.

6. Ran the job again and it failed yet again.

7. I decided to do some research on the web and found a blog post from the SQL Server support team. This post pointed out that the SQL Server account should also have Full Control on the UNC share.

8. I went ahead and granted Full Control on the UNC share to the SQL Server account, and the issue was resolved.
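For reference, the xp_cmdshell check mentioned in step 1 can be done with a quick sp_configure sketch like the one below (xp_cmdshell is an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell'; -- a run_value of 1 means it is enabled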

This was indeed strange behavior, because the job had been running fine before with the same SQL Server account permissions on the UNC share's Full Control list.

The only change which had happened within the environment was an SP4 upgrade, and that should not have caused this trouble.

As a test case, I removed the SQL Server account's permission on the UNC share once again and tried to run the job. It succeeded, which was yet again strange behavior.

Conclusion

This particular behavior is not documented anywhere, nor has it been noticed by many people within the SQL family. Hence, in case you face the same situation, you might have to double-check the permissions on the UNC share to isolate the issue and get a quick solution.

Thanks for reading.

max_files column for sys.traces – When can it have a NULL value?


Today there was an interesting question on the max_files column value which is available in the sys.traces view. The question was like this -

max_files in sys.traces is 0 and that means zero, not NULL – correct?

Before we get into the details, I would like to mention that the max_files column means the maximum number of rollover files. When we create a trace, we have the option to enable file rollover, which simply means that if the first trace file size is set to 5 MB, then after reaching that limit SQL Profiler will create another file to continue capturing the trace. This continues until the trace is stopped.

Now, can this max_files value ever be NULL? Yes; after some research I found that the value will be NULL if the is_rollover flag is set to 0 (i.e., we didn't choose 'enable file rollover' when creating the trace).

SELECT max_files, is_rollover, id AS Trace_ID FROM sys.traces
WHERE id <> 1 -- Filter out the default trace

Another interesting fact relates to the default trace. The default trace's max_files value is 5 and max_size is 20 MB, and this cannot be changed.
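You can verify this yourself with a quick query against the default trace row (a minimal sketch):

SELECT id, max_files, max_size FROM sys.traces WHERE is_default = 1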

Thanks for reading.

SQL 2012 Best Practice Analyzer – An overview


A couple of days back, Microsoft released the BPA for SQL 2012. There are no major changes that I could catch within the tool, and the whole install and scanning experience looked similar to the earlier version. SQL Server 2012 Best Practices Analyzer offers scanning of both the local computer and remote machines.

You will need Microsoft Baseline Configuration Analyzer 2.0 to use BPA, and we scan the instance using the Baseline Configuration Analyzer tool.

Both tools are available for download -

BPA

Baseline Config Tool

We get a good amount of really valuable info using BPA, and it's pretty neat.

One of the interesting errors which I noted was related to memory being paged out. I ran BPA on a virtual box with no instance-level settings changed, to see what BPA has to offer out of the box.

Category: Configuration

Source: localhost

Issue: The event log has messages that indicate that “Significant Portion of SQL Server Memory Has Been Paged Out”
EventId: 17890 Occurrence(s): 95 Last Occurred: 04/11/2012 11:05:31
WorkingSetOfSQLServerInstance – 110844
AvailablePhysicalMemory – 385012
TotalPhysicalMemory – 4193280
SystemMemoryState – Available physical memory is high

Impact: The performance and responsiveness of SQL Server services and associated applications may be affected when the processes are paged out

Conclusion

You should use BPA in your environments, and trust me, this will really help you to isolate issues well before they become problems.

Any Size Page Allocator – Memory allocation changes with SQL 2012


SQL 2012 comes with a lot of improvements and changes. One of the major changes which DBAs should be aware of relates to how memory allocations are made.

Prior to 2012

Prior to 2012 we had two memory managers: one handled allocations of 8 KB or less (single-page allocations), and the second handled allocations larger than 8 KB (multi-page allocations).

With SQL Server 2012


With SQL Server 2012 we have a single memory manager which manages both single-page and multi-page allocations.

We can call it the 'Any Size Page Allocator'.

With SQL 2012, multi-page allocations and CLR allocations come under the max/min server memory values.
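A visible side effect of this change is in the DMVs: the separate single_pages_kb and multi_pages_kb columns of sys.dm_os_memory_clerks from earlier versions are merged into a single pages_kb column in 2012. A quick sketch to look at the top consumers:

-- SQL 2012: pages_kb covers allocations of every size
SELECT TOP (10) type, name, pages_kb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;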

The manager in the picture looks sad; however, in reality he won't be sad, as he has a major role to play and will be busy most of the time.

Conclusion


Your max/min server memory values are very important, and they were important in the earlier versions too. However, the catch is that with 2012, everything now depends on these values.

Thanks for reading.

OLEDB Wait and DBCC – An interesting discussion


Yesterday there was an interesting discussion about the OLEDB wait type spiking when DBCC is executed.

This was a pretty interesting question for me, and I was curious to understand whether the OLEDB wait type really has a dependency on DBCC.

OLEDB is a common wait type in SQL Server. There was a survey conducted by the well-known SQL Server expert and master Paul S. Randal (Blog/Twitter) two years back, and OLEDB was one of the top 10 waits found across multiple environments.

This wait type indicates that a SPID has made a function call to an OLE DB provider and is waiting for the function to return the required data. This wait type may also indicate that the SPID is waiting for remote procedure calls, linked server queries, BULK INSERT commands or full-text search queries.

We asked the person who posted the question whether he was running any linked servers, and the answer was NO. He also confirmed that no other queries were running while the DBCC was running. There are some DMVs which might trigger OLEDB waits; however, as no other queries were running, the obvious next step was to check whether DBCC really uses OLEDB internally.

The short answer to confirm the relation is: yes! DBCC uses OLEDB internally, and a small spike is expected.

It's normal to get a spike in OLEDB waits; however, the average wait time will be relatively low when you compare this wait with other waits.

If you are using linked servers or full-text search and are observing OLEDB as a top wait type, then you will need to troubleshoot and isolate the issue.

Below is a quick demo showing an increase in this wait type when you run DBCC; a sketch of the queries used follows the steps.

1. Below are the wait stats before running DBCC.

2. We will run DBCC against the AdventureWorks2012 database.

3. Below are the wait stats after running DBCC.
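The queries behind these steps look something like the sketch below (a minimal sketch, assuming DBCC CHECKDB as the check being run):

-- Steps 1 and 3: snapshot the OLEDB wait before and after
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'OLEDB';

-- Step 2: the consistency check itself
DBCC CHECKDB ('AdventureWorks2012');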

As you can see, there is a small spike in the values; you could even run DBCC on a huge database and see how much of a spike you experience.

Conclusion

If DBCC is running fine and not running for a longer duration than expected, then this wait type is not necessarily a cause for concern.

I am interested to hear whether you are facing any issues with the OLEDB wait type in your environments, and what course of action you have taken.

Thanks for reading.

CodePlex runs Metro UI tiles – Downloaded AdventureWorks 2012 database


I was quite surprised to see Metro-style UI tiles on the CodePlex site while downloading the AdventureWorks 2012 database.

http://msftdbprodsamples.codeplex.com/

Great user experience is what this is all about, and earlier this month I saw a Metro UI for SQL Azure too.

Downloading AdventureWorks 2012 was pretty easy; a data file (.mdf) is provided. You have the option to easily attach the file, and a log file will be automatically created. The database will be online in seconds.
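Attaching a data file without its log can be done with a short script like this sketch (the file path is hypothetical; FOR ATTACH_REBUILD_LOG tells SQL Server to create a new log file):

CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\Data\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;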

Have you noticed the Metro UI somewhere else recently? I would like to hear your comments on this.

Thanks for reading.