
Ashok Blog for SQL Lovers

Wednesday, 19 November 2014

Why a VIEW Takes a Long Time to Execute



In this article we discuss the performance of views, and why querying a view can be slower than querying the table directly. Please note that I am not talking about indexed views; those are a different thing and out of the scope of this article.
As with a stored procedure, the optimizer caches the resulting execution plan for further use.
Let's take an example:
We have a simple base table containing records; we create a view on this base table and then query both (the base table and the view) separately:

CREATE TABLE emp_Table
       (empid    INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
        empname  VARCHAR(50) NOT NULL,
        empsal   DECIMAL(20,2))
       
GO

CREATE VIEW vw_empView
AS
SELECT empid, empname, empsal
FROM   emp_Table

GO
-- Example-1
SELECT * FROM emp_Table

GO
-- Example-2
SELECT * FROM vw_empView

GO

In this example, which one executes faster?
It's Example-1, the direct query against the table object. To get a better understanding, let's look at the execution plans.



Both execution plans are the same. So why does executing the view take longer?
It's because SQL Server has to do extra work, such as looking up the view's definition in the system tables, before it can execute the underlying query.
This extra work is not part of the execution plan, so it appears that the two SELECT statements should run at the same speed. They don't, because some of the work SQL Server is doing is hidden.
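
If you want to observe this yourself, here is a quick sketch using the two objects above. SET STATISTICS TIME reports the parse and compile time separately from the execution time, and the view's extra resolution work happens before execution, so that is where any difference will show up:

-- Compare the two queries directly; check the "parse and compile time"
-- output in the Messages tab, not just the elapsed execution time.
-- (On a throwaway test instance you could run DBCC FREEPROCCACHE first
-- so that neither plan is already cached.)
SET STATISTICS TIME ON;

SELECT * FROM emp_Table;
SELECT * FROM vw_empView;

SET STATISTICS TIME OFF;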

Monday, 17 November 2014

Avoiding 'Failed to notify via email' errors

Ref:
http://sqlmag.com/blog/avoiding-failed-notify-operator-email-errors
If a picture is worth a thousand words, then the words associated with the following picture would probably involve a decent amount of muttering and cursing about the dreaded 'Failed to notify <operator name> via email' error message that you might see crop up in your environment every once in a while, on new or recently provisioned servers:

Email Alerts Good. Configuration Options – Not so Intuitive

As I’ve posted in the past, using Database Mail can be a great way to be notified of problems or issues that might occur with your server (such as corruption or IO problems) or with your jobs – provided, of course, that you’ve got some sort of backup or ‘watchdog’ for your alerting system to make sure that it doesn’t just fail silently without any warnings.
Only, as great as it is to set up a Notification for an operator (I highly recommend using an alias instead of binding actual/individual email-addresses to so-called operators) to be notified of when a job fails or runs into problems, there’s an extra configuration step that you’ll need to jump through to get this to work.
And the problem is that this extra/additional configuration step is obscenely easy to forget. In fact, I've forgotten about it enough times now that being burned by this issue on new servers that I've provisioned (or recently inherited) has caused me to blog about this step, in the hopes that doing so will 'fuse' it into my brain. That, and this extra step is so easy to forget about that I'm sure posting it here will help someone else.

Letting the SQL Server Agent Talk to Database Mail

As you might guess, getting Database Mail to play nicely with the SQL Server Agent is, sadly, a question of security. Or, stated differently, you'll get the dreaded 'Failed to notify … via email' errors IF you haven't explicitly allowed the SQL Server Agent to interact with Database Mail.
That said, it's happily a trivial operation to allow the SQL Server Agent and Database Mail to 'get jiggy' with each other and throw wild little parties for one another, whenever they want or need to. To do so, just right click on the SQL Server Agent node in SQL Server Management Studio and select Properties. Then switch to the Alert System node, enable the Mail profile you'd like to allow the SQL Server Agent to use, and restart the SQL Server Agent Service (NOT the SQL Server service or the box, just the SQL Server Agent Service, which you can pretty much safely do on MOST systems at ANY time without any big worries or concerns), and you're good to go.
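
If you would rather script this than click through SSMS, the same setting can be written via the undocumented xp_instance_regwrite procedure. A minimal sketch, where 'General' is an assumed profile name (substitute whichever Database Mail profile you actually use):

-- Undocumented but commonly used: point the SQL Server Agent at a
-- Database Mail profile. 'General' is an assumed profile name.
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'UseDatabaseMail',
    N'REG_DWORD', 1;

EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'DatabaseMailProfile',
    N'REG_SZ', N'General';

-- A restart of the SQL Server Agent service is still required afterwards.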




Move Database Files MDF and LDF to Another Location


Ref: http://blog.sqlauthority.com/2012/10/28/sql-server-move-database-files-mdf-and-ldf-to-another-location/

When a novice DBA or developer creates a database, they usually use SQL Server Management Studio. The T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts). Once the database goes into production, it will start to grow.
It is not common to keep a database in the same location where the OS is installed. Usually database files are kept on a SAN, a separate disk array, or SSDs, both for performance reasons and from a manageability perspective. The challenge comes up when a database was created at the default location and needs to move to a different location. Here is a quick tutorial on how you can do it.
Let us assume we have two folders loc1 and loc2. We want to move database files from loc1 to loc2.
USE master;
GO

-- Take the database into single user mode if you are facing errors.
-- This may terminate active transactions for the database.
ALTER DATABASE TestDB
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO

-- Detach DB
EXEC master.dbo.sp_detach_db @dbname = N'TestDB'
GO
Now move the files from loc1 to loc2, and then reattach them from the new location.
-- Move the MDF and LDF files from loc1 to loc2, then:
-- Re-attach DB
CREATE DATABASE [TestDB] ON
( FILENAME = N'F:\loc2\TestDB.mdf' ),
( FILENAME = N'F:\loc2\TestDB_log.ldf' )
FOR ATTACH
GO
Well, we are done. There is a little warning here for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or when there is a connection to the database that you are not able to kill manually after review.
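
As an alternative to detach/attach, the same move can be done with ALTER DATABASE ... MODIFY FILE, which keeps the database registered with the instance the whole time. A minimal sketch; the logical file names used here (TestDB and TestDB_log) are assumptions, so verify them first:

-- Verify the logical names and current physical locations first
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'TestDB');

-- Point the catalog at the new locations (takes effect when the DB next starts)
ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB, FILENAME = N'F:\loc2\TestDB.mdf');
ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB_log, FILENAME = N'F:\loc2\TestDB_log.ldf');

-- Take the database offline, move the physical files in the OS, then bring it back
ALTER DATABASE TestDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- (move the .mdf and .ldf files to F:\loc2 here)
ALTER DATABASE TestDB SET ONLINE;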

Copy Database from SQL 2005 to SQL 2008 (Lower to Higher Version)

We cannot copy a database from a higher version (a SQL 2008 database, Secondbase) to a lower version (SQL 2005, Secondbasedev) using the attach/detach or backup/restore methods.

But we can achieve it using Generate Scripts (with data) or the SQL Server Import and Export Wizard, both available in SQL 2008, at least for small databases.

Before doing this, we have to disable all constraints, perform the copy, and then re-enable the constraints.

-- Disable all the constraints in the database
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
Generate Scripts (with Data):

Save the generated script to a new query window or to a text file, then run it against the destination server.
-- Enable all the constraints in the database
EXEC sp_msforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"


SQL Server Import and Export Wizard:

Again, before doing this we have to disable all constraints, perform the copy, and then re-enable the constraints.

-- Disable all the constraints in the database
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"




  • Then the SQL Server Import and Export Wizard window will open. Press Next.
  • Choose a data source (in our case SQL Server 2008). Choose a destination (in our case SQL Server 2005).
  • Select 'Copy data from one or more tables or views'.
  • Select the source tables and the destination tables.
  • Click Next & Finish.
-- Enable all the constraints in the database
EXEC sp_msforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
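
One thing worth checking afterwards: the WITH CHECK variant re-validates existing data and marks the constraints trusted again, whereas re-enabling with plain CHECK CONSTRAINT (without WITH CHECK) would leave them untrusted. A quick sketch to confirm everything ended up trusted:

-- Any rows returned are constraints SQL Server still treats as untrusted
SELECT name, 'CHECK constraint' AS constraint_type
FROM sys.check_constraints
WHERE is_not_trusted = 1
UNION ALL
SELECT name, 'FOREIGN KEY'
FROM sys.foreign_keys
WHERE is_not_trusted = 1;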


Thursday, 13 November 2014

Enabling Distributed Transactions in Remote Servers


When we run a distributed transaction against an instance of SQL Server, we may receive an error message. This problem can be resolved by following the steps below:
      • Make sure that the MSDTC (Microsoft Distributed Transaction Coordinator) is enabled on both the servers.
      • Go to Start –> Run –> Type 'Services.msc'
      • Locate the 'Distributed Transaction Coordinator' service in the list.
      • Right click on the service and click Start. This starts the MSDTC service on the server (do the same on the remote servers as well).
      • Once the Service is started, the next step is to enable the Network Transaction.
      • Go to Start –> Run –> Type 'dcomcnfg.exe', or go to Control Panel –> Administrative Tools –> Component Services
      • In Windows 7, expand Console Root –> Component Services –> Computers –> My Computer –> Distributed Transaction Coordinator –> Local DTC, then right click and select Properties
      • In Windows XP, expand Console Root –> Component Services –> Computers –> My Computer, then right click and select Properties
    • Go to the 'Security' tab and check the 'Network DTC Access' checkbox
    • To allow the distributed transaction to run on the current server from a remote server, select the ‘Allow Inbound’ check box.
    • To allow the distributed transaction to run on a remote server from the current server, select the ‘Allow Outbound’ check box.
    • Under the Transaction Manager Communication group, select the ‘No Authentication Required’ option.
    • Click on OK in Security Dialog Box
    • If the settings do not take effect, restart the server where the changes were made
    We can check the state of the service by executing the command below in a query window (this requires the corresponding permissions):

    EXEC xp_servicecontrol N'querystate', N'msdtc'
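
Once MSDTC is enabled on both machines, a simple way to verify the configuration is to run a small distributed transaction against a linked server. A minimal sketch; the linked server, database, and table names below are all hypothetical:

-- All object names here are hypothetical; this assumes a linked server
-- named [RemoteServer] has already been set up.
BEGIN DISTRIBUTED TRANSACTION;

    -- Local work
    UPDATE dbo.LocalTable
    SET SomeColumn = 1
    WHERE Id = 1;

    -- Remote work through the linked server; this is the part that needs MSDTC
    UPDATE [RemoteServer].RemoteDB.dbo.RemoteTable
    SET SomeColumn = 1
    WHERE Id = 1;

COMMIT TRANSACTION;
-- If MSDTC is not configured correctly on both sides, the remote UPDATE
-- (or the COMMIT) fails with a distributed transaction error.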

Thursday, 30 October 2014

How Do We Know the Restoration Date & Time for a Database?



USE msdb
GO
SELECT restore_date, destination_database_name, user_name
FROM restorehistory
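
A slightly richer variant of the same query joins restorehistory to backupset, so you can also see which backup was restored and when it was originally taken:

SELECT rh.restore_date,
       rh.destination_database_name,
       rh.user_name,
       rh.restore_type,      -- D = database, L = log, F = file, etc.
       bs.backup_start_date,
       bs.backup_finish_date
FROM msdb.dbo.restorehistory rh
     INNER JOIN msdb.dbo.backupset bs
             ON rh.backup_set_id = bs.backup_set_id
ORDER BY rh.restore_date DESC;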


Tuesday, 28 October 2014

Setting Up Alerts for Long-Running Transactions


Ref:
http://sqlmag.com/blog/setting-alerts-long-running-transactions
The code below is something you can use to easily set up an alert for long-running transactions:
/* NOTE: You have to configure/set the following 3 variables */
DECLARE @AlertingThresholdMinutes int = 10;
DECLARE @MailProfileToSendVia sysname = 'General';
DECLARE @OperatorName sysname = 'Alerts';

-------------------------------------------------------------
SET NOCOUNT ON;

DECLARE @LongestRunningTransaction int;
SELECT
        @LongestRunningTransaction =
                MAX(DATEDIFF(n, dtat.transaction_begin_time, GETDATE()))
FROM
        sys.dm_tran_active_transactions dtat
        INNER JOIN sys.dm_tran_session_transactions dtst
                ON dtat.transaction_id = dtst.transaction_id;

IF ISNULL(@LongestRunningTransaction,0) > @AlertingThresholdMinutes BEGIN

        DECLARE @Warning nvarchar(800);
        DECLARE @Subject nvarchar(100);

        SET @subject = '[Warning] Long Running Transaction On ' + @@SERVERNAME;
        SET @Warning = 'Check SSMS > Server > Reports > Top Transactions By Age.';
       
        EXEC msdb..sp_notify_operator
                @profile_name = @MailProfileToSendVia,
                @name = @OperatorName,
                @subject = @subject,
                @body = @warning;
END
You'll just need to specify the name of the operator you'd like to notify, along with the Mail Profile to use to do so (you can learn about both via Books Online and from my previous post on favoring notification of operators over sending emails directly). You'll also want to specify how many minutes constitute a 'long-running transaction.' This will obviously vary from one server/workload to the next and from environment to environment. My recommendation, though (with this and with all forms of alerting), is to make sure you don't set this value so low that you're constantly getting alerts or notifications; alerts and notifications are useless if you train yourself to ignore them because they occur too frequently.
Otherwise, once you've specified all of the parameters as needed, you'll want to create a new SQL Server Agent job that runs every few minutes (where the actual frequency will depend upon your @AlertingThresholdMinutes value) so that you'll be notified if/when something goes over your specified threshold.
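
If it helps, here is a minimal sketch of creating such a job entirely in T-SQL. It assumes you have wrapped the alerting script above in a stored procedure (the name CheckLongRunningTransactions in master is hypothetical) and want a five-minute schedule:

USE msdb;
GO

-- Job name, wrapper procedure, and schedule below are assumptions; adjust to taste.
EXEC dbo.sp_add_job
    @job_name = N'Alert - Long Running Transactions';

EXEC dbo.sp_add_jobstep
    @job_name = N'Alert - Long Running Transactions',
    @step_name = N'Check for long-running transactions',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'EXEC dbo.CheckLongRunningTransactions;';  -- hypothetical wrapper proc

EXEC dbo.sp_add_jobschedule
    @job_name = N'Alert - Long Running Transactions',
    @name = N'Every 5 minutes',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,      -- subday units = minutes
    @freq_subday_interval = 5;  -- every 5 minutes

EXEC dbo.sp_add_jobserver
    @job_name = N'Alert - Long Running Transactions',
    @server_name = N'(local)';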

Why Would I Want to Set Up Alerts for Long-Running Transactions?

Of course, knowing how to set up alerts for long running transactions (and I’m SURE there are multiple ways to tackle this need) doesn’t exactly explain why you’d want to do so—or what the benefits are.
Simply put, there are a number of benefits to setting up alerts for long-running transactions: long-running transactions can cause plenty of (locking/blocking) problems in many cases, and their presence can be an indication of deeper problems or issues on many servers. For example, I had a similar alert set up on a client's box, and it recently started sending alerts during the middle of the night.
A bit of investigation revealed that the culprit was a batch-processing job being run late at night to get rid of non-valid users on an e-commerce site. The job in question typically ran for about 2-3 minutes most nights (and had done so for a very long time). But, recent changes to the logic to determine non-valid users had included a JOIN against a site-log tracking table to review which pages each user had visited as part of establishing their validity. That table, in turn, was missing some key indexes, had been experiencing serious growth, and had—in turn—caused this job to start taking over 3 hours because of the missing index and large amount of data. All of which was made ‘visible’ by virtue of a long-running transaction alert. (Consequently, and with a bit of index tuning, the job was back down to running at a more respectable time of 5-7 minutes every night.)
The point, however, is that the process and server in question wasn’t a huge priority or concern—but aspects of these related operations HAD managed to start bloating to the point where they were impacting other operations. And a simple alert for long-running transactions made this all visible.