Tuesday, July 13, 2010

Top 10 SQL Server 2005 Performance Issues for Data Warehouse and Reporting Applications

Published: November 6, 2006


Relational Data Warehouse or Reporting workloads are characterized by low volumes of very large transactions. These applications are often identified as having predominantly read workloads (e.g. Decision Support, Analysis, and Reporting) with periodic feeds or batch loads. It is important to keep these characteristics in mind as we examine the significance of database design, resource utilization, and system performance. The top performance bottlenecks, or gotchas to avoid, for Data Warehouse or Reporting applications are outlined below.

Database Design issue if….

Excessive sorting operations are performed. If you continually perform the same sorting operations over and over, appropriate indexing can eliminate them.

Excessive RID lookups are performed on heap tables. RID lookups mean extra IOs are required to retrieve columns that are not in the index used. This can be avoided with covered nonclustered indexes.

Key lookups against the clustering keys look like joins; however, they are marked as “lookups” only in the XML showplan. These can be avoided with covering nonclustered indexes.

A potentially beneficial index is missing on join columns resulting in HASH joins. Indexes on join columns may avoid the hash.
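The three items above share one fix. As a sketch (the table and column names here are hypothetical, not from the article), a covering nonclustered index carries every column a query touches, so the optimizer needs no RID or key lookup, no sort, and no hash join on the indexed column:

```sql
-- Hypothetical fact table, for illustration only. A query such as:
--   SELECT OrderDate, Amount FROM dbo.FactSales
--   WHERE CustomerKey = @c ORDER BY OrderDate;
-- is fully covered (no lookup, no sort) by:
CREATE NONCLUSTERED INDEX IX_FactSales_CustomerKey
ON dbo.FactSales (CustomerKey, OrderDate)   -- key columns: seek + ordered scan
INCLUDE (Amount);                           -- non-key column carried at leaf level
```

The INCLUDE clause (new in SQL Server 2005) keeps non-key columns out of the index key, so they add no width to the intermediate levels of the B-tree.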

CPU gotchas….

If signal waits are > 25% of total waits, there is a CPU bottleneck. Signal waits measure the time spent in the runnable queue waiting for a CPU; see sys.dm_os_wait_stats for signal waits and total waits.
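The guideline above can be checked directly; this query (a common diagnostic pattern, not from the original article) computes the signal-wait percentage from sys.dm_os_wait_stats:

```sql
-- Share of signal (CPU) waits in total waits. The DMV is cumulative since
-- the last restart or DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR).
SELECT
    SUM(signal_wait_time_ms) AS signal_wait_ms,
    SUM(wait_time_ms)        AS total_wait_ms,
    CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
         AS DECIMAL(5, 2))   AS pct_signal_waits   -- > 25 suggests CPU pressure
FROM sys.dm_os_wait_stats;
```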

Avoid inappropriate plan re-use. If the query is identical, plan re-use is a good thing. However, query parameterization that allows plan re-use is only appropriate when the result set (and intermediate work tables) are of similar size to those of the original plan. If result-set sizes vary significantly with differing parameter values, as is common in data warehouse scenarios, plan re-use can be detrimental: bad plans lead to longer-running queries and IO or memory pressure. In such cases the cost of plan generation is preferable to plan re-use. Unlike OLTP, data warehouse queries are not always identical in terms of result sets or optimal query plans.

Memory bottleneck if….

Sudden big drop in page life expectancy. DW applications (e.g. big transactions) can experience big drops in page life expectancy, caused by a cache flush from a big read. See the Perfmon object SQL Server: Buffer Manager.

Pending memory grants. See the counter Memory Grants Pending in the Perfmon object SQL Server: Memory Manager. Large memory grants are common in Data Warehouse applications, and a query cannot execute until its memory grant occurs, so more memory may help.

Sudden drops in, or a consistently low, SQL cache hit ratio. Either may indicate memory pressure or missing indexes.
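The Perfmon counters named in these three items are also exposed through the sys.dm_os_performance_counters DMV, so they can be sampled from T-SQL; for example:

```sql
-- Same values Perfmon shows; counter names match the Perfmon counter names.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Memory Grants Pending',
                       'Buffer cache hit ratio');
```

Note that ratio counters such as Buffer cache hit ratio must be divided by their matching “base” counter row to get the percentage Perfmon displays.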

IO bottleneck if…

High average disk seconds per read. The best metrics for IO performance are average disk seconds per read and average disk seconds per write. When the IO system is NOT under significant load, there is no disk queuing, so disk seconds per read or write is as good as it gets; normally a read completes in 4-8 milliseconds when there is no IO pressure. Factors in IO throughput are the number of spindles and drive throughput, such as sequential and random IOs per second (according to the vendor). As IO requests increase, you may notice disk queuing, whose effects show up as high disk seconds per read or write. Periodic higher values for disk seconds/read may be acceptable for many applications. For high-performance applications, sophisticated SAN subsystems provide greater IO scalability and resiliency in handling spikes of IO activity. Sustained high values for disk seconds/read (>15 ms) do indicate a disk bottleneck.

High average disk seconds per write. See Perfmon Logical or Physical Disk. Data Warehouse loads can be either logged, with inserts, updates, or deletes, or non-logged, using bulk copy. Logged operations require transaction log writes. A transaction log write can be as fast as 1 ms (or less) in high-performance SAN environments. For many applications, a periodic spike in average disk seconds per write is acceptable considering the high cost of sophisticated SAN subsystems. However, sustained high values for average disk seconds/write are a reliable indicator of a disk bottleneck.

Big IOs such as table and range scans may be due to missing indexes.
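Alongside Perfmon, SQL Server 2005 exposes cumulative IO stalls per database file through the sys.dm_io_virtual_file_stats function; a sketch of the T-SQL analogue of the disk-seconds counters discussed above:

```sql
-- Average stall (ms) per read and per write for every database file,
-- cumulative since instance startup. NULLIF guards against division by zero.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    vfs.file_id,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs   -- NULL, NULL = all DBs, all files
ORDER BY avg_read_stall_ms DESC;
```

Sustained averages above the ~15 ms threshold mentioned above point at the specific files (and hence drives) that are bottlenecked.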

Blocking bottleneck if….

Index contention. Look for high lock and latch waits in sys.dm_db_index_operational_stats. Compare with lock and latch requests.

High average row lock or latch waits. The average row lock or latch waits are computed by dividing lock and latch wait milliseconds (ms) by lock and latch waits. The average lock wait ms computed from sys.dm_db_index_operational_stats represents the average time for each lock.
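A sketch of the computation described above, dividing the wait milliseconds by the wait counts in sys.dm_db_index_operational_stats (NULLIF guards against division by zero):

```sql
-- Average lock and latch wait per index in the current database.
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.index_id,
    s.row_lock_wait_in_ms   / NULLIF(s.row_lock_wait_count, 0)   AS avg_row_lock_wait_ms,
    s.page_latch_wait_in_ms / NULLIF(s.page_latch_wait_count, 0) AS avg_page_latch_wait_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
ORDER BY s.row_lock_wait_in_ms DESC;
```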

Blocked process report shows long blocks. See sp_configure “blocked process threshold” and the Profiler “Blocked process report” under the Errors and Warnings event.
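For example, to have SQL Server raise the Blocked Process Report event for any block lasting 20 seconds or longer (the threshold value here is illustrative):

```sql
-- 'blocked process threshold' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;  -- seconds; 0 disables the report
RECONFIGURE;
```

With the threshold set, trace the Blocked process report event in Profiler to capture the blocking and blocked statements as XML.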

High number of deadlocks. See Profiler “Graphical Deadlock” under Locks event to identify the statements involved in the deadlock.

Network bottleneck if….

High network latency coupled with an application that incurs many round trips to the database.

Network bandwidth is used up. See the Packets/sec and Current Bandwidth counters in the Network Interface object of Performance Monitor. For TCP/IP with 1,500-byte frames, actual bandwidth is computed as packets/sec * 1500 * 8 / 1,000,000 Mbps; for example, 5,000 packets/sec is roughly 60 Mbps.

Wait statistics gotchas...

Since Data Warehouse and Reporting workloads are largely reads, which are compatible with other reads, incompatible exclusive lock waits would ordinarily only come into play during batch loads or periodic feeds. If the top wait statistics are LCK_x or PAGELATCH_EX, see “SQL Server 2005 Performance Tuning using Waits & Queues” for an explanation of sys.dm_os_wait_stats.

There is an IO bottleneck if the top wait statistics in sys.dm_os_wait_stats are related to IO, such as ASYNC_IO_COMPLETION, IO_COMPLETION, LOGMGR, WRITELOG, or PAGEIOLATCH_x.

Indexing gotchas.

Large data warehouses can benefit from more indexes. Indexes can be used to cover queries and avoid sorting. For a data warehouse application, the cost of index overhead is only paid when data is loaded.

Check for missing indexes in sys.dm_db_missing_index_group_stats, sys.dm_db_missing_index_groups, and sys.dm_db_missing_index_details.
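A sketch joining the three DMVs named above; the ordering expression is a common ranking heuristic (average impact weighted by usage), not an official metric:

```sql
-- Top missing-index candidates for the whole instance.
SELECT TOP 10
    d.statement AS table_name,                    -- fully qualified table
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,                           -- candidates for INCLUDE
    gs.user_seeks + gs.user_scans AS uses,
    gs.avg_user_impact                            -- estimated % cost reduction
FROM sys.dm_db_missing_index_group_stats AS gs
JOIN sys.dm_db_missing_index_groups  AS g ON gs.group_handle = g.index_group_handle
JOIN sys.dm_db_missing_index_details AS d ON g.index_handle  = d.index_handle
ORDER BY gs.avg_user_impact * (gs.user_seeks + gs.user_scans) DESC;
```

Treat the output as suggestions to evaluate, not indexes to create blindly; the DMVs are reset at instance restart and do not consider index maintenance cost.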

Watch out for fragmentation.

Excessive fragmentation is problematic for big IO operations. The dynamic management table-valued function sys.dm_db_index_physical_stats returns the fragmentation percentage in the column avg_fragmentation_in_percent. Fragmentation should not exceed 25%. Reducing index fragmentation can benefit big range scans, common in Data Warehouse and Reporting scenarios.

Consider Table Partitioning for fast loads

For the large tables common in Data Warehouses, table partitioning offers important performance and manageability advantages. For example, the fastest type of load is a non-logged bulk copy, which requires that indexes be dropped. This is not feasible on a huge billion-row table UNLESS you use table partitioning: create a staging table identical to the large table (minus indexes), load it with a fast non-logged bulk copy, then add indexes and constraints to the staging table. A metadata-only SWITCH IN operation then switches pointer locations between the populated staging table and the empty target partition of the partitioned table, resulting in a fully populated partition and an empty staging table. Besides a fast bulk copy, the staging table eliminates blocking in the large partitioned table during the load. For more information refer to “Loading Bulk Data into Partitioned Tables [ http://technet.microsoft.com/en-us/library/cc966380.aspx ] ”.

In addition to fast loads, partitioned tables allow fast deletes (for archiving or sliding-window deletes): large logged deletes are replaced with metadata-only partition SWITCH OUT operations that switch pointer locations between the full partition (to be ‘deleted’) and an empty monolithic table. The SWITCH OUT results in an empty partition and a fully populated monolithic staging table, which can thereafter either be dropped or added to a partitioned archive table using SWITCH IN. Partitions also provide manageability improvements when combined with specific filegroup placement, allowing for customized backup and restore strategies.
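A minimal sketch of the SWITCH IN / SWITCH OUT pattern described above. All object names, partition numbers, and the file path are hypothetical; the staging table must already match the partitioned table's schema, indexes, and filegroup, and carry a trusted CHECK constraint bounding it to the target partition's range:

```sql
-- Load the (index-free) staging table; TABLOCK enables minimal logging
-- under the bulk-logged or simple recovery model.
BULK INSERT dbo.FactSales_Stage
FROM 'D:\feeds\sales_feed.dat'
WITH (TABLOCK);

-- ...create indexes and the bounding CHECK constraint on the staging table...

-- Metadata-only: the loaded rows become partition 5 of the big table.
ALTER TABLE dbo.FactSales_Stage
    SWITCH TO dbo.FactSales PARTITION 5;

-- Sliding-window delete is the mirror image: the oldest partition's rows
-- move out to an empty monolithic table, instantly.
ALTER TABLE dbo.FactSales
    SWITCH PARTITION 1 TO dbo.FactSales_Archive;
```

Because both SWITCH operations only update metadata, they complete in milliseconds regardless of row count, which is the whole point of the pattern.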

In contrast to OLTP, a data warehouse or Reporting application is characterized by a small number of large SELECT transactions, each very different. The implications are significant for database design, resource usage, and system performance. These distinctions result in very different objectives and resource utilization profiles.


Reference : Microsoft Technet

Tuesday, July 6, 2010

Get Detail Index Fragment in SQL Server 2005

SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS TableIndexName,
    phystat.avg_fragmentation_in_percent,
    CASE
        WHEN phystat.avg_fragmentation_in_percent < 30 THEN 'Reorganize'
        WHEN phystat.avg_fragmentation_in_percent >= 30 THEN 'Rebuild'
    END AS action_to_take
FROM sys.dm_db_index_physical_stats(DB_ID('PUBS'), NULL, NULL, NULL, 'DETAILED') phystat
INNER JOIN sys.indexes i
    ON i.object_id = phystat.object_id
    AND i.index_id = phystat.index_id
WHERE phystat.avg_fragmentation_in_percent > 10
  AND phystat.avg_fragmentation_in_percent < 40

Thursday, April 29, 2010

Backup and Restore System Database

************************ Backup *****************************************
- backup Database to Disk

backup database master to disk = 'path'

- Backup Database to a backup device

Create backup device

sp_addumpdevice @devtype = 'disk', @logicalname = 'logical name', @physicalname = 'store path'

Backup Database to device

backup database master to @devicename

************************ Restore *****************************************

1. Install the new SQL Server Instance
2. Start SQL Server Instance
3. Install Service Pack and hotfix
4. Stop SQL Agent
5. Start the SQL Server in single-user mode
6. Use SQLCMD log on to SQL Server instance with system administrator account
7. Restore the master database by executing the command

restore database [master] from disk = 'path' with replace

Database Mirroring with T-SQL

This example shows all the stages required to create a database mirroring session with a witness using Windows Authentication. The examples in this topic use Transact-SQL. Note that as an alternative to using Transact-SQL steps, you can use the Configure Database Mirroring Security Wizard for database mirroring setup. For more information, see Managing Database Mirroring (SQL Server Management Studio).
Prerequisite
________________________________________
The example uses the AdventureWorks sample database, which uses the simple recovery model by default. To use database mirroring with this database, you must alter it to use the full recovery model. To do this in Transact-SQL, use the ALTER DATABASE statement, as follows:

USE master;
GO
ALTER DATABASE AdventureWorks
SET RECOVERY FULL;
GO
For information on changing the recovery model in SQL Server Management Studio, see How to: View or Change the Recovery Model of a Database (SQL Server Management Studio).
Example
________________________________________
In this example, the two partners and the witness are the default server instances on three computer systems. The three server instances run in the same Windows domain, but the user account for the example's witness server instance is different.
The following table summarizes the values used in this example.
Initial mirroring role   Host system    Domain user account
Principal                PARTNERHOST1   Mydomain\dbousername
Mirror                   PARTNERHOST5   Mydomain\dbousername
Witness                  WITNESSHOST4   Somedomain\witnessuser
1. Create an endpoint on the principal server instance (default instance on PARTNERHOST1).

CREATE ENDPOINT Endpoint_Mirroring
STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=PARTNER)
GO
--Partners under same domain user; login already exists in master.
--Create a login for the witness server instance,
--which is running as Somedomain\witnessuser:
USE master ;
GO
CREATE LOGIN [Somedomain\witnessuser] FROM WINDOWS ;
GO
-- Grant connect permissions on endpoint to login account of witness.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [Somedomain\witnessuser];
GO
2. Create an endpoint on the mirror server instance (default instance on PARTNERHOST5).

CREATE ENDPOINT Endpoint_Mirroring
STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=ALL)
GO
--Partners under same domain user; login already exists in master.
--Create a login for the witness server instance,
--which is running as Somedomain\witnessuser:
USE master ;
GO
CREATE LOGIN [Somedomain\witnessuser] FROM WINDOWS ;
GO
--Grant connect permissions on endpoint to login account of witness.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [Somedomain\witnessuser];
GO
3. Create an endpoint on the witness server instance (default instance on WITNESSHOST4).

CREATE ENDPOINT Endpoint_Mirroring
STATE=STARTED
AS TCP (LISTENER_PORT=7022)
FOR DATABASE_MIRRORING (ROLE=WITNESS)
GO
--Create a login for the partner server instances,
--which are both running as Mydomain\dbousername:
USE master ;
GO
CREATE LOGIN [Mydomain\dbousername] FROM WINDOWS ;
GO
--Grant connect permissions on endpoint to login account of partners.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [Mydomain\dbousername];
GO
4. Create the mirror database. For more information, see How to: Prepare a Mirror Database for Mirroring (Transact-SQL).
5. On the mirror server instance on PARTNERHOST5, set the server instance on PARTNERHOST1 as the partner (making it the initial principal server instance).

ALTER DATABASE AdventureWorks
SET PARTNER =
'TCP://PARTNERHOST1.COM:7022'
GO
6. On the principal server instance on PARTNERHOST1, set the server instance on PARTNERHOST5 as the partner (making it the initial mirror server instance).

ALTER DATABASE AdventureWorks
SET PARTNER = 'TCP://PARTNERHOST5.COM:7022'
GO
7. On the principal server, set the witness (which is on WITNESSHOST4).

ALTER DATABASE AdventureWorks
SET WITNESS =
'TCP://WITNESSHOST4.COM:7022'
GO


Reference : http://msdn.microsoft.com/

Switch Recovery Model with T-SQL

Full recovery model

alter database [databasename] set recovery full

Bulk-logged recovery model

alter database [databasename] set recovery bulk_logged

Simple recovery model

alter database [databasename] set recovery simple

Backup Database with T-SQL to Backup device

Step by Step

1. Create Backup Device
exec sp_addumpdevice 'disk','adventureworks2008backup','d:\backupdb\aw2008.bak'

2. verify backup device
select * from sysdevices

3. Remove Backup device
sp_dropdevice @logicalname = 'adventureworks2008backup'

4. backup database to device
backup database adventureworks to adventureworks2008backup with init, stats = 10

5. Verify Backup

restore verifyonly from adventureworks2008backup

Wednesday, April 21, 2010

Add user with T-SQL

Syntax

sp_adduser [ @loginame = ] 'login'
[ , [ @name_in_db = ] 'user' ]
[ , [ @grpname = ] 'role' ]

Sample

EXEC sp_adduser 'Vidur', 'Vidur', 'Recruiting'
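Note that sp_adduser is kept for backward compatibility; from SQL Server 2005 onward the preferred equivalent is CREATE USER plus sp_addrolemember, using the same login, user, and role names as the sample above:

```sql
-- Map the Vidur login to a database user, then add it to the Recruiting role.
CREATE USER Vidur FOR LOGIN Vidur;
EXEC sp_addrolemember 'Recruiting', 'Vidur';
```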