SQL Server 2008 : Partition Level Locking
Written by Administrator   
Tuesday, 26 March 2013 10:35

SQL Server 2008 – Partition Level Locking

http://www.sqlservergeeks.com/articles/sql-server-bi/73/sql-server-2008-%E2%80%93-partition-level-locking

 

Table Partitioning and LOCK_ESCALATION in SQL Server 2008

http://sqlserverpedia.com/blog/sql-server-bloggers/table-partitioning-and-lock_escalation-in-sql-server-2008-2/

 

Enabling Partition Level Locking in SQL Server 2008

http://sqlcat.com/sqlcat/b/msdnmirror/archive/2010/03/03/enabling-partition-level-locking-in-sql-server-2008.aspx
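The short version across all three articles: in SQL Server 2008, lock escalation can be directed to the partition (HoBT) level on a per-table basis via the LOCK_ESCALATION option. A minimal sketch, assuming a partitioned table dbo.MyPartitionedTable (the name is illustrative):

ALTER TABLE dbo.MyPartitionedTable SET (LOCK_ESCALATION = AUTO)  -- AUTO escalates to the partition level for partitioned tables
GO
SELECT name, lock_escalation_desc FROM sys.tables WHERE name = N'MyPartitionedTable'  -- verify the setting
GO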

 

Last Updated on Tuesday, 26 March 2013 13:22
 
Implementing Login Statistics Using SQL Server 2005 Trace Feature
Written by Administrator   
Wednesday, 28 November 2012 15:18

A great article on how to implement login statistics on SQL Server 2005:

http://blogs.msdn.com/b/sqlprogrammability/archive/2006/08/16/703079.aspx
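A minimal sketch of the approach, not the linked article's exact script: a server-side trace that captures Audit Login events (event id 14). The output path is illustrative:

DECLARE @TraceID int
DECLARE @maxfilesize bigint
DECLARE @on bit
SET @maxfilesize = 50
SET @on = 1
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\LoginStats', @maxfilesize, NULL  -- 2 = file rollover
EXEC sp_trace_setevent @TraceID, 14, 11, @on  -- Audit Login: LoginName
EXEC sp_trace_setevent @TraceID, 14, 14, @on  -- Audit Login: StartTime
EXEC sp_trace_setevent @TraceID, 14, 10, @on  -- Audit Login: ApplicationName
EXEC sp_trace_setstatus @TraceID, 1           -- start the trace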

Last Updated on Wednesday, 28 November 2012 15:21
 
SQL Server 2008 and R2 Cluster Best Practices
Written by Administrator   
Monday, 04 June 2012 14:40

SQL Server 2008 and R2 Cluster Best Practices

http://blogs.msdn.com/b/pamitt/archive/2010/11/06/sql-server-clustering-best-practices.aspx

Before starting SQL Server installation

http://msdn.microsoft.com/en-us/library/ms189910.aspx

1. Configure MSDTC as per best practices

2. Pre-stage MSDTC and SQL Server account objects in Active Directory prior to installing an MSDTC or SQL Server cluster

3. Benchmark Disk Performance

4. Use Slip Streaming or install the latest support files prior to starting the SQL Server installation

5. Ensure that the account used to install SQL Server cluster has the appropriate permissions

6. Ensure that the SQL Server service account is not a member of the local Administrators group

During SQL Server Installation

1. Use service SIDs (security identifiers) instead of domain groups

After SQL Server Installation

1. Implement antivirus exclusion best practices

2. Run SQL Server 2008 BPA tool to check for other best practices

3. Configure SQL Server memory settings appropriately on a multi-instance cluster (see the sketch after this list)

4. Consider adding a third passive node to the 2-node cluster
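For item 3, a hedged sketch of capping memory per instance so that two instances cannot starve each other after a failover (the 16 GB value is illustrative, not a recommendation):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 16384  -- run on each instance with an appropriate value
RECONFIGURE
GO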

If you are using Mount Points

1. Do not use the mount point host drive (the master drive) to store any data/log/backup files

SQL Server TempDB performance tuning
Written by Administrator   
Tuesday, 27 March 2012 13:44

Paul S. Randal

A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor

This is a myth I hear over and over and over...

Myth #12: tempdb should always have one data file per processor core.

FALSE

Sigh. This is one of the most frustrating myths, because there's so much 'official' information from Microsoft, and so many blog posts, that perpetuate it.

One of the biggest confusion points is that the SQL CAT team recommends 1-to-1, but they're coming from a purely scaling perspective, not from an overall perf perspective, and they're dealing with big customers with top-notch servers and IO subsystems. Most people are not.

There's only one tempdb per instance, and lots of things use it, so it's often a performance bottleneck. You guys know that already. But when does a performance problem merit creating extra tempdb data files?

When you see PAGELATCH waits on tempdb, you've got contention for in-memory allocation bitmaps. When you see PAGEIOLATCH waits on tempdb, you've got contention at the I/O subsystem level. You can think of a latch as kind of like a traditional lock, but much lighter weight, much more transitory, and used by the Storage Engine internally to control access to internal structures (like in-memory copies of database pages).

Fellow MVP Glenn Berry (twitter|blog) has a blog post with some neat scripts using the sys.dm_os_wait_stats DMV - the first one will show you what kind of wait is most prevalent on your server. If you see that it's PAGELATCH waits, you can use this script from newly-minted MCM and Microsoft DBA Robert Davis (twitter|blog). It uses the sys.dm_os_waiting_tasks DMV to break apart the wait resource and let you know what's being waited on in tempdb.
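In the spirit of the scripts linked above, two hedged sketches (benign waits are not filtered out, for brevity): the top waits on the instance, and any tasks currently waiting on tempdb pages (database_id 2):

SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC                         -- what the server waits on most

SELECT session_id, wait_type, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGE%LATCH_%'
AND resource_description LIKE N'2:%'               -- dbid:file:page, dbid 2 = tempdb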

If you're seeing PAGELATCH waits on tempdb, then you can mitigate it using trace flag 1118 (fully documented in KB 328551) and creating extra tempdb data files. I wrote a long blog post debunking some myths around this trace flag and why it's still potentially required in SQL 2005 and 2008 - see Misconceptions around TF 1118.
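A hedged sketch of both mitigations; use -T1118 as a startup parameter to make the trace flag permanent, and size the new file the same as the existing ones (sizes and paths here are illustrative):

DBCC TRACEON (1118, -1)  -- allocate full extents instead of mixed extents, instance-wide
GO
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB)
GO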

On SQL Server 2000, the recommendation was one tempdb data file for each processor core. On 2005 and 2008, that recommendation persists, but because of some optimizations (see my blog post) you may not need one-to-one - you may be ok with the number of tempdb data files equal to 1/4 to 1/2 the number of processor cores.

Now this is all one big-ass generalization. I heard just last week of a customer whose tempdb workload was so high that they had to use 64 tempdb data files on a system with 32 processor cores - and that was the only way for them to alleviate contention. Does this mean it's a best practice? Absolutely not!

So, why is one-to-one not always a good idea? Too many tempdb data files can cause performance problems for another reason. If you have a workload that uses query plan operators that require lots of memory (e.g. sorts), the odds are that there won't be enough memory on the server to accommodate the operation, and it will spill out to tempdb. If there are too many tempdb data files, then the writing out of the temporarily-spilled data can be really slowed down while the allocation system does round-robin allocation. The same thing can happen with very large temp tables in tempdb too.

Why would round-robin allocation cause things to slow down for memory-spills to tempdb with a large number of files? A couple of possibilities:

 

  • Round-robin allocation is per filegroup, and you can only have one filegroup in tempdb. With 16, 32, or more files in tempdb, and very large allocations happening from just a few threads, the extra synchronization and work necessary to do the round-robin allocation (looking at the allocation weightings for each file and deciding whether to allocate or decrement the weighting, plus quite frequently recalculating the weightings for all files - every 8192 allocations) starts to add up and become noticeable. It's very different from lots of threads doing lots of small allocations. It's also very different from allocating from a single-file filegroup - which is optimized (obviously) to not do round-robin.
  • Your tempdb data files are not the same size and so the auto-grow is only growing a single file (the algorithm is unfortunately broken), leading to skewed usage and an IO hotspot (see the sketch after this list).
  • Having too many files can lead to essentially random IO patterns when the buffer pool needs to free up space through the lazywriter (tempdb checkpoints don't flush data pages) for systems with not very large buffer pools but *lots* of tempdb data. If the IO subsystem can't handle the load across multiple files, it will start to slow down.
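For the second bullet on sizing, the usual fix is to pre-size all tempdb data files identically rather than relying on auto-grow; a hedged sketch with illustrative names and sizes:

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev',  SIZE = 4096MB)
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev2', SIZE = 4096MB)  -- repeat for each data file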

 

I really need to do a benchmarking blog post to show what I mean - but in the meantime, I've heard this from multiple customers who've created large numbers of tempdb files, and I know this from how the code works (my dev team owned the allocation code).

So you're damned if you do and damned if you don't, right? Potentially, yes - this creates a bit of a conundrum: how many tempdb data files should you have? I can't answer that for you, except to give you these guidelines based on talking to many clients and conference/class attendees. Only create multiple tempdb data files to alleviate contention that you're actually experiencing, don't push to too many files unless you really need to, and be aware of the potential downfalls if you do. You may have to strike a careful balance of scalability vs performance to avoid helping one workload and hindering another.

Hope this helps.

PS To address a comment that came in - no, the extra files don't *have* to be on separate storage. If all you're seeing is PAGELATCH contention, separate storage makes no difference as the contention is on in-memory pages. For PAGEIOLATCH waits, you most likely will need to use separate storage, but not necessarily - it may be that you need to move tempdb itself to different storage from other databases rather than just adding more tempdb data files. Analysis of what's stored where will be necessary to pick the correct path to take.
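If the analysis points at moving tempdb rather than adding files, a hedged sketch (the new paths are illustrative; the change takes effect at the next instance restart):

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', FILENAME = N'T:\TempDB\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = N'templog', FILENAME = N'L:\TempDB\templog.ldf')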

 

 

Released: Who is Active v11.11

 

http://sqlblog.com/tags/Who+is+Active/default.aspx
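Typical usage, assuming the procedure has been installed in master:

EXEC sp_WhoIsActive  -- one row per active session, including wait and tempdb allocation info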

 

A Small Collection of I/O Specific DMV Queries

http://sqlserverperformance.wordpress.com/2009/12/30/a-small-collection-of-io-specific-dmv-queries/
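A hedged sketch in the spirit of the linked collection: cumulative per-file I/O stalls from sys.dm_io_virtual_file_stats (counters reset at instance startup):

SELECT DB_NAME(vfs.database_id) AS database_name, vfs.file_id,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC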

Last Updated on Tuesday, 27 March 2012 21:23
 
Audit Database DDL Changes with DDL Trigger
Written by Administrator   
Friday, 24 February 2012 15:41

-- CREATE a Table for AUDIT Data

/****** Object:  Table [AUDIT].[DDL_Watch]   ******/

-- The AUDIT schema must exist before the table can be created
IF SCHEMA_ID(N'AUDIT') IS NULL
EXEC (N'CREATE SCHEMA [AUDIT]')
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [AUDIT].[DDL_Watch](
[PostTime] [datetime] NULL,     -- when the DDL event fired
[DB_User] [nvarchar](100) NULL, -- login that issued the statement
[Event] [nvarchar](100) NULL,   -- event type, e.g. CREATE_TABLE
[TSQL] [varchar](2000) NULL     -- the statement text
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO

 

-- CREATE a DDL Trigger on Database

IF  EXISTS (SELECT * FROM sys.triggers WHERE parent_class_desc = 'DATABASE' AND name = N'DDL_WATCH')
DISABLE TRIGGER [DDL_WATCH] ON DATABASE
GO

/****** Object:  DdlTrigger [DDL_WATCH]    ******/

IF  EXISTS (SELECT * FROM sys.triggers WHERE parent_class_desc = 'DATABASE' AND name = N'DDL_WATCH')
DROP TRIGGER [DDL_WATCH] ON DATABASE
GO

/****** Object:  DdlTrigger [DDL_WATCH]    ******/

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [DDL_WATCH]
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
-- EVENTDATA() returns the DDL event as XML; shred the fields we want to keep
INSERT audit.DDL_Watch
(PostTime, DB_User, Event, TSQL)
VALUES
 ( GETDATE(),
 EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(100)'),
 EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
 EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)')
 )
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
DISABLE TRIGGER [DDL_WATCH] ON DATABASE
GO
ENABLE TRIGGER [DDL_WATCH] ON DATABASE
GO
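
A quick smoke test of the trigger, with a throwaway table name:

CREATE TABLE dbo.Trigger_Smoke_Test (id int)
GO
DROP TABLE dbo.Trigger_Smoke_Test
GO
SELECT TOP (10) PostTime, DB_User, Event, TSQL
FROM [AUDIT].[DDL_Watch]
ORDER BY PostTime DESC  -- should show the CREATE_TABLE and DROP_TABLE events
GO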

 

REFERENCES:

SQL Server Event Data schema (describes the structure of the Event Data XML format):

http://schemas.microsoft.com/sqlserver/2006/11/eventdata/events.xsd

 

 

Microsoft SQL Server XML Schemas (links to directories containing Microsoft SQL Server XML schemas):

http://schemas.microsoft.com/sqlserver/

 



Last Updated on Monday, 05 November 2012 15:07
 