SQL Server 2000 Performance Guidelines - Deployment Considerations
From Guidance Share
- J.D. Meier, Srinath Vasireddy, Ashish Babbar, Sharon Bjeletich and Alex Mackman
Use Default Server Configuration Settings for Most Applications
SQL Server uses optimal configuration settings when it is newly installed. Changing configuration settings can actually decrease performance, except in certain high-load situations. Thoroughly test any configuration change before you make it, to ensure that the change really improves performance or scalability. One exception is the memory setting, which is discussed later in this section.
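Before you change anything, review the current server-wide settings with the sp_configure system stored procedure. A minimal sketch:

```sql
-- Show advanced options so that all settings are listed (SQL Server 2000).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- List every configuration option with its configured and running values.
EXEC sp_configure
```

Comparing the config_value and run_value columns in the output tells you whether a changed setting has actually taken effect.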
To find out if your server settings comply with common best practices, you can download the Microsoft SQL Server 2000 Best Practices Analyzer tool at http://www.microsoft.com/downloads/details.aspx?FamilyId=B352EB1F-D3CA-44EE-893E-9E07339C1F22&displaylang=en.
- For more information about SQL Server configuration settings, see Knowledge Base article 319942, "HOW TO: Determine Proper SQL Server Configuration Settings," at http://support.microsoft.com/default.aspx?scid=kb;en-us;319942.
Locate Logs and the Tempdb Database on Separate Devices from the Data
You can improve performance by locating your database logs and the tempdb database on physical disk arrays or devices that are separate from the main data device. Because data modifications are written to the log and to the database, and to the tempdb database if temp tables are used, having three different locations on different disk controllers provides significant benefits.
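As a sketch, the separation can be established when the database is created; the drive letters below are hypothetical placeholders for separate physical arrays:

```sql
-- Hypothetical layout: D: is the data array, E: is a separate log array.
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Data,
      FILENAME = 'D:\SQLData\Sales_Data.mdf',
      SIZE = 500MB )
LOG ON
    ( NAME = Sales_Log,
      FILENAME = 'E:\SQLLogs\Sales_Log.ldf',
      SIZE = 200MB )
```

The tempdb database can be relocated in a similar way with ALTER DATABASE tempdb MODIFY FILE; the new location takes effect after the server restarts.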
Provide Separate Devices for Heavily Accessed Tables and Indexes
If you have an I/O bottleneck on specific tables or indexes, try putting the tables or indexes in their own file group on a separate physical disk array or device to alleviate the performance bottleneck.
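For example, assuming a hypothetical Sales database and a drive F: on its own array, a hot table can be placed on a dedicated filegroup like this:

```sql
-- Add a filegroup backed by a file on a separate physical array (F:).
ALTER DATABASE Sales ADD FILEGROUP HotTables

ALTER DATABASE Sales ADD FILE
    ( NAME = Sales_Hot,
      FILENAME = 'F:\SQLData\Sales_Hot.ndf',
      SIZE = 200MB )
TO FILEGROUP HotTables

-- Create the heavily accessed table on the new filegroup.
CREATE TABLE dbo.OrderDetail
    ( OrderID   int NOT NULL,
      ProductID int NOT NULL,
      Quantity  int NOT NULL )
ON HotTables
```

An existing nonclustered index can likewise be moved by re-creating it with the ON filegroup clause.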
Use the Appropriate RAID Configuration
For a database server, you should choose hardware-level RAID rather than software RAID. Software RAID is usually cheaper, but it consumes CPU cycles; if CPU utilization is already a bottleneck on your server, software RAID makes the problem worse and SQL Server may not perform optimally.
Two core RAID levels are of value for a database server:
- Striping with parity (RAID 5)
- Striped mirror (RAID 0+1)
When you choose a RAID level, you have to consider your cost, performance, and availability requirements. RAID 5 is less expensive than RAID 0+1, and RAID 5 performs better for read operations than write operations. RAID 0+1 is more expensive and performs better for write-intensive operations and for accessing the tempdb database.
Use Multiple Disk Controllers
A disk controller has a limit on its throughput. Associating too many disks with a single disk controller can lead to I/O bottlenecks.
- For more information about how to determine the number of disks per disk controller, see "Microsoft SQL Server 7.0 Performance Tuning Guide" on MSDN at http://msdn.microsoft.com/library/en-us/dnsql7/html/msdn_sql7perftune.asp?frame=true#sql7perftune_diskperform.
Pre-Grow Databases and Logs to Avoid Automatic Growth and Fragmentation Performance Impact
Pre-grow your databases and logs to the sizes that you expect them to reach, so that automatic growth occurs rarely, if at all. If you have enabled automatic growth, ensure that you are using the proper automatic growth option. You can grow database size by percent or by a fixed size. Avoid frequent automatic growth events, because they fragment files and block activity while the file grows. If you import large amounts of data of a roughly fixed size on a weekly basis, grow the database by a fixed size that accommodates the new data.
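As a sketch, with hypothetical file names and sizes, you can pre-grow the files and switch to a fixed growth increment in one step:

```sql
-- Hypothetical sizes: pre-grow data and log files and use a fixed
-- growth increment rather than a percentage.
ALTER DATABASE Sales
MODIFY FILE ( NAME = Sales_Data, SIZE = 2000MB, FILEGROWTH = 100MB )

ALTER DATABASE Sales
MODIFY FILE ( NAME = Sales_Log, SIZE = 500MB, FILEGROWTH = 50MB )
```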
Maximize Available Memory
Increasing memory is one of the best ways to improve SQL Server performance, because more data can be cached in memory. Enable Address Windowing Extensions (AWE) for higher RAM utilization by SQL Server. By default, the operating system limits each process to 2 GB of virtual address space; enable the /3GB switch in the Boot.ini file to allow a process to use 3 GB.
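AWE is enabled through sp_configure; note that on SQL Server 2000 the service account also needs the Lock Pages in Memory privilege, and the setting takes effect after a restart:

```sql
-- Enable AWE so SQL Server 2000 can address memory beyond 4 GB.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1
RECONFIGURE
-- Restart the SQL Server service for the change to take effect.
```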
Use performance counters to decide the amount of memory that you need. Some performance counters that you can use to measure your need for memory are listed below:
- SQLServer:Buffer Manager: Buffer cache hit ratio. This counter indicates the proportion of data retrieved from the memory cache rather than from disk. The value should be around 90 percent or higher; a lower value indicates that SQL Server requires more memory.
- Memory: Available Bytes. This counter shows the amount of RAM that is available. Low memory availability is a problem if the counter shows that 10 megabytes (MB) of memory or less is available.
- SQLServer:Buffer Manager: Free pages. This counter should not have a sustained value of 4 or less for more than two seconds.

When there are no free pages in the buffer pool, the memory requirements of your SQL Server may have become so intense that the lazy writer or the checkpoint process is unable to keep up. Typical signs of buffer pool pressure are a higher than normal number of lazy writes per second or a higher number of checkpoint pages per second, as SQL Server attempts to empty the procedure cache and the data cache to get enough free memory to service incoming query plan executions. This is an effective detection mechanism that indicates that your procedure or data cache is starved for memory. Either increase the RAM that is allocated to SQL Server, or locate the large number of hashes or sorts that may be occurring.

The memory configuration option is the one server configuration setting that you should evaluate, and possibly change, if processes other than SQL Server are running on the server. If so, change the memory option to Fixed, and leave enough memory for the operating system and for the other processes that might be running.
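Setting a fixed memory allocation from Transact-SQL amounts to giving the minimum and maximum server memory options the same value. The figures below are hypothetical, sized for a 4 GB server that reserves roughly 1 GB for the operating system and other processes:

```sql
-- Hypothetical values: fix SQL Server's memory at 3072 MB on a 4 GB server,
-- leaving about 1 GB for the operating system and other processes.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'min server memory', 3072
EXEC sp_configure 'max server memory', 3072
RECONFIGURE
```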
- For more information about SQL Server memory requirements, see "Inside SQL Server 2000's Memory Management Facilities" at http://msdn.microsoft.com/data/default.aspx?pull=/library/en-us/dnsqldev/html/sqldev_01262004.asp.
- Also, see Knowledge Base article 274750, "HOW TO: Configure memory for more than 2 GB in SQL Server," at http://support.microsoft.com/default.aspx?scid=kb;en-us;274750.
Manage Index Fragmentation
As data is modified in a system, pages can split, and data can become fragmented, or physically scattered, on the hard disk. Use the DBCC SHOWCONTIG command to see the density and the degree of fragmentation of a table's indexes.
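A minimal sketch, using a hypothetical table name:

```sql
-- Report fragmentation for every index on a hypothetical table.
-- Watch the Scan Density (ideally close to 100%) and the
-- Logical Scan Fragmentation values in the output.
DBCC SHOWCONTIG ('dbo.OrderDetail') WITH ALL_INDEXES
```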
There are several ways to resolve index fragmentation:
- Drop and recreate the index.
- Use the DBCC DBREINDEX command.
- Use the DBCC INDEXDEFRAG command.
The first two ways hold locks against the system. Therefore, you should only drop and then recreate an index or use the DBCC DBREINDEX command when there are no users on the system.
You can use DBCC INDEXDEFRAG when your system is online because it does not lock resources.
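The two DBCC options above can be sketched as follows; the table and index names are hypothetical:

```sql
-- Offline option (takes locks): rebuild all indexes on the table,
-- keeping the original fill factor ('' = all indexes, 0 = original fill factor).
DBCC DBREINDEX ('dbo.OrderDetail', '', 0)

-- Online option: defragment a single index in the Sales database
-- without holding long-term locks.
DBCC INDEXDEFRAG (Sales, 'dbo.OrderDetail', IX_OrderDetail_ProductID)
```

Note that DBCC INDEXDEFRAG compacts and reorders the existing leaf pages in place, so a heavily fragmented index may still benefit from a full rebuild during a maintenance window.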
For more information about the DBCC SHOWCONTIG, DBCC DBREINDEX, and DBCC INDEXDEFRAG commands, see the following "Transact-SQL Reference" topics:
- "DBCC SHOWCONTIG" at http://msdn.microsoft.com/library/en-us/tsqlref/ts_dbcc_46cn.asp.
- "DBCC DBREINDEX" at http://msdn.microsoft.com/library/en-us/tsqlref/ts_dbcc_94mw.asp.
- "DBCC INDEXDEFRAG" at http://msdn.microsoft.com/library/en-us/tsqlref/ts_dbcc_30o9.asp.
Keep Database Administrator Tasks in Mind
Do not forget to take database administrator tasks into account when you think about performance. For example, consider the impact that database backups, statistic updates, DBCC checks, and index rebuilds have on your systems. Include these operations in your testing and performance analysis.