The information in this article applies to:
SUMMARY
SQL Server can provide very high performance with relatively little
configuration tuning. For example, Microsoft's IS department has a SQL
Server that supports over 4,000 concurrent users. Other than obvious
settings such as memory, locks, and user connections, it uses mostly
default configuration settings. Yet it produces very high performance
running on an industry standard quad-processor computer. This high level of
performance was facilitated by good application and database design, not by
extensive configuration tuning.
Q110352 : Optimizing Microsoft SQL Server Performance

When addressing a performance problem, the degree of improvement available from configuration tuning is typically modest, unless the system is currently misconfigured. Because SQL Server 7.0 uses automatic configuration tuning, it is extremely rare that configuration settings (especially advanced settings) need changing on 7.0. They should generally not be changed without overwhelming reason, and never without careful, methodical testing to verify both the need for the change and the benefit once changed. If set improperly, some settings can destabilize the server or make it behave erratically. Except for obvious adjustments such as memory, locks, and user connections, give careful thought before adjusting other settings, especially the advanced settings. Do not blindly adjust them to a given value solely on the recommendation of a magazine article, white paper, or consultant if the new settings conflict with recommendations from Microsoft Technical Support.

MORE INFORMATION
Years of support experience with hundreds of sites indicates that improper configuration settings can have results ranging from neutral to highly negative. In particular, the settings below should not be adjusted from their defaults without a specific, overwhelming reason. They should also not be adjusted without rigorous, methodical performance testing both before and after the change to assess the degree of improvement. This is especially so for version 7.0; many actual support cases show that 7.0 is capable of extremely high performance without manual configuration tuning.

   priority boost
   set working set size
   smp concurrency (6.5 only)
   max worker threads
   procedure cache (6.5 only)
   affinity mask (7.0 only)
   lightweight pooling (7.0 only)

Other settings, such as tempdb in RAM (6.5 only), free buffers (6.5 only), the read ahead parameters (6.5 only), and max async I/O, should also not be casually adjusted without a good understanding of their impact, or without objective performance tests to verify the benefit.
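As a concrete illustration (a sketch, not a tuning recommendation), the current values of these settings can be inspected with sp_configure before any change is even contemplated. The statements below assume sysadmin rights:

```sql
-- Expose the advanced options so they appear in the listing.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- With no arguments, sp_configure lists each setting's minimum,
-- maximum, configured, and currently running values.
EXEC sp_configure
GO
```

Comparing the "config_value" and "run_value" columns before and after a change is the minimum verification step the article calls for.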
The remainder of this article discusses each of these parameters and gives some considerations in using them.

affinity mask (7.0 only):
This setting controls how firmly a thread is bound to a particular CPU. By default, Windows NT uses "soft" affinity, which attempts to re-schedule a thread on the CPU where it last executed. However, if this is not possible, the thread may run on a different CPU. The affinity mask setting allows manual intervention and rigidly binds a thread to a specified CPU, via the documented Win32 API SetThreadAffinityMask(). In theory, this reduces reloading of CPU caches and pipelines, thus helping performance. In actual practice, it only rarely helps performance, and often hurts it.

A secondary reason for using affinity mask is to restrict SQL Server to a subset of the available CPUs, to allow other competing services better CPU access. This should generally not be necessary, because SQL Server 7.0 runs at normal priority (approximately 7 on the Windows NT priority scale of 1 to 31). The Windows NT thread scheduler dynamically adjusts the priorities of all competing threads to ensure they have a fair chance at all available CPUs. Affinity mask should not be adjusted except under very unusual conditions, and rigorous, methodical testing should be done before and after the change to verify the need for and degree of improvement.

lightweight pooling (7.0 only):
By default, SQL Server 7.0 uses a thread per active SPID, or user process. These threads work in a pooled configuration to keep the number of threads manageable. The advanced configuration option "lightweight pooling" uses Windows NT "fiber" support to essentially run several user processes within a single thread. In theory, this can reduce thread context switches and help performance. Actual production experience shows it is not necessary under most conditions. Only under narrow conditions (typically a busy multiprocessor system experiencing a very high rate of context switching) is lightweight pooling even *potentially* beneficial, and whether it is actually beneficial must be determined through careful, controlled testing.
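For illustration only (these settings should normally be left at their defaults): affinity mask is a bitmask in which bit n corresponds to CPU n, so a value of 3 (binary 0011) on a quad-processor computer would restrict SQL Server to CPUs 0 and 1. A hypothetical change, sketched in T-SQL:

```sql
-- Hypothetical example only; both are advanced options, so
-- 'show advanced options' must already be set to 1.
EXEC sp_configure 'affinity mask', 3        -- binary 0011 = CPUs 0 and 1
EXEC sp_configure 'lightweight pooling', 1  -- schedule on NT fibers
RECONFIGURE
GO
-- These take effect only after the server is restarted.
```

The same rigorous before-and-after testing described above applies to any such change.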
set working set size:
This setting defaults to 0, which allows the Windows NT Server virtual memory manager to determine the working set size of SQL Server. Under most conditions, this is the correct setting. When SQL Server is installed, the setup program automatically selects "maximize throughput for network applications" in the Network Control Panel under the server service properties. This instructs the Windows NT Server virtual memory manager to do very little working set trimming, thus minimally interfering with SQL Server's working set. For this reason, it is not usually necessary to override the "set working set size" default. However, if you set it to 1, upon startup SQL Server uses the Win32 API SetProcessWorkingSetSize() to fix the working set more rigidly. If you then overcommit SQL Server memory, the operating system is less able to compensate, and you may more readily see memory-related operating system errors. For more information, see the following article in the Microsoft Knowledge Base:
Q110983 : Recommended SQL Server for NT Memory Configurations

smp concurrency (6.5 only):
This setting defaults to 0, which allows SQL Server (at startup time) to automatically detect the number of processors and use n-1 of them. On an SMP computer, it also raises the process priority of SQL Server to 15. The vast majority of the time, you should use the default setting. Overriding this setting to -1 removes any restriction on how many processors SQL Server can use, which may starve the operating system of CPU time under certain conditions. Typical symptoms include network errors or a console that has stopped responding. The NET VIEW command may fail with a "network session was canceled" error. Other processes running on the computer may stop responding due to CPU starvation; this may include other services, SQL Executive, backup programs, batch jobs, and so on.
On Compaq Proliant servers, the Automatic Server Recovery (ASR) watchdog driver may become CPU starved, assume the Windows NT Server operating system has failed, and restart the server to recover.

The default "smp concurrency" setting of 0 does not "waste" a processor, because SQL Server calls the operating system for all network, I/O, and memory management functions. When the operating system services these frequent calls, it uses CPU resources, and the default setting ensures that the operating system gets the CPU time it needs to service SQL Server requests quickly. Hence, all processors on a busy SQL Server system tend to be used even when the default "smp concurrency" setting of 0 is used. Setting "smp concurrency" to 1, 2, and so on, should likewise not be necessary.

This may raise the question of when an "smp concurrency" setting of -1 should be used, and why the setting is provided. Under a pure online transaction processing (OLTP) load, with strictly controlled transaction path lengths, where no ad hoc or decision support queries are allowed, and in an environment where the computer is absolutely dedicated to SQL Server (the console is not used, nor file sharing, nor SQL Executive, and so on), this setting can provide some performance improvement with minimal side effects. However, if the characteristics of the query stream feeding the server ever change from this pure state, you may observe erratic behavior. For example, a system that appeared to run properly for a period of time may suddenly stop responding. Debugging can be complicated, because the problem may have the appearance of an operating system problem.
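If either of these 6.5-era settings has been changed and the symptoms above appear, the same sp_configure pattern restores the recommended defaults (shown purely as a sketch):

```sql
-- 6.5 sketch: return both settings to their documented defaults.
EXEC sp_configure 'set working set size', 0  -- let NT manage the working set
EXEC sp_configure 'SMP concurrency', 0       -- auto-detect; use n-1 CPUs
RECONFIGURE
GO
-- Both take effect only after the server is restarted.
```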
For more information, see the following article in the Microsoft Knowledge Base:
Q111405 : SQL Server and Windows NT Thread Scheduling

priority boost:
This setting defaults to 0, which causes SQL Server to run at process priority 7 on a uniprocessor computer and (usually) 15 on an SMP computer (6.5 only; 7.0 runs at a base priority of 7 on both uniprocessor and SMP computers). Use the default setting in most circumstances. Setting it to 1 boosts the SQL Server process priority to 15 on a uniprocessor computer (6.5 and 7.0), and to 24 on an SMP computer (6.5 only). This can have effects similar to setting "smp concurrency" to -1. Actual support experience shows that enabling priority boost is not necessary for good performance in most situations. Turning priority boost on can interfere with smooth server functioning under some conditions, and it should not be used except under very unusual circumstances, such as during investigation of a performance issue by Microsoft Product Support Services. It should not be used just because a white paper or consultant so advises.

max worker threads:
This setting defaults to 255, which allows up to 255 worker threads to be created. Use the default setting of 255 in most cases. This does not mean that only 255 user connections can be established: a system can have thousands of user connections, essentially multiplexed down to 255 worker threads, and each user will generally perceive no delay. In such a case, only 255 queries can be running "concurrently," but this is in turn further multiplexed down to the number of available CPUs, so the concurrency is only perceived, no matter how many worker threads are configured. As each new user connection is established, a new worker thread is created, up to the worker thread limit. After a period of inactivity, a worker thread automatically times out and closes. This keeps overhead low, because each thread consumes resources.
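Both settings above follow the same sp_configure pattern; for example, returning them to the documented defaults (again, only a sketch, not a recommendation to change a working server):

```sql
-- Sketch: the documented defaults for these two settings.
EXEC sp_configure 'priority boost', 0        -- run at normal process priority
EXEC sp_configure 'max worker threads', 255  -- default thread pool ceiling
RECONFIGURE
GO
```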
Configuring high numbers of worker threads is usually counterproductive and slows performance, due to scheduling and resource overhead.

procedure cache (6.5 only):
On SQL Server 6.5, this setting defaults to 30, which means that 30 percent of SQL Server's cache space is reserved for the procedure cache. On a system with a great deal of memory (say, 256 MB or more), this may possibly be reduced somewhat to allow more memory for the buffer cache. However, do not blindly reduce this setting to a very low number (such as 5 percent), even if the system uses few stored procedures. This is because the procedure cache is a general-purpose memory pool used for a variety of internal server allocation functions, not just for caching stored procedures. Without a sufficient amount of memory in this pool, you may receive errors or observe other erratic behavior. On SQL Server 6.5, you can monitor the use of the procedure cache with Performance Monitor and decide whether to reduce the procedure cache setting. In many circumstances, the incremental amount of buffer cache freed by reducing the procedure cache setting does not appreciably affect the buffer cache hit ratio, which is usually relatively flat across small (percentage-wise) changes in cache size.
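On 6.5, the current procedure cache usage can also be inspected from Transact-SQL before deciding whether a reduction is justified. A sketch, using a hypothetical target of 20 percent:

```sql
-- 6.5 sketch: inspect procedure cache usage, then reduce the
-- reservation only if monitoring shows the default 30 percent
-- is clearly excessive.
DBCC PROCCACHE                           -- reports cache size and usage
EXEC sp_configure 'procedure cache', 20  -- hypothetical new percentage
RECONFIGURE
GO
-- Takes effect only after the server is restarted.
```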
Keywords: kbenv kbusage SSrvGen
Last Reviewed: November 16, 1999
© 2000 Microsoft Corporation. All rights reserved.