Microsoft SQL Server is a robust, full-featured enterprise relational database system. As such, SQL Server requires dedicated system resources in order to function optimally. If the necessary system resources are not made available to SQL Server, poor performance is likely. Furthermore, if SQL Server's resource requirements are not satisfied, contention for system resources between SQL Server and other Microsoft BackOffice applications will be commonplace.
Accordingly, the following information briefly describes how SQL Server interacts with each of the four system resource areas, to help you address resource interaction and contention issues between SQL Server and other Microsoft BackOffice components.
In trying to determine which initial CPU architecture is right for your particular needs, you are attempting to estimate the level of CPU-bound work that will occur on the hardware platform. As far as Microsoft SQL Server is concerned, CPU-bound work can occur when a large cache is available and is being used optimally, or when a small cache is available with a great deal of disk I/O activity beyond that generated by transaction log writes. The types of questions that must be answered at this point are as follows:
Will the system be dedicated to Microsoft SQL Server?
How many users or processes will access SQL Server?
What will the level of transaction throughput be?
Is the SQL Server a departmental system or an enterprise system?
Will there be a large number of concurrent users accessing the SQL Server?
Will there be a great deal of aggregation occurring on the SQL Server?
The answers to these questions may already have come from the system or application requirements. If not, you should be able to make reasonable estimates. The bottom line is to purchase the most powerful CPU architecture you can justify, based upon your estimates, user requirements, and the logical database design. Experience suggests that the minimum CPU configuration consist of at least a single 80486/50 processor.
Determining the optimal memory configuration for a Microsoft SQL Server solution is crucial to achieving stellar performance. SQL Server uses memory for its procedure cache, data and index page caching, static server overhead, and configurable overhead. SQL Server can use up to 2 GB of virtual memory, the maximum configurable value. In addition, do not forget that Windows NT Server and all of its associated services also require memory.
Windows NT Server provides each Win32® application with a virtual address space of 4 GB. The Windows NT Server Virtual Memory Manager (VMM) maps this address space to physical memory, which can be up to 4 GB in size depending upon the hardware platform. The Microsoft SQL Server application knows only about virtual addresses and thus cannot access physical memory directly; that access is controlled by the VMM. In addition, Windows NT Server allows for the creation of virtual address space that exceeds available physical memory. It is therefore possible to adversely affect SQL Server performance by allocating more virtual memory than there is physical memory available. Hence, the following table contains rule-of-thumb recommendations for different SQL Server memory configurations based upon available physical memory.
| Machine Memory (MB) | Microsoft SQL Server Memory (MB) |
| --- | --- |
| 16 | 4 |
| 24 | 6 |
| 32 | 16 |
| 48 | 28 |
| 64 | 40 |
| 128 | 100 |
| 256 | 216 |
| 512 | 464 |
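For an initial estimate, the table can be treated as a simple lookup. The following is a minimal sketch (illustrative only; the dictionary and function names are ours, not part of any Microsoft tool) that encodes the table:

```python
# Illustrative sketch: the rule-of-thumb table above encoded as a
# lookup. All names are ours; sizes are MB of physical memory.
RECOMMENDED_SQL_MEMORY_MB = {
    16: 4, 24: 6, 32: 16, 48: 28, 64: 40,
    128: 100, 256: 216, 512: 464,
}

def recommended_sql_memory(machine_mb):
    """Return the suggested SQL Server allocation for a machine size
    listed in the table; intermediate sizes should be estimated by
    interpolation and then tuned."""
    return RECOMMENDED_SQL_MEMORY_MB[machine_mb]

print(recommended_sql_memory(48))  # -> 28
```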
These memory configurations are for dedicated Microsoft SQL Server systems and should be adjusted appropriately if other activities, such as file and print sharing or application services, will run on the same Microsoft BackOffice platform as SQL Server. In most cases, however, a minimum physical memory configuration of 32 MB is recommended; such a configuration reserves at least 16 MB for Windows NT. Again, these recommendations are only guidelines for initial configuration estimates and will most likely require tuning. Nevertheless, you can make a more accurate estimate of SQL Server memory requirements based upon knowledge gained from user and application performance requirements.
To make a more accurate estimate of an optimal memory configuration, refer to the following table of configurable and static overhead memory requirements for SQL Server for Windows NT.
| Resource | Configurable | Default Value | Bytes per Resource | Space (MB) |
| --- | --- | --- | --- | --- |
| User Connections | Yes | 25 | 18,000 | 0.43 |
| Open Databases | Yes | 10 | 650 | 0.01 |
| Open Objects | Yes | 500 | 72 | 0.04 |
| Locks | Yes | 5,000 | 28 | 0.13 |
| Devices | No | 256 | 300 | 0.07 |
| Static Server Overhead | No | N/A | ≈2,000,000 | 2.0 |
| TOTAL Overhead | | | | 2.68 |
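The arithmetic behind the table is straightforward: multiply each resource count by its per-resource cost, add the static overhead, and convert to megabytes. A minimal sketch, assuming the bytes-per-resource figures above (all names are ours; note that the table's per-row rounding makes its printed total slightly different from the exact sum):

```python
# Minimal sketch of the overhead arithmetic behind the table above.
# Bytes-per-resource figures come from the table; all names are ours.
BYTES_PER_RESOURCE = {
    "user connections": 18_000,
    "open databases": 650,
    "open objects": 72,
    "locks": 28,
    "devices": 300,
}
STATIC_OVERHEAD_BYTES = 2_000_000  # approximate static server overhead

def total_overhead_mb(counts):
    """Sum per-resource costs plus static overhead, in MB
    (1 MB taken as 1,000,000 bytes, as in the worked example below)."""
    total = STATIC_OVERHEAD_BYTES + sum(
        n * BYTES_PER_RESOURCE[name] for name, n in counts.items())
    return total / 1_000_000

defaults = {"user connections": 25, "open databases": 10,
            "open objects": 500, "locks": 5_000, "devices": 256}
print(round(total_overhead_mb(defaults), 2))  # -> 2.71 (table shows 2.68 after per-row rounding)
```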
You can use this information to calculate a more exact memory configuration estimate with respect to actual memory usage. This is done by taking the calculated TOTAL Overhead above and applying it to the following formula:
Microsoft SQL Server Physical Memory - TOTAL Overhead = SQL Server Memory Cache
The SQL Server memory cache is the amount of memory that is dedicated to the procedure cache and the data cache.
The procedure cache is the portion of the SQL Server memory cache dedicated to caching stored procedures, triggers, views, rules, and defaults. If your system will take advantage of these objects, and stored procedures in particular will be used by many users, then this value should be sized in proportion to such requirements. These objects are kept in the procedure cache based upon the frequency of their use, so the most heavily used objects are accessed in cache rather than retrieved from disk. The system default is 20% of the available memory cache.
The data (or buffer) cache is the portion of the SQL Server memory cache dedicated to caching data and index pages. These pages are kept in the data cache based upon the frequency of their use. Therefore, you want the data cache to be large enough to accommodate the most heavily used data and index pages without having to read them from disk. The system default is 80% of the available memory cache.
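Once the memory cache is known, the split between the two caches is a simple percentage calculation. A minimal sketch, assuming the default 20/80 division described above (the function name is ours):

```python
# Minimal sketch of the default procedure/data cache split described
# above; the percentage is configurable, 20% being the system default.
def cache_split(memory_cache_mb, procedure_pct=20):
    procedure = memory_cache_mb * procedure_pct / 100.0
    data = memory_cache_mb - procedure
    return procedure, data

proc_mb, data_mb = cache_split(28.56)
print(round(proc_mb, 3), round(data_mb, 3))  # -> 5.712 22.848, as in the example below
```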
Accordingly, the following example for a dedicated SQL Server illustrates a more accurate estimate of SQL Server memory requirements.
| Resource | Estimated Value | Bytes per Resource | Space (MB) |
| --- | --- | --- | --- |
| User Connections | 50 | 18,000 | 0.9 |
| Open Databases | 10 (default) | 650 | 0.01 |
| Open Objects | 500 (default) | 72 | 0.04 |
| Locks | 15,000 | 28 | 0.42 |
| Devices | 256 | 300 | 0.07 |
| Static Server Overhead | N/A | ≈2,000,000 | 2.0 |
| TOTAL Overhead | | | 3.44 |
Physical system memory = 48 MB
Windows NT physical memory = 16 MB
Microsoft SQL Server physical memory = 32 MB
32 MB - 3.44 MB = 28.56 MB Total Memory Cache
Procedure cache: 28.56 * 0.2 = 5.712 MB
Data cache: 28.56 * 0.8 = 22.848 MB
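The same arithmetic can be checked end to end. A minimal sketch reproducing the figures above (1 MB taken as 1,000,000 bytes to match the table; all names are ours):

```python
# Recalculate the worked example above, step by step.
overhead_bytes = (
    50 * 18_000      # user connections
    + 10 * 650       # open databases (default)
    + 500 * 72       # open objects (default)
    + 15_000 * 28    # locks
    + 256 * 300      # devices
    + 2_000_000      # static server overhead (approximate)
)
total_overhead_mb = overhead_bytes / 1_000_000        # ~3.44 MB

sql_server_mb = 48 - 16                               # 48 MB machine, 16 MB for Windows NT
memory_cache_mb = sql_server_mb - total_overhead_mb   # ~28.56 MB

procedure_cache_mb = 0.2 * memory_cache_mb            # ~5.71 MB
data_cache_mb = 0.8 * memory_cache_mb                 # ~22.85 MB

print(round(memory_cache_mb, 2),
      round(procedure_cache_mb, 2), round(data_cache_mb, 2))
# -> 28.56 5.71 22.85
```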
Hence, as a result of such overhead requirements, you will have approximately 28.5 MB of memory cache to work with on the SQL Server. As overhead requirements such as user connections and locks grow, this value shrinks and may eventually lead to performance problems that require tuning.
Achieving optimal disk I/O is the most important aspect of designing a Microsoft SQL Server solution. The disk subsystem configuration as addressed here consists of at least one disk controller device and one or more hard disk units, as well as consideration for disk configuration and associated file systems. The goal is to select a combination of these components and technologies that complements the performance characteristics of SQL Server. Hence, disk subsystem I/O as it relates to reads, writes, and caching defines the performance characteristics that are most important to SQL Server.
The disk subsystem components and features you should look for are as follows:
Intelligent, fast SCSI-2 disk controller or disk array controller
Controller memory cache
Bus-master card (an on-board processor results in fewer interrupts to the system CPU(s))
Asynchronous read and write support
32-bit EISA or MCA
Hardware-level RAID supported
Fast SCSI-2 drives
Read-ahead caching (at least a track)
The determination of how many drives, of what size, in what configuration, and with what level of fault tolerance is made by looking back to the user and application performance requirements, understanding the logical database design and the associated data, and understanding the interplay between system memory and disk I/O with respect to Windows NT Server, Microsoft SQL Server, and other Microsoft BackOffice components. While it is beyond the scope of this paper to cover this topic in depth, several key concepts and guidelines aid in the selection of appropriate disk subsystem components.
Concept 1: Most database I/Os (reads and writes) are random with respect to data and indexes. This is true for online transaction processing and decision support systems.
Concept 2: Writes to the Microsoft SQL Server transaction log are sequential and occur as large bursts of page-level I/O during the checkpoint process or update, insert, or delete operations.
Concept 3: Optimal access to randomly accessed data and indexes is achieved by distributing the database over several physical disk units in a single striped volume (RAID 0 or RAID 5). This allows multiple disk heads to access the data and indexes in parallel.
Concept 4: Optimal access to sequentially accessed data is achieved by isolating it from the randomly accessed data and index volume(s), on separate physical disk units, which may be RAID configured (usually RAID 1, mirrored for logs). Sequential access is faster via a single head that is able to move in one direction.
Concept 5: Duplexing of intelligent disk controllers (SCSI or Array) will usually yield greater performance. This is especially true of systems that must sustain high transaction throughputs, systems with small data (buffer) caches, and systems with large data volumes. In addition, if the number of physical disk units exceeds a controller's capacity, another controller will be necessary.
Concept 6: The minimum optimal disk subsystem configuration for any Microsoft SQL Server solution consists of a SCSI controller and at least two SCSI drives. This configuration makes it possible to isolate the SQL Server transaction log(s) on one physical disk and the database device(s) or file(s) on the other.
These concepts should be used as guidelines and not as absolutes. Each SQL Server environment is unique, thereby requiring experimentation and tuning appropriate to the conditions and requirements.
As with intelligent disk controllers, the goal is to select an intelligent network interface card (NIC) that will not rob CPU or memory resources from the Microsoft SQL Server system. This network card should meet the following minimum recommendations.
32-bit EISA or MCA
Bus-master card (an on-board processor results in fewer interrupts to the system CPU(s))
On-board memory cache