INF: Optimizing Microsoft SQL Server Performance
ID: Q110352
The information in this article applies to:
- Microsoft SQL Server versions 4.2x, 6.0, 6.5
SUMMARY
To most effectively optimize Microsoft SQL Server performance, you must
identify the areas that will yield the largest performance increases over
the widest variety of situations, and focus analysis on these areas.
Otherwise, you may expend significant time and effort on topics that yield
little improvement.
For the most part, the following information does not address the
performance issues stemming from multiuser concurrency. This is a
separate, complex topic that is covered in the document "Maximizing
Database Consistency and Concurrency," which can be found in the SQL Server
version 4.2x "Programmer's Reference for C," Appendix E, and also in other
Knowledge Base articles. It is not in the version 6.0 documentation, but
can be found on the MSDN (Microsoft Developer Network) CD under that
title.
Rather than a theoretical discussion, this article focuses primarily on
areas that years of experience by the Microsoft SQL Server Support team have
shown to be of practical value in real-world situations.
Experience shows that the greatest benefit in SQL Server performance can be
gained from the general areas of logical database design, index design,
query design, and application design. Conversely, the biggest performance
problems are often caused by deficiencies in these same areas. If you are
concerned with performance, you should concentrate on these areas first,
because very large performance improvements can often be achieved with a
relatively small time investment.
While other system-level performance issues, such as memory, cache buffers,
hardware, and so forth, are certainly candidates for study, experience
shows that the performance gain from these areas is often incremental. SQL
Server manages available hardware resources automatically, for the most
part, reducing the need (and therefore, the benefit) of extensive
system-level hand tuning.
Microsoft SQL Server 6.0 provides new opportunities for platform-layer
performance improvements, with large amounts of memory, symmetric
multiprocessing, parallel data scan, optimizer enhancements, and disk
striping. However, as large as these improvements are, they are finite in
scope. The fastest computer can be bogged down with inefficient queries or
a poorly designed application. Thus, even with the additional performance
increase that SQL Server 6.0 allows, it is extremely important to optimize
the database, index, query, and application design.
Most performance problems cannot be successfully resolved with only a
server-side focus. The server is essentially a "puppet" of the client,
which controls what queries are sent, and thereby what locks are obtained
and released. Although some tuning is possible on the server side,
successful resolution of performance problems will usually depend
on acknowledging the dominant role the client plays in the problem and
analyzing client application behavior.
MORE INFORMATION
The following are some suggestions that, based on experience, have yielded
significant performance gains:
Normalize Logical Database Design
Reasonable normalization of the logical database design yields the best
performance. A greater number of narrow tables is characteristic of a
normalized database; a smaller number of wide tables is characteristic of a
denormalized database. A highly normalized database is routinely associated
with complex relational joins, which can hurt performance. However, the SQL
Server optimizer is very efficient at selecting fast joins, as long as
effective indexes are available.
The benefits of normalization include:
- Faster sorting and index creation, because tables are narrower.
- More clustered indexes, because there are more tables.
- Narrower, more compact indexes.
- Fewer indexes per table, which improves UPDATE performance.
- Fewer NULLs and less redundant data, increasing database compactness.
- Reduced concurrency impact of DBCC diagnostics, because the necessary
table locks affect less data.
With SQL Server, reasonable normalization often helps rather than hurts
performance. As normalization increases, so do the number and complexity of
joins required to retrieve data. As a rough rule of thumb, Microsoft
suggests continuing the normalization process unless it causes many queries
to have four-way or greater joins.
If the logical database design is already fixed and total redesign is not
feasible, it may be possible to selectively normalize a large table if
analysis shows a bottleneck on this table. If access to the database is
conducted through stored procedures, this schema change could take place
without impacting applications. If not, it may be possible to hide the
change by creating a view that looks like a single table.
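For example, if analysis showed that a single wide table was a bottleneck
and it was split into two narrower tables, a view could preserve the
original shape for existing queries. The following is only a sketch, using
hypothetical table and column names, not objects from this article:

   /* Hypothetical split: CUSTOMER keeps the frequently used
      columns; CUSTOMER_NOTES holds the rarely used wide columns. */
   CREATE VIEW CUSTOMER_ALL AS
   SELECT C.CUST_ID, C.NAME, C.ZIP, N.NOTES
   FROM CUSTOMER C, CUSTOMER_NOTES N
   WHERE C.CUST_ID = N.CUST_ID

Existing queries can then reference CUSTOMER_ALL as if it were still the
original single table.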
Use Efficient Index Design
Unlike in many non-relational systems, indexes in a relational database are
not considered part of the logical design. Indexes can be dropped, added, and
changed without affecting the database schema or application design in any
way other than performance. Efficient index design is paramount in
achieving good SQL Server performance. For these reasons, you should not
hesitate to experiment with different indexes.
The optimizer reliably chooses the most effective index in the majority of
cases. The overall index design strategy should be to provide a good
selection of indexes to the optimizer, and trust it to make the right
decision. This reduces analysis time and gives good performance over a wide
variety of situations.
The following are index design recommendations:
- Examine the WHERE clause of your SQL queries, because this is the
primary focus of the optimizer.
Each column listed in the WHERE clause is a possible candidate for an
index. If you have too many queries to examine, pick a representative
set, or just the slow ones. If your development tool transparently
generates SQL code, this is more difficult. Many of these tools
allow the logging of the generated SQL syntax to a file or screen for
debugging purposes. You may want to find out from the tool's vendor if
such a feature is available.
- Use narrow indexes.
Narrow indexes are often more effective than multicolumn, compound
indexes. Narrow indexes have more rows per page, and fewer index levels,
boosting performance.
The optimizer can rapidly and effectively analyze hundreds, or even
thousands, of index and join possibilities. Having a greater number of
narrow indexes provides the optimizer with more possibilities to choose
from, which usually helps performance. Having fewer wide, multicolumn
indexes provides the optimizer with fewer possibilities to choose from,
which may hurt performance.
It is often best not to adopt a strategy of emphasizing a fully covered
query. It is true that if all columns in your SELECT clause are covered
by a non-clustered index, the optimizer can recognize this and provide
very good performance. However, this often results in excessively wide
indexes and relies too much on the possibility that the optimizer will
use this strategy. Usually, you should use more numerous narrow indexes,
which often provide better performance over a wider range of queries.
You should not have more indexes than are necessary to achieve adequate
read performance because of the overhead involved in updating those
indexes. However, even most update-oriented operations require far more
reading than writing. Therefore, do not hesitate to try a new index if
you think it will help; you can always drop it later.
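For example, using the PHONEBOOK table that appears elsewhere in this
article, and hypothetical index and column names, two narrow indexes give
the optimizer independent choices, where a single wide compound index
mainly serves queries that filter on its leading column:

   /* Two narrow indexes; the optimizer can use either one,
      depending on the WHERE clause of the query. */
   CREATE INDEX PB_LASTNAME ON PHONEBOOK (LASTNAME)
   CREATE INDEX PB_ZIP ON PHONEBOOK (ZIP)

   /* A wide compound alternative; helpful mostly when queries
      filter on LASTNAME, the leading column. */
   CREATE INDEX PB_NAME_ZIP ON PHONEBOOK (LASTNAME, FIRSTNAME, ZIP)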
- Use clustered indexes.
Appropriate use of clustered indexes can tremendously increase
performance. Even UPDATE and DELETE operations are often accelerated by
clustered indexes, because these operations require much reading. A
single clustered index per table is allowed, so use this index wisely.
Queries that return numerous rows, or that involve a range of values, are
good candidates for acceleration by a clustered index.
Examples:
SELECT * FROM PHONEBOOK
WHERE LASTNAME='SMITH'
-or-
SELECT * FROM MEMBERTABLE
WHERE MEMBER_NO > 5000
AND MEMBER_NO < 6000
By contrast, the LASTNAME or MEMBER_NO columns mentioned above are
probably not good candidates for a non-clustered index if this type of
query is common. Try to use non-clustered indexes on columns where few
rows are returned.
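To illustrate, a clustered index supporting the MEMBER_NO range query
above might be created as follows (the index name is illustrative):

   /* Only one clustered index is allowed per table, so reserve
      it for range queries or queries returning many rows. */
   CREATE CLUSTERED INDEX MEMBER_NO_CI ON MEMBERTABLE (MEMBER_NO)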
- Examine column uniqueness.
This helps you decide what column is a candidate for a clustered index,
non-clustered index, or no index.
The following is an example query to examine column uniqueness:
SELECT COUNT(DISTINCT COLNAME)
FROM TABLENAME
This returns the number of unique values in the column. Compare this to
the total number of rows in the table. On a 10,000-row table, 5,000
unique values would make the column a good candidate for a non-clustered
index. On the same table, 20 unique values would better suit a clustered
index. A column with only three unique values should not be indexed at all.
These are only examples, not hard-and-fast rules. Remember to place the
indexes on the individual columns listed in the WHERE clauses of the
queries.
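As a variation, both figures can be obtained in a single statement
(COLNAME and TABLENAME are placeholders, as above):

   /* Compare the number of unique values to the total row count. */
   SELECT COUNT(DISTINCT COLNAME), COUNT(*)
   FROM TABLENAME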
- Examine data distribution in indexed columns.
Often a long-running query occurs because a column with few unique
values is indexed, or a JOIN on such a column is performed. This is a
fundamental problem with the data and query itself, and cannot usually
be resolved without identifying this situation. For example, a physical
telephone directory sorted alphabetically on last name will not expedite
looking up a person if all people in the city are named just "Smith" or
"Jones." In addition to the above query, which gives a single figure for
column uniqueness, you can use a GROUP BY query to see the data
distribution of the indexed key values. This provides a higher
resolution picture of the data, and a better perspective for how the
optimizer views the data.
The following is an example query to examine data distribution of
indexed key values, assuming a two-column key on COL1, COL2:
SELECT COL1, COL2, COUNT(*)
FROM TABLENAME
GROUP BY COL1, COL2
This will return one row for each key value, with a count of the
instances of each value. To reduce the number of rows returned, it may
be helpful to exclude some with a HAVING clause. For example, the clause
HAVING COUNT(*) > 1
will exclude all key values that occur only once.
The number of rows returned in a query is also an important factor in
index selection. The optimizer considers a non-clustered index to cost
at least one page I/O per returned row. At this rate, it quickly becomes
more efficient to scan the entire table. For example, if a table occupies
1,000 pages, a query expected to return more than 1,000 rows is costed
higher through a non-clustered index than through a full table scan. This
is another reason to restrict the size of the result set, or to locate a
large result set with a clustered index.
Do not automatically equate index usage with good performance, or the reverse.
If using an index always produced the best performance, the optimizer's
job would be very simple - always use any available index. In reality,
incorrect choice of indexed retrieval can result in very bad performance.
Therefore the optimizer's task is to select indexed retrieval where it will
help performance, and avoid indexed retrieval where it will hurt
performance.
Use Efficient Query Design
Some types of queries are inherently resource intensive. This is related to
fundamental database and index issues common to most relational database
management systems (RDBMSs), not specifically to SQL Server. They are not
inefficient, because the optimizer will implement the queries in the most
efficient fashion possible. However, they are resource intensive, and the
set-oriented nature of SQL may make them appear inefficient. No degree of
optimizer intelligence can eliminate the inherent resource cost of these
constructs. They are intrinsically costly when compared to a simpler
query. Although SQL Server will use the best available access plan, this is
limited by what is fundamentally possible.
For example:
- Large result sets
- IN, NOT IN, and OR queries
- Highly non-unique WHERE clauses
- != (not equal) comparison operators
- Certain column functions, such as SUM
- Expressions or data conversions in WHERE clause
- Local variables in WHERE clause
- Complex views with GROUP BY or ORDER BY
Various factors may necessitate the use of some of these query constructs.
The impact of these will be lessened if the optimizer can restrict the
result set before applying the resource intensive portion of the query. The
following are some examples.
Resource-intensive:
SELECT SUM(SALARY) FROM TABLE
Less resource-intensive:
SELECT SUM(SALARY) FROM TABLE
WHERE ZIP='98052'
Resource-intensive:
SELECT * FROM TABLE
WHERE LNAME=@VAR
Less resource-intensive:
SELECT * FROM TABLE
WHERE LNAME=@VAR AND ZIP='98052'
In the first example, the SUM operation cannot be accelerated with an
index. Each row must be read and summed. Assuming that there is an index on
the ZIP column, the optimizer will likely use this to initially restrict
the result set before applying the SUM. This can be much faster.
In the second example, the local variable is not resolved until run time.
However, the optimizer cannot defer the choice of access plan until run
time; it must choose at compile time. Yet at compile time, when the access
plan is built, the value of @VAR is not known and consequently cannot be
used as input to index selection.
The illustrated technique for improvement involves restricting the result
set with an AND clause. As an alternative, use a stored procedure, and pass
the value for @VAR as a parameter to the stored procedure.
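The following sketch shows the stored procedure technique, using a
hypothetical procedure name and a placeholder table name. Because @VAR
arrives as a parameter, its value is available when the access plan for
the procedure is built at execution:

   /* The parameter value is known when the procedure's plan is
      compiled on execution, so it can inform index selection. */
   CREATE PROCEDURE GETBYLNAME @VAR VARCHAR(30) AS
   SELECT * FROM TABLENAME
   WHERE LNAME = @VAR AND ZIP = '98052'
   GO

   /* Example invocation: */
   EXECUTE GETBYLNAME 'SMITH'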
In some cases, it is better to use a group of simple queries with temporary
tables to store intermediate results than to use a single, very complex
query.
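A sketch of this decomposition, using hypothetical tables and columns: the
first query restricts the data to a small intermediate result in a
temporary table, and the second performs the complex work against that
small set:

   /* Step 1: reduce the data to a small intermediate set. */
   SELECT MEMBER_NO, LASTNAME
   INTO #RECENT
   FROM MEMBERTABLE
   WHERE JOINDATE > '19960101'

   /* Step 2: run the complex logic against the small set. */
   SELECT R.LASTNAME, COUNT(*)
   FROM #RECENT R, ORDERS O
   WHERE R.MEMBER_NO = O.MEMBER_NO
   GROUP BY R.LASTNAME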
Large result sets are costly on most RDBMSs. You should try not to return
a large result set to the client for final data selection by browsing. It
is much more efficient to restrict the size of the result set, allowing the
database system to perform the function for which it was intended. This
also reduces network I/O, and makes the application more amenable to
deployment across slow remote communication links. It also improves
concurrency-related performance as the application scales upward to more
users.
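While the application itself should be designed to request modest result
sets, SET ROWCOUNT can be useful as a safety cap when experimenting with
queries. A sketch (the table name is a placeholder):

   /* Cap the number of rows returned while testing a query. */
   SET ROWCOUNT 100
   SELECT * FROM TABLENAME WHERE ZIP LIKE '980%'

   /* Restore normal, uncapped behavior. */
   SET ROWCOUNT 0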
Use Efficient Application Design
The role that application design plays in SQL Server performance cannot be
overstated. Rather than picture the server in the dominant role, it is more
accurate to picture the client as a controlling entity, and the server as a
puppet of the client. SQL Server is totally under the command of the client
regarding the type of queries, when they are submitted, and how results are
processed. This in turn has a major effect on the type and duration of
locks, amount of I/O and CPU load on the server, and hence whether
performance is good or bad.
For this reason, it is important to make the correct decisions during the
application design phase. However, even if you face a performance problem
using a turnkey application where changes to the client application seem
impossible, this does not change the fundamental factors which affect
performance - namely that the client plays a dominant role and many
performance problems cannot be resolved without making client changes.
With a well-designed application, SQL Server is capable of supporting
thousands of concurrent users. With a poorly-designed application,
even the most powerful server platform can bog down with just a few
users.
Using the following suggestions for client application design will provide
good SQL Server performance:
- Use small result sets. Retrieving needlessly large result sets
(for example, thousands of rows) for browsing on the client adds CPU
and network I/O load, makes the application less capable of remote use,
and can limit multiuser scalability. It is better to design the
application to prompt the user for sufficient input, so that the
queries submitted generate modest result sets.
Application design techniques that facilitate this include limiting
the use of wildcards when building queries, mandating certain input
fields, and prohibiting ad hoc queries.
- Use dbcancel() correctly in DB-Library applications. All applications
should allow cancellation of a query in progress. No application should
force the user to reboot the client computer to cancel a query. Not
following this principle can lead to performance problems that cannot be
resolved. When dbcancel() is used, proper care should be exercised
regarding transaction level. For additional information, please see the
following article in the Microsoft Knowledge Base:
Q117143: INF: When and How to Use dbcancel() or sqlcancel()
The same issues apply to ODBC applications, if the ODBC SQLCancel()
call is used.
- Always process all results to completion. Do not design an application
or use a turnkey application that stops processing result rows without
canceling the query. Doing so will usually lead to blocking and slow
performance.
- Always implement a query timeout. Do not allow queries to run
indefinitely. Make the appropriate DB-Library or ODBC calls to set a
query timeout. In DB-Library, this is done with the dbsettime() call,
and in ODBC with SQLSetStmtOption().
- Do not use an application development tool that does not allow
explicit control over the SQL statements sent to the server. Do not
use a tool that transparently generates SQL statements based on higher
level objects, unless it provides crucial features such as query
cancellation, query timeout, and complete transactional control. It is
often not possible to maintain good performance or to resolve a
performance problem if the application transparently generates SQL on its
own, because this does not allow explicit control over the transactional
and locking issues that are critical to the performance picture.
- Do not intermix decision support and online transaction processing
(OLTP) queries.
- Do not design an application or use a turnkey application that forces
the user to reboot the client computer to cancel a query. This can cause
a variety of performance problems that are difficult to resolve because
of possible orphaned connections. For more information, see the
following article in the Microsoft Knowledge Base:
Q137983: How to Troubleshoot Orphaned Connections in SQL Server
Techniques to Analyze Slow Performance
It may be tempting to address a performance problem solely by system-level
server performance tuning: for example, the amount of memory, the type of
file system, and the number and type of processors. The experience of
Microsoft SQL Server Support has shown that most performance problems
cannot be resolved this way. They must be addressed by analyzing the
application, the queries the application is submitting to the database, and
how these queries interact with the database schema.
First, isolate the query or queries that are slow. Often it appears that an
entire application is slow, when only a few of the SQL queries are slow.
It is usually not possible to resolve a performance problem without
breaking the problem down and isolating the slow queries. If you have a
development tool that transparently generates SQL, use any available
diagnostic or debug mode of this tool to capture the generated SQL. In many
cases trace features are available, but they may not be openly documented.
Contact your application vendor's technical support to determine whether a
trace feature exists for monitoring the SQL statements generated by the
application.
For application development tools that use embedded SQL, this is much
easier - the SQL is openly visible.
If your development tool or end-user application does not provide a trace
feature, there are several alternatives:
- Use the 4032 trace flag according to the instructions in the SQL
Server 4.2x "Troubleshooting Guide," and the SQL Server 6.0
"Transact-SQL Reference." This allows the SQL statements sent to the
server to be captured in the SQL error log (a sketch follows this list).
- Monitor the queries through a network analyzer such as Microsoft Network
Monitor, which is part of Systems Management Server.
- For ODBC applications, use the ODBC Administrator program to select
tracing of ODBC calls. See the ODBC documentation for more details.
- Use a third-party client-side utility which intercepts the SQL at the
DB-Library or ODBC layers. An example of this is SQL Inspector from Blue
Lagoon Software.
- Use the SQLEye analysis tool provided as an example on the Microsoft
TechNet CD. NOTE: SQLEye is not supported by Microsoft Technical
Support.
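For the trace flag alternative mentioned in the first item above, the
general pattern is sketched here. Consult the referenced documentation for
exact usage; trace flag 3605, which routes trace output to the error log,
is commonly used together with 4032:

   /* Sketch only: 3605 sends trace output to the error log;
      4032 traces the SQL statements received from clients. */
   DBCC TRACEON (3605, 4032)

   /* ...reproduce the slow operation from the application... */

   DBCC TRACEOFF (3605, 4032)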
After the slow query is isolated, do the following:
- Run the suspected slow query in isolation, using a query tool such as
ISQL, and verify that it is slow. It is often best to run the query on
the server computer itself using ISQL and local pipes, and redirect the
output to a file. This helps eliminate complicating factors, such as
network and screen I/O, and application result buffering.
- Use SET STATISTICS IO ON to examine the I/O consumed by the query.
Notice the count of logical page I/Os. The optimizer's goal is to
minimize I/O count. Make a record of the logical I/O count. This forms
a baseline against which to measure improvement. It is often more
effective to focus exclusively on the STATISTICS IO output and
experiment with different query and index types than to use SET SHOWPLAN
ON. Interpreting and effectively applying the output of SHOWPLAN can
require some study, and can consume time that can be more effectively
spent on empirical tests. If your performance problem is not fixed by
these simple recommendations, then you can use SHOWPLAN to more
thoroughly investigate optimizer behavior.
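For example, from ISQL (the table name is a placeholder, as elsewhere in
this article):

   /* Report logical and physical page I/O for each statement. */
   SET STATISTICS IO ON
   SELECT * FROM TABLENAME WHERE LNAME = 'SMITH'

The output reports scan count, logical reads, and physical reads for each
table; record the logical read count as the baseline.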
- If the query involves a view or stored procedure, extract the query from
the view or stored procedure and run it separately. This allows the
access plan to change as you experiment with different indexes. It also
helps localize the problem to the query itself, versus how the optimizer
handles views or stored procedures. If the problem appears only when the
query is run as part of a view or stored procedure, running the query by
itself will help determine this.
- Be aware of possible triggers on the involved tables, which can
transparently generate I/O as they run. You should temporarily remove any
triggers involved in a slow query. This helps determine whether the
problem is in the query itself or in the trigger or view, and therefore
helps direct your focus.
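One simple way to check for triggers is to query the system catalog. A
sketch (the type code 'TR' denotes a trigger in sysobjects):

   /* List all triggers defined in the current database. */
   SELECT name
   FROM sysobjects
   WHERE type = 'TR'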
- Examine the indexes of the tables used by the slow query. Use the
previously listed techniques to determine if these are good indexes, and
change them if necessary. As a first effort, try indexing each column in
your WHERE clause. Often performance problems are caused by simply not
having a column in the WHERE clause indexed, or by not having a useful
index on such a column.
- Using the queries previously mentioned, examine the data uniqueness and
distribution for each column mentioned in the WHERE clause, and
especially for each indexed column. In many cases simple inspection of
the query, table, indexes, and data will immediately show the problem
cause. For example, performance problems are often caused by having an
index on a key with only three or four unique values, or performing a
JOIN on such a column, or returning an excessive number of rows to the
client.
- Based on this study, make any needed changes to the application, query,
or indexes. Run the query again after making the change and observe
any change in I/O count.
- After noting improvement, run the main application to see if overall
performance is better.
Check the program for I/O or CPU-bound behavior. It is often useful to
determine if a query is I/O or CPU bound. This helps focus your improvement
efforts on the true bottleneck. For example, if a query is CPU bound,
adding more memory to SQL Server will probably not improve performance,
because more memory only improves the cache hit ratio, which, in this case,
is already high.
How to Examine I/O vs. CPU-bound Query Behavior:
- Use Windows NT Performance Monitor to watch I/O versus CPU activity.
Watch all instances of the "% Disk Time" counter of the LogicalDisk
object. Also watch the "% Total Processor Time" counter of the System
object. To see valid disk performance information, you must have
previously turned on the Windows NT DISKPERF setting by issuing
"diskperf -Y" from a command prompt, and then rebooting the system.
See the Windows NT documentation for more details.
- While running the query, if the CPU graph is consistently high (for
example, greater than 70 percent), and the "% Disk Time" value is
consistently low, this indicates a CPU-bound state.
- While running the query, if the CPU graph is consistently low (for
example, less than 50 percent), and the "% Disk Time" is consistently
high, this indicates an I/O bound state.
- Compare the CPU graph with the STATISTICS IO information.
Conclusion
SQL Server is capable of very high performance on large databases. This is
especially the case with SQL Server 6.0. To achieve this performance
potential, you must use efficient database, index, query, and application
design. These areas are the best candidates for obtaining significant
performance improvement. Try to make each query as efficient as possible,
so that when your application scales up to more users, the collective
multiuser load is supportable. Study of the client application behavior,
the queries submitted by the application, and experimentation with indexes
using the guidelines in this document are strongly encouraged. A methodical
approach in analyzing performance problems will often yield significant
improvement for relatively little time investment.
Additional query words:
4.20 sql6 Windows NT optimization sqlfaqtop
Keywords : kbother SSrvAdmin SQLFAQ SSrvWinNT
Version : winnt:4.2x,6.0,6.5
Platform : winnt
Issue type : kbinfo