Processing Queries and Results

General Good Practices

The following sections discuss general practices that will increase the performance of SQL Server ODBC applications. Many of the concepts apply to database applications in general.

Columns in a Result Set

Applications should select only the columns needed to perform the task at hand. Not only does this reduce the amount of data sent across the network, it also reduces the impact of database changes on the application. If an application does not reference a column from a table, then the application is not affected by any changes made to that column.

Stored Procedures

Sites can realize performance gains by coding most of their SQL statements into stored procedures and having applications call the stored procedures rather than issuing the SQL statements themselves. This offers several benefits: the statements are parsed once, when the procedure is created; the compiled execution plan can be reused across executions; and less SQL text is sent across the network on each call.

The Enterprise versions of the Microsoft Visual C++® development system and Microsoft Visual Basic® programming system also offer a new SQL debugger tool. With SQL Debugger, programmers can use the standard debugger facilities of their programming environment, such as setting break points and watching variables, to debug their SQL Server stored procedures.

Batches

An application that builds several SQL statements to execute realizes better performance if it batches the statements together and sends them to the server all at once. This will reduce the number of network roundtrips the application uses to perform the same work. For example:

SQLExecDirect(hstmt, 
      "select * from authors; select * from titles",
      SQL_NTS);

When the application has finished with the current result set, it calls SQLMoreResults to position itself on the next result set.

SQLBindCol and SQLGetData

Excess use of SQLBindCol to bind a result set column to a program variable is expensive because SQLBindCol causes an ODBC driver to allocate memory. When an application binds a result column to a variable, that binding remains in effect until the application calls SQLFreeStmt with fOption set to either SQL_DROP or SQL_UNBIND. The bindings are not automatically undone when the statement completes.

This logic allows applications to deal efficiently with situations where they may execute the same SELECT statement several times with different parameters. Because each result set has the same structure, the application can bind the result set once, process all the different SELECT statements, and then call SQLFreeStmt with fOption set to SQL_UNBIND after the last execution. Applications should not call SQLBindCol to bind the columns in a result set without first calling SQLFreeStmt with fOption set to SQL_UNBIND to free any previous bindings.

When using SQLBindCol, applications can either do row-wise or column-wise binding. Row-wise binding is somewhat faster than column-wise binding.

Applications can use SQLGetData to retrieve data on a column-by-column basis, instead of binding the result set columns using SQLGetData. If a result set contains only a few rows, using SQLGetData instead of SQLBindCol is faster; otherwise, SQLBindCol gives the best performance. If an application does not always put the data in the same set of variables, it should use SQLGetData instead of constantly rebinding. Applications can use SQLGetData only on columns that appear in the select list after all of the columns bound with SQLBindCol. The column must also appear after any columns on which the application has already used SQLGetData.

Data Conversion

The ODBC functions dealing with moving data into or out of program variables, such as SQLBindCol, SQLBindParameter, and SQLGetData, allow implicit conversion of data types. For example, an application that displays a numeric column can ask the driver to convert the data from numeric to character:

retcode = SQLBindCol(hstmt1,
                     1,        // Point to integer column
                     SQL_C_CHAR,
                     &szCharVar,
                     ARRAYSIZE,
                     &cbCharVar);
printf("fetched row cola = %s\n", szCharVar);

Applications should minimize data conversions. Unless data conversion is a required part of the application, the application should bind columns to a program variable of the same data type as the column in the result set.

If the application needs to have the data converted, it is more efficient for the application to ask the driver to do the data conversion than for the application to do it.

The driver normally just transfers data directly from the network buffer to the application's variables. Requesting the driver to perform data translation forces the driver to buffer the data and use CPU cycles to perform the conversion.

Data Truncation

If an application attempts to retrieve data into a variable that is too small to hold it, the driver generates a warning. The driver must allocate memory for the warning messages and spend CPU resources on some error handling. This can all be avoided if the application allocates variables large enough to hold the data from the columns in the result set, or uses the SUBSTRING function in the select list to reduce the size of the columns in the result set.

Query Options

Adjusting timeout intervals can prevent both runaway operations and premature failures. Also, using different settings for some ODBC statement and connection options across several open connection or statement handles can generate excess network traffic.

Calling SQLSetConnectOption with fOption set to SQL_LOGIN_TIMEOUT controls the amount of time an application waits for a connection attempt before timing out (0 specifies an infinite wait). Sites with slow response times can set this value high to ensure connections have sufficient time to complete, but the interval should always be low enough to give the user a response in a reasonable amount of time if the driver cannot connect.

Calling SQLSetStmtOption with fOption set to SQL_QUERY_TIMEOUT sets a query timeout interval to protect the server and the user from long running queries.

Calling SQLSetStmtOption with fOption set to SQL_MAX_LENGTH limits the amount of text and image data that an individual statement can retrieve. Calling SQLSetStmtOption with fOption set to SQL_MAX_ROWS limits a result set to the first n rows if that is all the application needs. Note that setting SQL_MAX_ROWS causes the driver to issue a SET ROWCOUNT statement to the server, which will affect all SQL statements, including triggers and updates.

Care should be used when setting these options, however. It is best if all statement handles on a connection handle have the same settings for SQL_MAX_LENGTH and SQL_MAX_ROWS. If the driver switches from one statement handle to another that has different values for these options, the driver must generate the appropriate SET TEXTSIZE and SET ROWCOUNT statements to change the settings. The driver cannot put these statements in the same batch as the user SQL, because the user SQL can contain a statement that must be the first statement in a batch. The driver must therefore send the SET TEXTSIZE and SET ROWCOUNT statements in a separate batch, which automatically generates an extra roundtrip to the server.

SET NOCOUNT

Applications can execute the Transact-SQL statement SET NOCOUNT ON. When this is set on, SQL Server does not return an indication of how many rows were affected by data-modification statements, or by any statements within procedures. When SET NOCOUNT is ON, the driver does not get the information it needs to return the number of rows affected should the application call SQLRowCount after a data-modification statement.

All statements executed in a stored procedure, including SELECT statements, generate an "x rows affected" message. Issuing a SET NOCOUNT ON at the start of a large stored procedure can significantly reduce the network traffic between the server and client and improve performance by eliminating these messages. These messages are typically not needed by the application when it is executing a stored procedure.

Cursors

Starting with SQL Server 6.0, the SQL Server ODBC driver supports the ODBC cursor options by using server cursors.

Cursor Types

The ODBC standard assumes that a cursor is automatically opened on each result set and, therefore, does not make a distinction between a result set and a cursor. SQL Server SELECT statements, however, always return a result set. A SQL Server cursor is a separate entity created when the application needs to perform cursor operations such as scrolling and positioned updates.

In the ODBC model, all SQL statements return a result set within a cursor, and an application retrieves rows through the cursor using either SQLFetch or SQLExtendedFetch. Before executing an SQL statement, an ODBC application can call SQLSetStmtOption to set statement options that control the cursor's behavior. These are the default settings for the cursor options.

Option Default
SQL_CURSOR_TYPE SQL_CURSOR_FORWARD_ONLY
SQL_CONCURRENCY SQL_CONCUR_READ_ONLY
SQL_ROWSET_SIZE 1

When running with these default settings, the application can only use SQLFetch to fetch through the result set one row at a time from the start of the result set to the end, and the SQL Server ODBC driver requests a default result set from the server. In a default result set, SQL Server sends the results back to the client in a very efficient, continuous stream. The calls to SQLFetch simply retrieve the rows out of the network buffers on the client.

It is possible to execute a query with these default settings and then change SQL_ROWSET_SIZE after the SQLExecDirect or SQLExecute completes. In this case, SQL Server still uses a default result set to efficiently send the results to the client, but the application can also use SQLExtendedFetch to retrieve multiple rows at a time from the network buffers.

An ODBC application can change the SQL_CURSOR_TYPE to request different cursor behaviors from the result set. The types of cursors that can be set are:

- SQL_CURSOR_FORWARD_ONLY: the cursor can only scroll forward through the result set.
- SQL_CURSOR_STATIC: the membership, order, and values of the result set are fixed when the cursor is opened.
- SQL_CURSOR_KEYSET_DRIVEN: the membership and order of the result set are fixed when the cursor is opened, but changes to data values made by other users are visible on subsequent fetches.
- SQL_CURSOR_DYNAMIC: the membership, order, and values of the result set can all change between fetches.

Concurrency Option Overview

In addition to the cursor types, cursor operations are also affected by the concurrency options set by the application:

SQL_CONCURRENCY = SQL_CONCUR_READ_ONLY

With this option set, the cursor does not support UPDATE, INSERT, or DELETE statements. Locks are not held on the underlying rows that make up the result set.

SQL_CONCURRENCY = SQL_CONCUR_VALUES

This option offers optimistic concurrency control. Optimistic concurrency control is a standard part of transaction control theory and is discussed in most papers and books on the subject. An application uses optimistic control when it is "optimistic" that there is only a slight chance anyone else will have updated the row in the interval between when the row is fetched and when the row is updated. When the cursor is opened in this mode, no locks are held on the underlying rows, in order to maximize throughput. If the user attempts an UPDATE, the current values in the row are compared with the values retrieved when the row was fetched. If any of the values have changed, SQL Server returns an error. If the values are the same, the cursor engine performs the UPDATE.

Selecting this option means the application must deal with an occasional error indicating that another user updated the row and changed the values. A typical action taken by an application that receives this error would be to refresh the cursor, to get the new values, and then let the user or application decide if the UPDATE should still be performed. Note that text and image columns are not used for concurrency comparisons.

SQL_CONCURRENCY = SQL_CONCUR_ROWVER

This optimistic concurrency control option is based on row versioning. With row versioning, the underlying table must have a version identifier of some type that the cursor engine can use to determine whether the row has been changed since it was read into the cursor. In SQL Server, this is the facility offered by the timestamp data type. SQL Server timestamps are binary numbers that indicate the relative sequence of modifications in a database. Each database has a global current timestamp value, @@dbts, which is incremented with every change in the database. If a table has a timestamp column, then its timestamp column is updated with the current @@dbts value every time the row is updated. The cursor engine can then compare a row's current timestamp value with the timestamp value that was first retrieved into the cursor to determine whether the row has been updated. The engine does not have to compare the values in all columns, only the timestamp value. If an application requests SQL_CONCUR_ROWVER on a table that does not have a timestamp column, the cursor defaults to the values-based optimistic concurrency control, SQL_CONCUR_VALUES.

SQL_CONCURRENCY = SQL_CONCUR_LOCK

This option implements pessimistic concurrency control, in which the application attempts to lock the underlying database rows at the time they are read into the cursor result set. When server cursors are used, an update intent lock is placed on the page holding a row when it is read into the cursor. If the cursor is opened within a transaction, these intent-to-update locks are held until the transaction is committed or rolled back. If the cursor has been opened outside a transaction, the lock is dropped when the next row is fetched. Thus, applications wanting full pessimistic concurrency control would typically open the cursor within a transaction. An update intent lock prevents any other task from acquiring an update intent or exclusive lock, which prevents any other task from updating the row. An update intent lock, however, will not block a shared lock, so it does not prevent other tasks from reading the row, unless the second task is also requesting a read with an update intent lock.

In all of these concurrency options, when any row in the cursor is updated, SQL Server locks it with an exclusive lock. If the update has been done within a transaction, the exclusive lock is held until the transaction is terminated. If the update has been done outside of a transaction, the update is automatically committed when it is completed and the exclusive lock is freed. Because SQL Server must acquire an exclusive lock before it updates the row, positioned updates done through a cursor (just like standard updates) can be blocked by other connections holding a shared lock on the row.

Isolation Levels

The full locking behavior of cursors is based on an interaction between the concurrency options discussed above and the transaction isolation level set by the client. ODBC clients set the transaction isolation level by setting the connection option SQL_TXN_ISOLATION. Users should combine the locking behaviors of the concurrency and transaction isolation level options to determine the full locking behavior of a specific cursor environment.

Note that the ODBC API specifies additional transaction isolation levels, but these are not supported by SQL Server or the Microsoft SQL Server ODBC driver.

Server Cursors

Prior to version 6.0, SQL Server sent result sets back to clients using only one type of result set, the default result set. While the default result set is efficient at sending results back to clients, it only supports the characteristics of the default ODBC result set: forward-only, read-only, and a rowset size of one. Because of this, the Microsoft SQL Server ODBC drivers that shipped with SQL Server version 4.2x only supported the default ODBC settings.

When using a default result set, there is only one roundtrip between the client and server; this occurs at the time the SQL statement is executed. After the statement is executed, the server sends the packets containing the results back to the client until all of the results have been sent or the client has cancelled the results by calling SQLCancel. Calls to SQLFetch or SQLExtendedFetch do not generate roundtrips to the server; they just pull data from the client network buffers into the application.

SQL Server 6.0 introduced cursors that are implemented on the server (server cursors). There are two types of server cursors: API server cursors, which the driver requests on behalf of the application through the ODBC or DB-Library cursor functions, and Transact-SQL cursors, which applications declare with the Transact-SQL DECLARE CURSOR statement.

Users access the functionality of API server cursors through either ODBC or DB-Library. If an ODBC application executes a statement with the default cursor settings, the SQL Server ODBC driver requests a default result set from SQL Server. If the application sets the ODBC cursor type options to anything other than the defaults, however, then the SQL Server ODBC driver instead requests the server to implement a server cursor with the same options requested by the application. Since the cursor is implemented on the server, the driver does not have to use memory on the client to build a client-based cursor. Server cursors can also reduce network traffic in cases where a user decides they do not need to retrieve an entire result set. For example, if a user opens a cursor with 1,000 rows but then finds what they were looking for in the first 100 rows they scroll through, the other 900 rows are never sent across the network.

When using server cursors, each call to SQLFetch, SQLExtendedFetch, or SQLSetPos causes a network roundtrip from the client to the server. All cursor statements must be transmitted to the server because the cursor is actually implemented on the server.

One potential drawback of server cursors is that they currently do not support all SQL statements. Server cursors do not support any SQL statements that generate multiple result sets; therefore, they cannot be used when the application executes either a stored procedure or a batch containing more than one select. If the application has set options that cause the driver to request an API server cursor, and then it executes a statement that server cursors do not support, the application gets an error:

SQLState: 37000
pfNative: 16937
szErrorMsg: [Microsoft][ODBC SQL Server Driver][SQL Server]
            Cannot open a cursor on a stored procedure that
            has anything other than a single select statement in it.

Or

SQLState: 37000
pfNative: 16938
szErrorMsg: [Microsoft][ODBC SQL Server Driver][SQL Server]
            sp_cursoropen.  The statement parameter can only
            be a single select or a single stored procedure.

ODBC applications getting either of these errors when attempting to use server cursors with multiple statement batches or stored procedures should switch to using the ODBC default cursor options.

Multiple Active Statements per Connection

After SQL Server has received a statement, the SQL Server TDS protocol does not allow acceptance of any other statements from that connection until one of the following occurs:

- The application processes the entire result set returned by the statement.
- The application cancels the statement (for example, by calling SQLCancel, or SQLFreeStmt with fOption set to SQL_CLOSE).

This means that when an ODBC application is using a default result set, SQL Server does not support multiple active statement handles on a connection handle and only one statement can be actively processed at any point in time.

When an ODBC application is using API server cursors, however, the driver can support multiple active statements on a connection. When the rowset for each cursor command has been received back at the client, SQL Server considers the statement to have completed, and it accepts another statement from another statement handle over that connection handle.

For example, an application can do the following to initiate processing on two statement handles:

SQLAllocEnv(&henv);
SQLAllocConnect(henv, &hdbc);
SQLAllocStmt(hdbc, &hstmt1);
SQLAllocStmt(hdbc, &hstmt2);
SQLSetConnectOption(hdbc, SQL_CURSOR_TYPE, SQL_CURSOR_DYNAMIC);
SQLSetConnectOption(hdbc, SQL_ROWSET_SIZE, 5);
SQLExecDirect(hstmt1, "select * from authors", SQL_NTS);

When the SQLExecDirect on hstmt1 is executed, the SQL Server ODBC driver issues a cursor open request. When SQL Server completes the cursor open, it considers the statement to be finished and allows the application to then issue a statement on another hstmt:

SQLExecDirect(hstmt2, "select * from titles", SQL_NTS);

Once again, after the server has finished with the cursor open request issued by the client, it considers the statement to be completed. If at this point the ODBC application makes a fetch request as follows, the SQL Server ODBC driver sends SQL Server a cursor fetch for the first five rows of the result set:

SQLExtendedFetch(hstmt1, SQL_FETCH_NEXT, 1, ...);

After the server has transferred the five rows to the driver, it considers the fetch processing completed and accepts new requests. The application could then do a fetch on the cursor opened for the other statement handle:

SQLExtendedFetch(hstmt2, SQL_FETCH_NEXT, 1, ...);

SQL Server accepts this second statement on the connection handle because, as far as it is concerned, it has completed the last statement on the connection handle, which was the fetch of the first five rows of the rows for hstmt1.

Choosing a Cursor Option

The choice of cursor type depends on several variables, including:

- The size of the result set.
- The percentage of the rows the application is likely to fetch.
- Whether the application needs to scroll through the result set.
- Whether the application needs to update rows through the cursor.

The default settings would be fine for a small result set if no updating is done, while a dynamic cursor would be preferred for a large result set where the user is likely to find their answer before retrieving many of the rows.

Some simple rules to follow in choosing a cursor type: use the default settings when a result set will simply be processed from start to finish with no updating, and use server cursors when the application needs scrolling or positioned updates, or is likely to fetch only part of a large result set.

Each call to SQLFetch or SQLExtendedFetch causes a roundtrip to the server when using server cursors. Applications should minimize these roundtrips by using a reasonably large rowset size and by using SQLExtendedFetch instead of SQLFetch whenever possible.

Implicit Cursor Conversions

Applications can request a cursor type through SQLSetStmtOption and then execute an SQL statement that is not supported by server cursors of the type requested. A call to SQLExecute or SQLExecDirect returns SQL_SUCCESS_WITH_INFO and SQLError returns:

szSqlState = "01S02", *pfNativeError = 0,
szErrorMsg="[Microsoft][ODBC SQL Server Driver]Cursor type changed"

The application can determine what type of cursor is now being used by calling SQLGetStmtOption with fOption set to SQL_CURSOR_TYPE. The cursor type conversion applies to only one statement. The next SQLExecDirect or SQLExecute will be done using the original statement cursor settings.

Both SQL Server 6.0 and 6.5 have the following restrictions:

SQL Server 6.0 has the following additional restrictions:

SQLExecDirect vs. SQLPrepare/SQLExecute

This section discusses when to use SQLExecDirect and when to use SQLPrepare/SQLExecute.

Driver Implementation Overview

ODBC offers two options for executing a statement. If a statement is only executed once or twice, the application can use SQLExecDirect to execute the statement. The ODBC definition of SQLExecDirect states that the database engine parses the SQL statement, compiles an execution plan, executes the plan, and then returns results to the application.

If an application executes the same statement many times, then the overhead of having the engine compile the plan every time degrades performance. An application in this case can call SQLPrepare once and then call SQLExecute each time it executes the statement. The ODBC definition of SQLPrepare states that the database engine just parses the statement and compiles an execution plan, then returns control to the application. On SQLExecute, the engine simply executes the precompiled execution plan and returns the results to the client, thereby saving the overhead of parsing and recompiling the execution plan.

SQL Server itself does not directly support the SQLPrepare/SQLExecute model, but the SQL Server ODBC driver can use stored procedures to emulate this behavior. On a SQLPrepare, the driver asks the server to create a stored procedure that contains the SQL statement from the application. On SQLExecute, the driver executes the created stored procedure. The ODBC driver uses stored procedures to support SQLPrepare/SQLExecute when the option is enabled either in the data source or the SQLDriverConnect keywords. For example, if an application calls:

SQLPrepare(hstmt, "select * from authors", SQL_NTS);

The driver sends a statement to the server:

CREATE PROCEDURE #ODBC#nnnnnnnn as
SELECT * FROM authors

When the application then does:

SQLExecute(hstmt);

The driver sends a remote stored procedure call to have the server run the #ODBC#nnnnnnnn procedure.

Because a CREATE PROCEDURE statement essentially compiles an SQL statement into an execution plan, and an EXECUTE statement simply executes the precompiled plan, this meets the criteria for the SQLPrepare/SQLExecute model.

Excess or inappropriate use of SQLPrepare/SQLExecute degrades an application's performance. SQL Server applications should only use SQLPrepare/SQLExecute if they plan to execute a statement more than 3 to 5 times. If an application needs to execute a statement only once, using SQLPrepare/SQLExecute generates two roundtrips to the server: one to create the stored procedure and another to execute it. SQLExecDirect would only use one roundtrip and would also save the overhead of creating and storing a stored procedure. Excess use of SQLPrepare can also cause locking contention in the system tables in tempdb as concurrent users continually try to create the stored procedures to support SQLPrepare.

You may think that applications must use SQLPrepare/SQLExecute to use parameter markers, even if the application will only execute the statement once or twice. This is not true; applications can use parameter markers with SQLExecDirect by calling SQLBindParameter before SQLExecDirect.

If an application will be run by many concurrent users and the users will all be using the same SQL statement, the best approach is to create the SQL statement as a permanent, parameterized stored procedure and execute it with SQLExecDirect. Having many users concurrently issue SQLPrepare commands can create a concurrency problem on the system tables in tempdb. Even if each user is executing exactly the same statement, the SQL Server ODBC driver on each client is creating its own copy of a temporary stored procedure in tempdb. If the SQL statement is created as a parameterized stored procedure, however, the procedure is created only once. Each ODBC application does not have to create a new procedure for its exclusive use; it simply uses a copy of the permanent procedure's execution plan from the procedure cache.

When used in the appropriate circumstances (to execute a single statement several times), SQLPrepare/SQLExecute can provide significant performance savings.

Impact on Tempdb

SQL Server 6.0 introduced temporary stored procedures, which are identified by having a number sign (#) as the first character in the name. These procedures operate like temporary tables and are automatically dropped by the server if the connection is broken. The SQL Server ODBC driver now creates the procedures that support SQLPrepare as temporary procedures. This makes it impossible for the ODBC-related stored procedures to build up as a result of broken network connections or client computer failures. However, the temporary stored procedures are always created in tempdb. This means that sites running SQL Server 6.0 or 6.5 with ODBC applications that use SQLPrepare must ensure that tempdb is large enough to hold the temporary procedures generated to support SQLPrepare.

There is another factor to consider in relation to how many stored procedures exist in tempdb. ODBC applications call SQLSetConnectOption with fOption set to the driver-specific value SQL_USE_PROCEDURE_FOR_PREPARE and vParam set to either SQL_UP_OFF, SQL_UP_ON, or SQL_UP_ON_DROP to control the generation of temporary procedures.

SQL_UP_ON is the default setting. The driver can reuse procedures if an application re-prepares the same SQL statement, and most applications realize a performance boost because the driver does not have to continually re-create stored procedures. This may result in a buildup of #ODBC procedures in tempdb, however, from applications that never disconnect or applications that make heavy use of SQLPrepare. These applications should set SQL_UP_ON_DROP by calling SQLSetConnectOption. Starting with the driver that shipped in SQL Server 6.5 SP2, SQL_UP_ON_DROP is now an option that can be specified on data sources for the SQL Server ODBC driver.

Other Considerations of SQLPrepare

To keep from having to hold locks on the tempdb system tables for the length of a user transaction, the SQL Server ODBC driver does not generate a stored procedure for SQLPrepare if it is called within a transaction. The exception to this is when the SQLPrepare is the first statement in the transaction. In this case, the driver generates a stored procedure but then immediately commits the CREATE PROCEDURE statement.

The driver does not generate a stored procedure for a SQLPrepare that uses the ODBC CALL escape clause to call a stored procedure. On SQLExecute, the driver executes the called stored procedure (there is no need to create a temporary stored procedure).

Calling either SQLDescribeCol or SQLDescribeParam before calling SQLExecute generates an extra roundtrip to the server. On SQLDescribeCol, the driver removes the WHERE clause from the query and sends it to the server with SET FMTONLY ON to get the description of the columns in the first result set returned by the query. On SQLDescribeParam, the driver calls the server to get a description of the columns in the tables referenced by the query. This method also has some restrictions, such as not being able to resolve parameters in subqueries.

Stored Procedures

This section discusses issues related to executing stored procedures using the SQL Server ODBC driver.

ODBC Call vs. Transact-SQL EXECUTE

Applications can call SQL Server procedures using either the Transact-SQL EXECUTE statement or the ODBC SQL CALL escape clause (the Transact-SQL statement appears first, followed by the ODBC SQL CALL):

SQLExecDirect(hstmt, "EXECUTE sp_helpdb 'pubs' ", SQL_NTS);
SQLExecDirect(hstmt, "{ call sp_helpdb ('pubs') }", SQL_NTS);

Using the ODBC syntax is recommended. The ODBC syntax, in addition to being more portable, offers improved features and performance over the EXECUTE statement.

The SQL Server TDS protocol provides two methods of sending a procedure to the server: the procedure can be sent to the server as a regular SQL statement, or it can be sent as a TDS Remote Procedure Call (RPC).

The TDS RPC syntax was originally defined for use by servers when one server is asked to execute a remote stored procedure on another server, but it can also be used by applications. Using the TDS RPC syntax means neither the driver nor the server need to perform any parameter conversions. This improves performance, especially for image parameters. The SQL Server ODBC driver uses the TDS RPC syntax if the application uses the ODBC CALL escape clause; it uses the regular SQL statement syntax if the application uses the Transact-SQL EXECUTE statement.

Using the ODBC CALL escape clause also allows the application to retrieve output parameters and return codes from a stored procedure. Output parameter and return code processing is discussed below.

Output Parameters and Return Codes

SQL Server stored procedures can return both output parameters and return codes to an application:

CREATE PROCEDURE odbcproc @oparm int OUTPUT AS
SELECT name FROM sysusers WHERE uid < 2
SELECT @oparm = 88
RETURN 99
GO

The parameters and return codes can be bound to program variables in an ODBC application where the application can reference them. For example, to execute the procedure above using the ODBC CALL escape clause and bind the return code and output parameters to program variables:

SQLRETURN   rcd;
DWORD    ProcRet = 0, OParm = 0;
long     cbProcRet = 0, cbOParm = 0;
// Bind the return code.
rcd = SQLBindParameter(hstmt, 1, SQL_PARAM_OUTPUT,
      SQL_C_SLONG, SQL_INTEGER, 0, 0, &ProcRet, 0, &cbProcRet);
// Bind the output parameter.
rcd = SQLBindParameter(hstmt, 2, SQL_PARAM_OUTPUT,
      SQL_C_SLONG, SQL_INTEGER, 0, 0, &OParm, 0, &cbOParm);
// First ? marks the return code,
// second ? marks the output parameter.
rcd = SQLExecDirect(hstmt, "{? = call odbcproc(?)}", SQL_NTS);

SQL Server does not send back the values for the return code or output parameters until the end of all result sets for the procedure. The program variables ProcRet and OParm do not hold the output values of 99 and 88 until SQLMoreResults returns SQL_NO_DATA_FOUND.
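Putting this together, the application must drain every result set before it reads the bound variables. A minimal sketch of that loop, assuming the bindings shown above and omitting error handling, looks like this:

```c
// Drain every result set; the output values are not valid until
// SQLMoreResults returns SQL_NO_DATA_FOUND.
do
{
    while (SQLFetch(hstmt) == SQL_SUCCESS)
    {
        // Process the current row.
    }
} while (SQLMoreResults(hstmt) != SQL_NO_DATA_FOUND);

// Only now do ProcRet (99) and OParm (88) hold the returned values.
```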

Text and Image Data

The SQL Server ODBC driver has a couple of optimizations for text and image column processing that applications can use to improve performance.

Bound vs. Unbound Text and Image Columns

When using server cursors (see "Cursors"), the driver is optimized to not transmit the data for unbound text or image columns at the time the row is fetched. The text or image data is not actually retrieved from the server until the application issues SQLGetData for the column.

Applications can take advantage of this optimization by leaving text and image columns unbound and displaying no text or image data while the user scrolls up and down the cursor. After the user selects a row, the application can call SQLGetData to retrieve the text or image data for that row. This avoids transmitting the text or image data for the rows the user never selects, which can save the transmission of very large amounts of data.
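As a sketch of this pattern (the column numbers, buffer names, and fetch call are illustrative, and error checking is omitted), the application binds only the small columns, scrolls freely, and pulls the image down in chunks for the selected row:

```c
// Bind the small columns; the image column (column 3) is left unbound.
SQLBindCol(hstmt, 1, SQL_C_CHAR, szId, sizeof(szId), &cbId);
SQLBindCol(hstmt, 2, SQL_C_CHAR, szName, sizeof(szName), &cbName);

// Scrolling fetches move the server cursor without transmitting image data.
SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 1, &crow, rgfRowStatus);

// When the user selects this row, retrieve the image data in parts.
while (SQLGetData(hstmt, 3, SQL_C_BINARY, buf, sizeof(buf), &cbImage)
       == SQL_SUCCESS_WITH_INFO)
{
    // Process this chunk of the image; loop until all data is retrieved.
}
```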

Logged vs. Nonlogged

An application can request that the driver not log text and image modifications:

SQLSetStmtOption(hstmt,
                 SQL_SOPT_SS_TEXTPTR_LOGGING,
                 SQL_TL_OFF);

This option should only be used for situations where the text or image data is not critical, and the data owners are willing to trade data recovery for higher performance.

Data-At-Execution and Text and Image Columns

ODBC Data-At-Execution allows applications to work with extremely large amounts of data on bound columns or parameters. When retrieving very large text or image columns, an application cannot simply allocate a huge buffer, bind the column into the buffer, and fetch the row. When updating very large text or image columns, the application cannot simply allocate a huge buffer, bind it to a parameter marker in an SQL statement, and then execute the statement. Whenever the size of the text or image data exceeds 400K (64K with SQL Server 4.21a), the application must use SQLGetData or SQLPutData with their Data-At-Execution options. Applications should always use Data-At-Execution if there is any possibility that the size of the data will exceed these limits.

Data-At-Execution is described in the ODBC 2.0 Programmer's Reference; however, it remains one of the hardest parts of the ODBC API for an application programmer to learn. The Appendix of this paper contains the source code of two Win32 console applications, Getimage.c and Putimage.c, that illustrate using Data-At-Execution to read and write large amounts of image data. Text columns use similar calls; the only difference is binding between SQL_C_CHAR and SQL_LONGVARCHAR instead of SQL_C_BINARY and SQL_LONGVARBINARY. Programmers interested in working with text or image columns should look up the Data-At-Execution index entries in the ODBC 2.0 Programmer's Reference, then search for "text" and "image" in Microsoft SQL Server Programming ODBC for SQL Server.
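As a rough sketch of the write side (variable names are illustrative and error handling is omitted), a Data-At-Execution parameter is bound with a SQL_LEN_DATA_AT_EXEC length, execution pauses with SQL_NEED_DATA, and the application supplies the data in chunks through SQLPutData:

```c
// Bind the image parameter as Data-At-Execution.
cbImage = SQL_LEN_DATA_AT_EXEC(cbTotalSize);
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_BINARY,
      SQL_LONGVARBINARY, cbTotalSize, 0, (PTR)1, 0, &cbImage);

// Execution pauses, asking the application for the parameter data.
if (SQLExecDirect(hstmt,
      "UPDATE pub_info SET logo = ? WHERE pub_id = '0736'",
      SQL_NTS) == SQL_NEED_DATA)
{
    PTR pToken;
    SQLParamData(hstmt, &pToken);      // Identifies which parameter needs data.
    while (/* more chunks of the image remain */)
        SQLPutData(hstmt, buf, cbChunk);
    SQLParamData(hstmt, &pToken);      // Signals end of data; statement runs.
}
```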

Querying Metadata

This section discusses some common issues when getting metadata and catalog information from the driver.

Instcat.sql

Both the SQL Server system catalog stored procedures and the ODBC API catalog functions address the need of applications to retrieve catalog information from a database. Because there is a high correlation between the ODBC catalog functions and the SQL Server catalog stored procedures, the SQL Server ODBC driver implements many of the ODBC API catalog functions as calls to a corresponding SQL Server catalog procedure. The driver is therefore dependent on the catalog stored procedures in any SQL Server to which it connects.

Each version of the SQL Server ODBC driver is developed in conjunction with a specific version of SQL Server. The proper operation of each driver version requires the versions of the catalog stored procedures associated with the specific version of SQL Server with which the driver was developed, or a later version of the procedures. For example, the 2.50.0121 driver was developed in conjunction with Microsoft SQL Server version 6.0, and requires either the versions of the system catalog stored procedures that were released with SQL Server 6.0, or with later versions, such as 6.5. The driver does not work properly with older versions of the catalog stored procedures, such as those in SQL Server version 4.21a.

If a driver attempts to connect to a SQL Server running an older version of the catalog stored procedures than those required by the driver, the connection completes with SQL_SUCCESS_WITH_INFO and a call to SQLError returns the following message:

SqlState:   01000
pfNative:   0
szErrorMsg: "[Microsoft][ODBC SQL Server Driver]The ODBC
            catalog stored procedures installed on server
            My421Server are version 02.00.4127; version 06.00.0115
            or later is required to ensure proper operation.
            Please contact your system administrator."

Although the connection is successful, the application may later encounter errors on calls to the ODBC API catalog functions.
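An application can surface this warning at connect time by checking for SQL_SUCCESS_WITH_INFO and draining SQLError. A sketch using ODBC 2.x calls, with connection setup and error handling abbreviated:

```c
rcd = SQLConnect(hdbc, szDSN, SQL_NTS, szUID, SQL_NTS, szPWD, SQL_NTS);
if (rcd == SQL_SUCCESS_WITH_INFO)
{
    UCHAR  szState[6], szMsg[SQL_MAX_MESSAGE_LENGTH];
    SDWORD fNative;
    SWORD  cbMsg;
    // Retrieve all messages queued on the connection, including the
    // 01000 catalog stored procedure version warning shown above.
    while (SQLError(henv, hdbc, SQL_NULL_HSTMT, szState, &fNative,
                    szMsg, sizeof(szMsg), &cbMsg) == SQL_SUCCESS)
        printf("%s: %s\n", szState, szMsg);
}
```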

Sites running multiple versions of the driver against a server need to ensure that the server is running with at least the version of Instcat.sql associated with the newest ODBC driver that will connect to it. For example, a site running multiple version 6.0 servers could buy SQL Server version 6.5 and upgrade some clients to use the new 2.65.0201 driver that comes with version 6.5. The site would also need to run the 6.5 version of Instcat.sql against the 6.0 servers before the new driver can connect to them.

Installing a newer version of Instcat.sql into an older server does not break any existing applications connecting to that server, even ones still using the old drivers. It simply allows the applications using the new driver to operate correctly.

Sites should run the Instcat.sql script at the server command prompt by using the isql utility. For example:

C:\>cd \Mssql\Install
isql /Usa /Ppassword /Sservername /iInstcat.sql /oInstcat.rpt

For more information about determining the version of Instcat.sql currently applied to a server, see Microsoft Knowledge Base article Q137636. For more information about the isql utility, see the Microsoft SQL Server Transact-SQL Reference.

Multiple Active Statements per Connection

Starting with SQL Server 6.5 and its associated driver, users can have multiple outstanding calls for metadata. In SQL Server 6.5, the catalog procedures underlying the ODBC catalog API implementations can be called by the ODBC driver while it is using static server cursors. This allows applications to concurrently process multiple calls to the ODBC catalog functions.

Caching Metadata

If an application uses a particular set of metadata more than once, it will probably benefit from caching the information in private variables when it is first obtained. This eliminates the overhead of later calls to the ODBC catalog functions for the same information, each of which forces the driver to make roundtrips to the server.
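The caching pattern itself is independent of ODBC. The following sketch uses a simulated SQLColumns roundtrip and hypothetical names to show the idea: the first request for a table's metadata populates a private cache, and later requests are served locally without touching the server.

```c
#include <string.h>

#define MAX_TABLES 8
#define MAX_NAME   64

/* Cached metadata for one table, e.g. gathered from SQLColumns. */
typedef struct {
    char table[MAX_NAME];
    int  column_count;
} TableMeta;

static TableMeta cache[MAX_TABLES];
static int cache_size = 0;
int fetch_count = 0;   /* Counts simulated server roundtrips. */

/* Stand-in for a real SQLColumns roundtrip to the server. */
static int fetch_column_count_from_server(const char *table)
{
    fetch_count++;
    return strcmp(table, "authors") == 0 ? 9 : 6;  /* Dummy values. */
}

/* Return the column count, hitting the server only on a cache miss. */
int get_column_count(const char *table)
{
    int i, n;
    for (i = 0; i < cache_size; i++)
        if (strcmp(cache[i].table, table) == 0)
            return cache[i].column_count;   /* Cache hit: no roundtrip. */

    n = fetch_column_count_from_server(table);
    if (cache_size < MAX_TABLES) {
        strncpy(cache[cache_size].table, table, MAX_NAME - 1);
        cache[cache_size].table[MAX_NAME - 1] = '\0';
        cache[cache_size].column_count = n;
        cache_size++;
    }
    return n;
}
```

Repeated lookups for the same table then cost only a local string comparison, not a network roundtrip.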