Microsoft SQL Server offers server cursors to address the need for incremental fetching of result sets across the network. An application requests a server cursor simply by calling SQLSetStmtOption to set the SQL_CURSOR_TYPE statement option before the SELECT statement is executed.
When a SELECT statement is executed as a server cursor, only a cursor identifier is returned by the EXECUTE statement. Subsequent fetch requests pass the cursor identifier back to the server along with a parameter specifying the number of rows to fetch at once. The server returns the number of rows requested.
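The fragment below is a minimal sketch of that flow, assuming an already-allocated statement handle on an open connection; the authors table, its columns, and the buffer sizes are illustrative, and error checking is largely omitted. Setting SQL_CURSOR_TYPE before execution requests the server cursor, the execution returns only the cursor identifier, and each subsequent SQLExtendedFetch call is a separate round trip that brings back one rowset.

    // Request a server cursor, execute the SELECT, then fetch across the
    // network one rowset (10 rows here) at a time.
    #include <windows.h>
    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    void FetchThroughServerCursor(HSTMT hstmt)
    {
        // Any non-default cursor type makes the driver use a server cursor.
        SQLSetStmtOption(hstmt, SQL_CURSOR_TYPE, SQL_CURSOR_KEYSET_DRIVEN);
        SQLSetStmtOption(hstmt, SQL_ROWSET_SIZE, 10);

        // Only the cursor is opened here; no rows cross the network yet.
        SQLExecDirect(hstmt, (UCHAR *) "SELECT au_id, au_lname FROM authors", SQL_NTS);

        // Column-wise bindings: one array element per row in the rowset.
        UCHAR  szId[10][12], szName[10][41];
        SQLLEN cbId[10], cbName[10];
        SQLBindCol(hstmt, 1, SQL_C_CHAR, szId,   sizeof(szId[0]),   cbId);
        SQLBindCol(hstmt, 2, SQL_C_CHAR, szName, sizeof(szName[0]), cbName);

        SQLULEN      crow;           // rows actually returned in this rowset
        SQLUSMALLINT rgfStatus[10];  // per-row status
        RETCODE      rc;

        // Each SQLExtendedFetch call passes the cursor identifier back to the
        // server and returns up to 10 rows in one round trip.
        while ((rc = SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 1, &crow, rgfStatus)) == SQL_SUCCESS
               || rc == SQL_SUCCESS_WITH_INFO)
        {
            for (SQLULEN i = 0; i < crow; i++)
                printf("%s  %s\n", (char *) szId[i], (char *) szName[i]);
        }

        SQLFreeStmt(hstmt, SQL_CLOSE);  // closes the server cursor
    }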
Between fetch requests, the connection remains free to issue other commands, including other cursor OPEN or FETCH requests. In ODBC terms, this means that server cursors allow the SQL Server driver to support multiple active statements on a single connection.
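As a sketch of what this makes possible, the fragment below (illustrative table names, no error checking, hdbc assumed to be an open connection handle) opens two server cursors on the same connection and interleaves fetches against them. With default result sets, the second SQLExecDirect would fail because the connection would still be busy with the pending rows of the first statement.

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    void InterleaveTwoCursors(HDBC hdbc)
    {
        HSTMT hstmt1, hstmt2;
        SQLAllocStmt(hdbc, &hstmt1);
        SQLAllocStmt(hdbc, &hstmt2);

        // Both statements use server cursors, so neither ties up the connection.
        SQLSetStmtOption(hstmt1, SQL_CURSOR_TYPE, SQL_CURSOR_KEYSET_DRIVEN);
        SQLSetStmtOption(hstmt2, SQL_CURSOR_TYPE, SQL_CURSOR_KEYSET_DRIVEN);

        SQLExecDirect(hstmt1, (UCHAR *) "SELECT name FROM students", SQL_NTS);
        SQLExecDirect(hstmt2, (UCHAR *) "SELECT name FROM teachers", SQL_NTS);

        // Fetches can be interleaved freely; each call is an independent
        // request that passes its own cursor identifier back to the server.
        SQLFetch(hstmt1);
        SQLFetch(hstmt2);
        SQLFetch(hstmt1);

        SQLFreeStmt(hstmt1, SQL_DROP);
        SQLFreeStmt(hstmt2, SQL_DROP);
    }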
Furthermore, server cursors do not usually hold locks between fetch requests, so you are free to pause between fetches for user input without affecting other users. Server cursors can be updated in place using either optimistic conflict detection or pessimistic scroll locking concurrency options.
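The following fragment is a rough sketch of a positioned update through a server cursor. It requests optimistic conflict detection based on row versions (SQL_CONCUR_ROWVER) and updates the current row with SQLSetPos; the authors table, its phone column, and the buffer sizes are illustrative, and error checking is omitted.

    #include <windows.h>
    #include <string.h>
    #include <sql.h>
    #include <sqlext.h>

    void UpdateCurrentRow(HSTMT hstmt)
    {
        SQLSetStmtOption(hstmt, SQL_CURSOR_TYPE, SQL_CURSOR_KEYSET_DRIVEN);
        // Optimistic conflict detection; SQL_CONCUR_VALUES (compare values) is
        // the other optimistic option, and SQL_CONCUR_LOCK requests scroll locks.
        SQLSetStmtOption(hstmt, SQL_CONCURRENCY, SQL_CONCUR_ROWVER);

        SQLExecDirect(hstmt, (UCHAR *) "SELECT au_id, phone FROM authors", SQL_NTS);

        UCHAR  szPhone[13];
        SQLLEN cbPhone;
        SQLBindCol(hstmt, 2, SQL_C_CHAR, szPhone, sizeof(szPhone), &cbPhone);

        SQLULEN      crow;
        SQLUSMALLINT rgfStatus[1];
        SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 1, &crow, rgfStatus); // position on row 1

        strcpy((char *) szPhone, "408 555-0123");   // change the bound buffer
        cbPhone = SQL_NTS;

        // Send the new value for the current row back through the cursor. With
        // optimistic concurrency this fails if another user changed the row
        // after it was fetched; no locks were held in the meantime.
        SQLSetPos(hstmt, 1, SQL_UPDATE, SQL_LOCK_NO_CHANGE);

        SQLFreeStmt(hstmt, SQL_CLOSE);
    }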
While these features make programming with server cursors more familiar to Oracle developers than using default result sets, they are not free. Compared to a default result set, which streams all of its rows back in a single stream, a server cursor consumes resources on the server for as long as it remains open and requires a separate network round trip for each block of rows fetched, so overall throughput is lower.
For these reasons, it is wise to limit the use of server cursors to those parts of your application that need their features. An example that illustrates the use of server cursors can be found in the LIST_STUDENTS function in the Ssdemo.cpp sample SQL Server ODBC program file.