The SQL Server transaction log can become full, preventing further update, delete, or insert activity in the database, including checkpoints. This is usually seen as error 1105:
Can't allocate space for object syslogs in database dbname because the logsegment is full. If you ran out of space in syslogs, dump the transaction log. Otherwise use ALTER DATABASE or sp_extendsegment to increase the size of the segment.
This can happen on any database, including master or tempdb.
The space requirements for logging can be considerable. For example, in most cases the before and after images of each updated data row must be recorded, plus those of any affected index rows. Because a fixed amount of transaction record overhead is also recorded for each logged row, the ratio of updated data to log space consumed varies with the row width. For a narrow row, the log space consumed by a particular update, delete, or insert could be ten times the data space consumed; for wider rows, it is proportionately less. Log space consumption is an unavoidable consequence of providing transactional integrity, and the administrator must provide sufficient log space for the particular installation.
The amount of log space required can vary depending on many factors and is very difficult to accurately predict beforehand. While general rule-of-thumb figures, such as 10 to 25 percent of the database size, are sometimes mentioned as a starting point for sizing the log, in actuality this varies widely. Successful SQL Server installations often do some simple empirical tests to roughly assess the log space requirements for their particular data and applications and then size their log based on this. Attempting to size the log based solely on calculations and without tests is difficult and often inaccurate.
Several difficult-to-predict factors can account for variation in log space consumption. One example is the query optimizer: for a given SQL data modification statement, the access plan can vary over time with the statistical distribution of the data, and different access plans can consume different amounts of log space. Another example is inevitable internal database fragmentation, which can affect the number of page splits performed. Nothing can or should be done to examine or influence this process, since SQL Server manages data placement for the user automatically.
A simple test is to run DBCC CHECKTABLE(syslogs), which returns the number of 2048-byte data pages in the log, before and after executing a representative sample of your data modification queries. The difference gives an approximate idea of the log space required for these types of queries. It is usually best to err on the side of excess when providing either log or data disk space for relational databases such as SQL Server.
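For example, a minimal sketch of such a test might look like the following, where the UPDATE stands in for a representative workload (the orders table and its columns are hypothetical):

   DBCC CHECKTABLE(syslogs)   -- note the number of data pages reported
   UPDATE orders SET status = 'shipped' WHERE order_date < '19960101'
   DBCC CHECKTABLE(syslogs)   -- the difference approximates the log pages consumed

Multiplying the difference in page counts by 2048 bytes gives a rough figure for the log space the workload consumed.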
Even with the Truncate Log On Checkpoint option enabled, a number of factors can cause log space exhaustion. These are as follows:
Each single SQL statement is considered an atomic unit that must be applied or not applied in its entirety. For this reason, all row alterations must be logged, and the log records generated by a transaction cannot be truncated while it is active. For example, if a large bulk INSERT runs for five minutes, the log space consumed by this transaction cannot be truncated during that period. The database administrator must provide sufficient log space for the largest bulk operation expected, or must perform the bulk operation in smaller groups, as sketched below.
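For example, a large purge could be broken into smaller groups along these lines. This is a minimal sketch; the table and column names are hypothetical, and the batch size would be tuned to the installation:

   SET ROWCOUNT 1000              -- limit each DELETE to 1000 rows
   WHILE 1 = 1
   BEGIN
       DELETE FROM archive_rows WHERE purge_flag = 1
       IF @@ROWCOUNT = 0 BREAK    -- no rows left to delete
       CHECKPOINT                 -- with Truncate Log On Checkpoint on, lets log space be reclaimed
   END
   SET ROWCOUNT 0                 -- restore the default

Because each DELETE commits separately, the log can be truncated between batches rather than having to hold the entire purge.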
The log can only be truncated prior to the oldest uncommitted transaction. There are several possible causes of an uncommitted transaction, most of which are application errors. Some of these are:
As discussed previously, the log records generated by a large bulk transaction cannot be truncated for its duration. However, such a transaction also prevents truncation of the log records of other, shorter transactions that do commit during the same period.
For example, say the Database Administrator has sized the log such that it is sufficient for the largest envisioned bulk transaction. Yet while this transaction runs, other shorter data modification statements may also be consuming log space. This log space cannot be truncated, since the large bulk transaction started first and hence becomes the oldest uncommitted transaction. The administrator must be aware of the concurrency and log impact of a large bulk transaction, and size the log appropriately.
For example, after issuing a BEGIN TRANSACTION, an application might prompt the user for input, which can take an arbitrarily long time depending on user behavior. Until the user responds and the application issues a commit, log truncation will not be possible.
A common cause of this is incorrect handling of the DB-Library call dbcancel() (or ODBC SQLCancel()) within a user-defined transaction. When a query is canceled with dbcancel() or SQLCancel(), the currently executing SQL statement is aborted and rolled back, but the outer transaction is not. The application must be aware of this and issue the necessary ROLLBACK TRANSACTION or COMMIT TRANSACTION statement to close the transaction. Failure to do so often results in error 3902:
The commit transaction has no corresponding BEGIN TRANSACTION.
It may be useful for the application to send SELECT @@TRANCOUNT to determine the current transaction nesting level. However, the application should not blindly do this and then issue COMMIT or ROLLBACK statements until @@TRANCOUNT reaches 0. If @@TRANCOUNT ever differs from what the application expects, the application has lost track of the transaction nesting level, which is an application design error. Issuing COMMIT or ROLLBACK at this point could apply or abort unintended transactions, since the application does not know which transactions produced the unintended nesting level. Instead, the programmer should debug the application and any stored procedures involved to determine the cause of the unintended transaction level.
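The following Transact-SQL sketch illustrates the cleanup the application must perform. The cancel itself would come from dbcancel() or SQLCancel() on the client, and the orders table is hypothetical:

   BEGIN TRANSACTION
   UPDATE orders SET status = 'void' WHERE status = 'open'
   -- If the UPDATE is canceled, only the statement is rolled back;
   -- the transaction remains open until the application closes it:
   IF @@TRANCOUNT > 0
       ROLLBACK TRANSACTION

In this sketch the application knows it opened exactly one transaction, so closing it explicitly is deliberate rather than a blind attempt to drive @@TRANCOUNT to zero.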
You can use DBCC OPENTRAN to obtain information about the oldest active transaction in a database's log. For more information, see the Microsoft SQL Server Transact-SQL Reference.
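For example, to report the oldest active transaction in the pubs sample database:

   DBCC OPENTRAN (pubs)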
In a multiuser environment, it is possible for an open transaction to become blocked on locks held by another process. In this case, the transaction will nevertheless remain open, preventing log truncation. To detect this, the programmer or database administrator will need to use sp_who, sp_lock, the Current Activity window of SQL Enterprise Manager, or other tools to analyze the concurrency environment. In most cases, blocking problems can be reduced or eliminated through proper query, index, and database design.
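For example, from a query window:

   EXEC sp_who    -- the blk column shows the spid each blocked process is waiting on
   EXEC sp_lock   -- lists the locks currently held by each spid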
If the client application issues a dbcancel() (or ODBC SQLCancel()) and the query is not canceled, due to either a network problem or a SQL Server problem, the query will continue to run and the transaction will remain open. If you suspect a problem here, use sp_who to see whether the query was canceled. If attempting the cancel from a TCP/IP sockets client, try the test from a named pipes client, or run the client application on the server computer using local pipes. This will help discern whether a network problem or a SQL Server problem is preventing the cancel.
If you suspect a problem, you can also use SQL Enterprise Manager or the KILL statement to terminate the user process. Sometimes you may also be able to correct the problem by restarting the client computer and re-establishing the connection to SQL Server.
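For example, after identifying the stuck process with sp_who (the spid shown here is hypothetical):

   KILL 12    -- terminate the process with spid 12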
Although the checkpoint handler attempts log truncation about every 60 seconds, the rate at which truncation takes place is finite. This scenario is uncommon, and the other possible causes of log overflow should be considered and ruled out before investigating this possibility. However, it is possible to exceed the maximum truncation rate if many clients simultaneously issue large updates. (This is similar to a funnel, which can drain fluid only at a certain rate and can be overfilled even while draining.) In this scenario, the application can be restructured to reduce the number of rows being updated, which should always be a primary design goal for any relational database.
If this is not feasible, the system can be reconfigured for increased disk I/O bandwidth through striping, additional controllers, and so on. (For more information, see Chapter 7, Managing Drives.) In this case it is common to see the checkpoint handler process spend increasing amounts of time in the DUMP TRANSACTION state as it attempts to keep up with log truncation. Once the truncation threshold is exceeded (as described in the following text), the checkpoint handler may never attempt truncation in that database until the log is cleared.
The checkpoint handler essentially does a DUMP TRANSACTION WITH TRUNCATE_ONLY. Just as when this statement is issued manually, it will not always succeed if the log is already full past a certain point. For example, a burst of update activity could fill the log to, say, 95% between visits by the checkpoint handler. Although the log is not completely full when the checkpoint handler attempts truncation, it may be too full to allow truncation, because the truncation of the log must itself be logged. The only solution in this case is to use DUMP TRANSACTION WITH NO_LOG to manually truncate the log.
Using the NO_LOG option is not recommended except when absolutely necessary, since it is a nonlogged operation during which system failure could introduce database errors.
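If manual truncation does become necessary, the statement takes this form (shown here against the pubs sample database):

   DUMP TRANSACTION pubs WITH NO_LOG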
For example, under normal conditions in an update-intensive environment, the checkpoint handler truncation rate may keep the log from filling up. If a temporarily open transaction caused by any of the previously described conditions (such as lock contention) causes the log to fill to, say, 50%, there will be much less headroom for handling other update situations. This makes it much more likely to reach the truncation threshold, at which point automatic truncation will not be possible.
Transactions in tempdb are logged just as in any other database. Since Truncate Log On Checkpoint is on in tempdb, in most cases the log will be truncated and will not overflow. However, any of the previously described circumstances can cause the tempdb log to fill.
Usually, tempdb is configured for mixed log and data (sysusages.segmap = 7), so data and log operations contend for the same available space. Certain Transact-SQL constructs, such as GROUP BY and ORDER BY DESC, automatically require tempdb for work space, which also causes an implicit BEGIN TRANSACTION record in tempdb. This tempdb transaction continues for the duration of the transaction in the user database, which can defer tempdb log truncation for that period. If the transaction in the user database is halted for any reason, including a blocking lock or the application not processing dbnextrow() to completion, the transaction in tempdb is likewise left open, preventing tempdb log truncation. The programmer must debug the application and resolve the concurrency issues that caused this.
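You can verify the tempdb configuration with a query such as the following; a segmap value of 7 indicates that the system, default, and log segments share the same space:

   SELECT segmap
   FROM master..sysusages
   WHERE dbid = DB_ID('tempdb')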
Replication can also prevent log truncation: transactions that affect published tables cannot be truncated from the log until they have been read by the log reader process, so a stopped or lagging log reader can cause the log to fill. For more information, see Part 6, Replication.
The following discusses log truncation behavior at SQL Server startup, depending on whether the Truncate Log On Checkpoint option is set.