Backup and Recovery Guidelines for Microsoft SQL Server

Gary Schroeder
SQL Server Group, Microsoft

Created: February 1994
Volume 4, Number 3

Abstract

Microsoft® SQL Server is a powerful, multiuser, relational database management system designed to support high-volume transaction processing, as well as less demanding decision-support applications. SQL Server provides data processing capabilities (reliability, data integrity, performance, security) that meet or exceed those found in production-oriented minicomputer- and/or mainframe-based database management systems. Because of its power and stability, Microsoft SQL Server is widely used in many diverse production environments.

This technical note provides guidelines for proper backup and recovery of SQL Server databases running in production environments. Much of the information discussed here is based on a compilation of common questions that Microsoft Product Support has received from SQL Server customers. Too often, customers have suffered hardware or other failures and only then realized that either they did not have backups, or their backups were done improperly, resulting in unnecessary loss of valuable data. Because this technical note is not intended to replace the existing SQL Server documentation regarding backup and recovery, you should still thoroughly read and understand the chapters on backup and recovery in the Microsoft SQL Server System Administrator's Guide and the Microsoft SQL Administrator User's Guide.

Backup Strategies

Why Back Up?

Today's hardware and software components are many times more reliable than those of the not-so-distant past; however, they will never be perfect, and there is always the chance of some type of failure. It is therefore prudent to have some type of "safety net" in place at all times, so that if a situation arises that results in the loss or damage of online data, production can be restored quickly with minimal or no loss of data. When people think of situations that could result in the loss of data, they typically think of hard disk crashes. There are, however, other situations that can have the same end result but are not as often considered when planning for recovery. Some of these problems include the following:

- Hardware failures other than disk crashes
- Software or application errors
- User errors, such as an accidental misqualified update or delete statement
- Theft of the computer
- Fire or other damage to the building

As the remainder of this technical note will show, Microsoft SQL Server provides many fast and reliable ways to back up your data. To ensure that your data is safeguarded, you need to define a solid backup strategy.

Planning a Backup Strategy

You should have a solid backup plan in place before any application is moved from a development/test environment into a production environment. In addition, just as you would never put an untested application into production, you should never go into production without testing your backup strategy. It is always a good idea to include backup strategy planning as a key component of any project. In some cases, the backup strategy may even have a large impact on the design of the application. In planning the backup strategy for a given application, answer the following questions before launching the application into production:

- How much work can "acceptably" be lost in the event of a failure?
- What volume of transactions will the application generate?
- How often should the database and transaction logs be dumped, and at what times?
- Where will the backups be kept, and will copies be stored offsite?
- How quickly must production be restored after a failure?

This, of course, is not a complete list of all the questions you should think about in planning your backup strategy. You will need to answer these questions for every SQL Server environment, and there may be many other questions specific to your particular environment.

Where and When to Back Up

It is best to keep live data and backups on separate computers, or better yet, in different buildings. If databases and/or logs are dumped to physical disk devices, it is often optimal from a performance standpoint to dump them to local hard drives on the server, as opposed to drives on some other computer on the network. From that point, however, they should be copied to tape or to another server, so that data and backups are not on the same computer. For convenience, most people choose to have their backups close at hand so that they can get to them quickly if needed. If this is done, keep another copy somewhere offsite (such as a bank safe deposit box) so that in the event of damage to the building, a backup of the data remains safe.

The frequency and type of backups you do will generally depend on two factors: the "acceptable" amount of work that can be lost due to media or other failure, and the volume of transactions that occur on the SQL Server. For many sites, databases are dumped weekly and transaction logs are dumped daily. This can vary widely, however. For systems that have little update activity and are used primarily for decision support, weekly database dumps alone might suffice. For high-volume online transaction processing (OLTP) environments, databases might be dumped daily and the transaction logs dumped hourly. The strategy chosen should be the one that best fits your environment while providing adequate assurance that needed data can be recovered.
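As a minimal sketch, a weekly-database/daily-log schedule reduces to two statements. The database name sales and the dump devices DB_DUMP and LOG_DUMP are hypothetical; the devices would first be created with sp_addumpdevice (shown later in this note):

-- Weekly: full backup of the database (includes its transaction log)
DUMP DATABASE sales TO DB_DUMP

-- Daily: incremental backup of the transaction log
DUMP TRANSACTION sales TO LOG_DUMP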

If dumps are performed online, they should be scheduled for times when the server is not being heavily updated, because the dumps will slow SQL Server somewhat. In addition, the dumps should be issued on a fixed schedule. By using a fixed schedule, users will always know when the dump is occurring and can expect a slight delay in performance, or they can plan to do other non-SQL-Server–related work during that time.

As an added level of data protection, SQL Server provides facilities for mirroring database devices. With device mirroring enabled, data that is written to one database device is automatically duplicated on another, separate device. Should a media failure occur on either device, the damaged device is unmirrored and the nondamaged device automatically takes over writing all transactions. Depending on the number of hard disks and hard disk controllers present in the computer, it is possible to have a configuration in which any one disk can fail and SQL Server will continue to run uninterrupted. Combining device mirroring with a regular backup schedule can help ensure a high level of redundancy and minimize the risk of data loss.
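As a sketch, mirroring is established with the DISK MIRROR statement; the device name DATA1 and the physical path below are hypothetical:

-- Duplicate all writes to database device DATA1 onto a second physical disk
DISK MIRROR
    NAME = 'DATA1',
    MIRROR = 'D:\SQL\DATA\DATA1.MIR'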

Testing a Backup Strategy

Once a backup strategy has been designed, it should be thoroughly tested. In testing the strategy, as many failures as possible should be simulated so you can be sure there are no vulnerable areas that could jeopardize the recovery of data. For example, some customers have implemented highly redundant device and disk mirroring with multiple disks in a computer, so that should any one disk drive fail, they can continue operation uninterrupted. This is a good plan, but it does not eliminate the need to do backups. Asking the question, "Will this be sufficient in the event of theft of the computer?" would show that this is a vulnerable area.

Testing a backup strategy will also demonstrate how much time is required to restore databases from backups. If that amount of time is unacceptable, you may want to reevaluate the frequency with which databases and transaction logs are being dumped, or you may want to look at the feasibility of maintaining a "warm" backup server (discussed in detail later in this article).

Database Dumping and Loading

Managing Database Dump Files

SQL Server databases can be dumped to diskettes, hard disks, or tape. It is a good idea to keep not only the most current dump, but the previous n dumps as well (the value of n depends on what you judge to be necessary). To illustrate the need for this, assume that some failure occurred that resulted in your need to reload the database from a dump. If only the most current dump was kept and something destroyed that dump, you would have to attempt to reconstruct your data from scratch. Or say that a user accidentally issued a misqualified update or delete statement against one or more tables. This could result in the incorrect modification or loss of large amounts of data, which could go undetected for several days, weeks, or more. By keeping a succession of previous dumps, you have multiple "safety nets" in place for recovery.

If you dump your database or transaction logs to the same disk device each time, the current dump will overwrite the existing dump file, so it is best to either rename the dump file after the dump or copy it to another medium, such as tape, to preserve it. When dumping to tape devices using the OS/2® version of SQL Server, the dump will also overwrite anything existing on the tape, so it is a good idea to keep a "pool" of tapes and dump the current database to the tape containing the oldest dump. SQL Server for Windows NT™ allows you to specify whether the new dump should overwrite or be appended to the existing contents of the tape. The tapes can then be circulated for reuse in least-recently-used order. To help ensure that a new dump does not overwrite an existing dump too soon, SQL Server provides the "media retention" configuration parameter, which specifies the number of days that must pass before an existing dump can be overwritten by a new one; it can be set with the sp_configure system procedure.
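For instance, the one-time setup of a disk dump device and a seven-day retention period might look like the following sketch; the device name, path, and retention value are hypothetical:

-- Create a disk dump device (one-time setup)
EXEC sp_addumpdevice 'disk', 'DB_DUMP', 'C:\SQL\DUMPS\DBDUMP.DAT'

-- Prevent any dump from being overwritten for seven days
EXEC sp_configure 'media retention', 7
RECONFIGURE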

When a database is dumped, SQL Server dumps not only the data portion but also the transaction log for that database. The transaction log must be included in the dump so that when the database is later loaded, transactions that were in progress during the dump can be properly rolled back. Because the transaction log is dumped with the database, the resulting dump file may be larger than the data portion of the database. For example, if you have a 100-MB database with a 25-MB transaction log and both are nearly full, the dump file will be approximately 125 MB. Be sure to allow enough room on the dump media for both the data and log portions of the database being dumped.

Note that with the OS/2 version of SQL Server, dump files do not "shrink" to accommodate a smaller dump. For example, assume that you have a disk dump device with a logical name of DB_DUMP and a physical file name of C:\SQL\DUMPS\DBDUMP.DAT. Suppose you dump a database to that device, and the resulting DBDUMP.DAT file is 100 MB in size. If you then dump a 5-MB database to that same device, the DBDUMP.DAT file will still appear to be 100 MB in size. This is because SQL Server simply overwrites the existing dump file and writes an "end-of-dump" marker to show the end of the dumped data; the file size does not decrease. If you want to always know the exact size of the actual dump, delete or rename the existing dump file before dumping to it again. In this example, you could copy the 100-MB DBDUMP.DAT file to tape after making the first dump, and then delete DBDUMP.DAT from the hard disk. When you then dump the 5-MB database to that device, SQL Server creates a new DBDUMP.DAT file of 5 MB (or less, depending on the amount of data in the database). With SQL Server for Windows NT, this bookkeeping is unnecessary: the dump file is always sized to match the amount of data actually in the dump, so dumping a 5-MB database to the same device yields a DBDUMP.DAT file of 5 MB or less.

SQL Server for Windows NT includes an enhancement that increases the speed of database and transaction log dumping. On other platforms, the data is written to the dump device in 2K blocks. SQL Server for Windows NT writes the data to the dump device in 60K blocks, which can significantly reduce the time needed for backups.

Database Consistency Checking

Regardless of the frequency of database dumps, it is highly recommended that you always run DBCC CHECKDB, DBCC CHECKALLOC, and DBCC CHECKCATALOG on a database either just before or just after dumping it, to check the logical and physical consistency of the database. Because transactions can occur during or after the DBCC but before the dump, you may want to first dump the database and then run the DBCCs to ensure that the database was consistent at the time it was dumped. If a database or transaction log that contains errors is dumped, the errors will still exist when the database is reloaded. Under some conditions this can even prevent successful reloading. There are two important reasons to run DBCC statements on a database when no users are currently using the database. First, if users are actively updating the database while DBCC is running, it can report spurious errors, which can be misleading. Second, because DBCC performs numerous checks on the data pages in the database, it can have a noticeable effect on performance if users are trying to query the database while DBCC is running.
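A minimal consistency-check pass, using the pubs sample database for illustration, consists of the three statements named above:

-- Check the logical consistency of every table and index in the database
DBCC CHECKDB (pubs)

-- Check that all pages are correctly allocated and used
DBCC CHECKALLOC (pubs)

-- Check the consistency of the system tables
DBCC CHECKCATALOG (pubs)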

For databases with a large amount of data (for example, in excess of 5 GB), DBCC statements may take several hours to run, which can hamper the feasibility of running DBCCs on a production server. SQL Server for Windows NT addresses this by including the no_index option with DBCC statements. When using the no_index option (or when choosing the Fast checkbox in SQL Administrator), SQL Server checks only the consistency of the data pages, not the index pages, for each user table, which can dramatically reduce the time necessary to run DBCC. It is still advisable, however, to periodically run full DBCCs on the database to ensure the consistency of the indexes as well. To use this option, include the no_index keyword after the database or table name in the DBCC statement. For example, DBCC CHECKDB (pubs, no_index) and DBCC CHECKTABLE (inventory, no_index) perform all the consistency checks on the data pages but skip the checks on any indexes.

Another solution to running DBCCs on large amounts of production data is to have a backup server running. In this configuration, database and transaction logs can be dumped from the production server and loaded onto the backup server. The DBCCs can then be run on the backup server without negatively affecting the performance of the production server. This is also a good safety net in the event that a hardware or other failure causes the production server to become unavailable for a long period of time. In this case, the backup server can quickly become the new production server, resulting in very little disturbance to users. The rapidly declining cost of PC hardware makes this option much more feasible than it is on minicomputer or mainframe platforms. (Managing a backup server is discussed in detail later in this article.)

When dumping a database, you will occasionally receive SQL Server error 3004 stating,

xx uninitialized pages encountered while dumping database <db name>. Run DBCC CHECKALLOC on this database for more information, then call technical support if there are any errors.

This error occurs if users are actively updating the database while it is being dumped. The dump procedure takes a "snapshot" of the database in order to begin the dump. Because some transactions may have allocated pages but not yet initialized them at the time the "snapshot" was taken, the dump procedure raises this informational message simply as a precaution.

If you encounter this message, the proper procedure, as the error message states, is to run DBCC CHECKALLOC on that database, ensuring that no users are currently using the database while DBCC is executing. Under most circumstances, CHECKALLOC will not return any errors, and the previous 3004 error can safely be ignored. If CHECKALLOC does report other errors, however, you should contact your primary support provider for further instructions.

Loading Database Dumps

You need to consider several things when loading a database from a dump. First, the target database needs to be at least as large as the database that was dumped. Attempting to load a dump into a database that is smaller than the database that the dump came from will result in a SQL Server 3105 error message, stating,

Data on dump will not fit into current database. Need xx Mbyte database.

where xx is the size of the original database. You can, however, load a dump into a target database that is larger than the original database.

Also consider whether the database will be loaded on the same server from which it was dumped, or if there is a potential for it to be loaded onto a different server. If the database will be dumped and loaded between servers, the code page and sort order of the servers must match. You cannot load a database on a SQL Server that is running with a code page or sort order different from the SQL Server from which the dump originated. With SQL Server for OS/2, the only way to move data between servers running different code pages or sort orders is by using the bcp utility. SQL Server for Windows NT version 4.21 provides an additional tool, the SQL Transfer Manager, which uses a graphical interface to allow you to easily transfer some or all of the data from one server to another. This is useful when the servers are using different sort orders or code pages, or when they are using different hardware architectures (for example, to move data from a SQL Server running on an Intel®-based computer to SQL Server for Windows NT running on a RISC-based computer). If your environment has multiple SQL Servers, it is recommended that they all be configured with the same code page and sort order.

If a database needs to be recreated and loaded from a previous dump, it is important that the database be recreated in exactly the same way that it was created. That is, the device usage, log space allocation, and segment definitions in the new database must be identical to the way they were when the database was dumped. This information is kept in the sysusages table in the master database, so you should always dump the master database whenever a change to a database is made that records anything in master. In addition, if you save the scripts you used to create the initial database, recreating it will simply be a matter of rerunning those scripts. If a dump is loaded into a database that has data, log, or segment mappings different from the original database, DBCC CHECKALLOC may report SQL Server error 2558,

Extent not within segment: Object <object id>, indid <index id> includes extents on allocation page <page number> which is not in segment <segment number>.

This error can indicate that there are data pages allocated in portions of the database intended for log pages, or log pages allocated in portions intended for data. In the first case (data pages in log space), the effect is that less space is available for the transaction log to grow. The second case (log pages in data space) is less serious, because the data space being used by log pages will eventually be freed when the log is truncated with the DUMP TRANSACTION statement.
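Putting this together, a correct recreate-and-load sequence might look like the following sketch; the database name, devices, and sizes are hypothetical and must exactly match the layout of the original database:

-- Recreate the database with the same device usage, sizes, and segment
-- definitions as the original; FOR LOAD skips page initialization because
-- a load will immediately follow
CREATE DATABASE sales
    ON DATA1 = 100
    LOG ON LOG1 = 25
    FOR LOAD

-- Load the most recent database dump
LOAD DATABASE sales FROM DB_DUMP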

Transaction Log Dumping and Loading

Creating and Sizing Transaction Logs

SQL Server's ability to perform online incremental backups (transaction log dumps) provides a very effective mechanism for up-to-the-minute recovery of data. If your environment will be implementing both database and transaction log dumps as a means of backup, it is important to have a well-planned procedure in place and tested prior to going into production. This section contains some tips and guidelines for managing transaction logs.

In order for a transaction log to be dumped to a physical dump device, the log must reside on its own device. This can be accomplished with the LOG ON clause of the CREATE DATABASE statement when the database is initially created. Alternatively, an existing database can be extended onto a new device with the ALTER DATABASE statement, and the sp_logdevice system procedure can then be executed to move the log to that device. Ideally, every database should have its log on a separate device. Exceptions are small databases that are either dumped often enough to negate the need for transaction log dumps, or "test" databases whose contents are not needed or can be easily rebuilt, such as the pubs database. The other exception is the master database, which must have the data and log portions on the MASTER device.
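For example (device names and sizes hypothetical), either of the following sketches places the log on its own device; the 25-MB log on a 100-MB database also illustrates the sizing guideline discussed next:

-- Preferred: place the log on its own device at creation time
CREATE DATABASE sales
    ON DATA1 = 100
    LOG ON LOG1 = 25

-- Alternative for an existing database: extend the database onto a
-- new device, then move the log there
ALTER DATABASE sales ON LOG1 = 25
EXEC sp_logdevice 'sales', 'LOG1'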

The first step in managing transaction logs is to estimate how large to make the log. The size will depend on the volume of transactions in the database and the frequency with which the log is dumped. After a transaction log is dumped to a physical dump device, the inactive portion of the log (the portion from the beginning of the log up to the first uncommitted transaction) is automatically truncated to free space for reuse. As a general rule, a log that is approximately 10–25 percent of the size of the database is a good starting point; depending on your environment, you may require a log that is significantly larger or smaller. It is best to simulate the volume and types of transactions that will be expected when the system is in production. If the log fills, or comes close to filling, before the scheduled transaction log dump occurs, either enlarge the log or decrease the time between transaction log dumps.

Frequency of Transaction Log Dumps

When deciding how frequently to dump the transaction log between database dumps, there are a couple of important points to consider. In the following example, assume a production application dumps its database at midnight every Friday, and transaction log dumps are performed every hour from 8 a.m. to 5 p.m. Monday through Friday (10 log dumps per day). If a failure occurs at 4:30 p.m. on Friday, resulting in the need to recover from backups, the previous Friday's database dump would need to be loaded, followed by 49 transaction log loads.

Fri 12 a.m.          database dump
Mon 8 a.m.-5 p.m.    10 log dumps
Tue 8 a.m.-5 p.m.    10 log dumps
Wed 8 a.m.-5 p.m.    10 log dumps
Thu 8 a.m.-5 p.m.    10 log dumps
Fri 8 a.m.-4 p.m.    9 log dumps
Fri 4:30 p.m.        Failure

Although online log dumps impose a smaller performance hit on SQL Server than database dumps do, loading that many transaction logs can take a significant amount of time, resulting in a longer period of downtime. A better approach is to do nightly database dumps and still do hourly transaction log dumps during the day. Assuming the same scenario of a failure at 4:30 p.m. Friday, only Thursday night's database dump would need to be loaded, followed by the 9 hourly transaction log dumps from 8 a.m. through 4 p.m. on Friday.
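Under the nightly-dump scheme, the Friday recovery reduces to the following sketch (database and dump device names hypothetical); the logs must be loaded oldest first:

-- Load Thursday night's database dump
LOAD DATABASE sales FROM DB_DUMP

-- Load Friday's hourly log dumps in the order they were made
LOAD TRANSACTION sales FROM LOG_DUMP_0800
LOAD TRANSACTION sales FROM LOG_DUMP_0900
-- ...continuing hour by hour through...
LOAD TRANSACTION sales FROM LOG_DUMP_1600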

Also consider that the larger the number of log dumps, the bigger the risk of something happening to one of them. If any one transaction log dump is misplaced or deleted, or if the transaction log is truncated, no further transaction logs past that point can be loaded. Think of individual transaction logs as the pairs of teeth in a zipper: if one set of teeth is missing, you can zip the zipper only up to that point. Likewise, you cannot "skip over" a missing transaction log.

Transaction Log Sequencing

SQL Server keeps track of the order of transaction log dumps by means of what can be thought of as a "sequence number." The sequence number is incremented every time a DUMP TRANSACTION statement is issued, regardless of whether the log is dumped to a physical dump device or truncated using the WITH TRUNCATE_ONLY or WITH NO_LOG clause. When you restore transaction logs, they must be loaded in the same order in which they were dumped, and no sequence number can be skipped.

In the following example, assume again that a database is dumped at midnight each night, and the transaction log is dumped to a separate disk device every hour from 8 a.m. to 5 p.m. The 8 a.m. dump would have a sequence number of 1, the 9 a.m. dump would have a sequence number of 2, and so on. Suppose that the transaction log completely fills up at 9:30 a.m., such that it must be dumped (truncated) using the WITH NO_LOG option, and the remaining hourly dumps continue as scheduled. If the hard disk fails at 3 p.m. that day, the previous night's database dump could be loaded, followed by the 8 a.m. (sequence number 1) and 9 a.m. (sequence number 2) transaction log dumps. However, attempting to load the 10 a.m. log dump would result in SQL Server error 4305,

Specified file 'dumpfile' is out of sequence. Current time stamp is '<some date and time>' while dump was from '<some other date and time>'.

12 a.m.       database dump
8 a.m.        log dump (sequence #1)
9 a.m.        log dump (sequence #2)
9:30 a.m.     LOG FULL! log truncated with NO_LOG (sequence #3)
10 a.m.       log dump (sequence #4)

Error 4305 would be encountered because the 9:30 a.m. truncation of the log would have been given a sequence number of 3, and the 10 a.m. dump has a sequence number of 4. Since the truncation of the log does not produce a physical dump file, it cannot be loaded. The net effect is an attempt to go from sequence number 2 to sequence number 4, which (as stated earlier) is prohibited because no sequence number can be skipped; therefore, the transactions that took place after 9 a.m. cannot be recovered. To avoid this situation, it is imperative that a full database dump be issued immediately after the transaction log is truncated, so that the sequence number in the database is reset.
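In other words, the 9:30 a.m. emergency should have been handled with the following pair of statements (database and device names hypothetical); the database dump resets the sequence so that the 10 a.m. and later log dumps remain loadable:

-- The log is completely full, so truncate it without dumping it
DUMP TRANSACTION sales WITH NO_LOG

-- Immediately dump the database to establish a new recovery baseline
DUMP DATABASE sales TO DB_DUMP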

Effect of "trunc. log on chkpt." on Transaction Logs

Two database options have an important effect on transaction logs: trunc. log on chkpt. and select into/bulkcopy. The trunc. log on chkpt. option automatically performs a

DUMP TRANSACTION <database> WITH TRUNCATE_ONLY

statement every time the SQL Server checkpoint handler wakes up, about once every minute. Setting this option for a database is useful primarily in a development environment when backups of the transaction log are not wanted. However, if this option is set, you can rely only on recovering from your database dumps, not from your log dumps. When this option is set, SQL Server prohibits a transaction log dump to a physical dump device—attempting to do so will yield SQL Server error 4208,

DUMP TRANsaction is not allowed while the trunc. log on chkpt. option is enabled: use DUMP DATABASE, or disable the option with sp_dboption.

It is possible to turn the option off, so subsequent transaction log dumps to physical dump devices will be allowed. However, those dumps will not be recoverable, because the sequence number for the database will already have been incremented once for each time the checkpoint handler woke up and truncated the log.

The following example illustrates the effects of the trunc. log on chkpt. option. Assume that a database was dumped at midnight, the transaction log was dumped to a physical device at 8 a.m., and at 8:15 a.m. the trunc. log on chkpt. option was set for the database. At 8:45 a.m., the option was turned off, and at 9 a.m. the log was successfully dumped to another dump device. Between 8:15 and 8:45, there may have been as many as 30 automatic truncations of the transaction log (one truncation for each time the checkpoint handler woke up). If this database later needed to be restored, the database dump could be loaded, followed by the 8 a.m. transaction log dump. However, attempting to load the 9 a.m. log dump would result in SQL Server error 4305, because the log had been truncated several times between 8:15 a.m. and 8:45 a.m. while the trunc. log on chkpt. option was set. Therefore, if you need to rely on your transaction log dumps for recovery, you must dump the database after turning off the trunc. log on chkpt. option, or future log dumps will not be recoverable.

12 a.m.       database dump
8 a.m.        log dump
8:15 a.m.     trunc. log on chkpt. turned on
              <approx. 30 automatic truncations of the log>
8:45 a.m.     trunc. log on chkpt. turned off
9:00 a.m.     log dump
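In this scenario, the safe order of operations at 8:45 a.m. would have been the following sketch (database name hypothetical); in versions of this era, a CHECKPOINT in the database makes the option change take effect:

USE master
EXEC sp_dboption 'sales', 'trunc. log on chkpt.', 'false'

USE sales
CHECKPOINT

-- Establish a new recovery baseline before resuming log dumps
DUMP DATABASE sales TO DB_DUMP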

For databases that do not have the log on a separate device, there is no harm in having the trunc. log on chkpt. option set. If the log is located on the same device as the data, SQL Server disallows dumping the transaction log to a physical device (only DUMP DATABASE is allowed for backing up such a database), so sequencing of the transaction log is not an issue.

Effect of "select into/bulkcopy" on Transaction Logs

The select into/bulkcopy option must be set in a database when you want to perform nonlogged operations such as SELECT INTO or fast bulk copy (bcp). The relationship between this option and the dumping of transaction logs differs between SQL Server versions 1.x and 4.2. In SQL Server version 1.x, attempting to dump a transaction log to a physical dump device while this option is set will result in SQL Server error 4207,

DUMP TRANsaction is not allowed while the select into/bulk copy option is enabled: use DUMP DATABASE, or disable the option with sp_dboption.

In version 1.x, it is possible to set the option, perform nonlogged operations, turn the option off, and do successive transaction log dumps. These transaction logs can later be loaded, but none of the nonlogged operations can be recovered. SQL Server version 4.2 allows transaction logs to be dumped while this option is turned on, provided that no nonlogged activity has taken place since the last database dump.

To illustrate the effect of nonlogged operations, suppose that you dump a version 1.x database, set the select into/bulkcopy option, and issue a nonlogged statement such as

SELECT * INTO newtable FROM oldtable

You then turn the option off and dump the transaction log to a physical dump device. If the database dump and transaction log dumps are then reloaded, the database will contain the newtable table, but it will have zero rows, because the creation of the table is logged while the insertion of the data rows is not.

SQL Server version 4.2 detects if any nonlogged operations have been performed in a database. If an attempt is made to dump the transaction log to a physical dump device and it detects that nonlogged operations have been performed, it raises a slightly revised version of the 4207 error, stating,

DUMP TRANsaction is not allowed while the select into/bulk copy option is enabled or if a nonlogged operation has occurred: use DUMP DATABASE, or disable the option with sp_dboption.

As the error message states, it is necessary to turn off the select into/bulkcopy option and dump the database before transaction log dumps can be issued.
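Per the message, the required sequence is a sketch like the following (database name hypothetical):

-- Disable the option once the nonlogged work is finished
-- (as with any sp_dboption change, run CHECKPOINT in the database
-- for it to take effect)
USE master
EXEC sp_dboption 'sales', 'select into/bulkcopy', 'false'

-- Re-establish a recovery baseline; log dumps are then allowed again
DUMP DATABASE sales TO DB_DUMP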

Dumping a Full Transaction Log

It is possible for a database's transaction log to become so full that there is no room left to log new transactions. When this occurs, SQL Server will report error 1105,

Can't allocate space for object 'syslogs' in database <database name> because the 'logsegment' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.

In most cases, the log can be dumped to a physical dump device, which will free up space in the log. When the transaction log is dumped, SQL Server first writes a checkpoint record in the log in order to get a snapshot of which transactions are currently open. It then writes out the pages of the log and removes the inactive portion.

If the log is so full that there is not even enough room left to write the checkpoint record, the only option available is to issue a DUMP TRANSACTION database_name WITH NO_LOG statement. This statement truncates the inactive portion of the log without writing the initial checkpoint record and without logging any of the truncation operation. Because the log is not written to a physical dump device, it is not recoverable. It is therefore imperative that the database be dumped immediately so that future transaction log dumps will be recoverable. If you encounter this situation regularly, you should increase the size of the log or increase the frequency of your log dumps.

SQL Server for Windows NT includes enhancements that can help prevent a transaction log from ever filling completely. Through its close integration with the Windows NT Performance Monitor, the percentage of log space used for each database can be continuously monitored. Using Performance Monitor alerts, the database administrator can have Performance Monitor automatically run a batch file to dump the transaction log when the percentage of log space used hits a predefined threshold. Combined with the automated scheduled backup facilities of SQL Administrator, this approach greatly simplifies and automates the database administrator's task of managing backups.
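The batch file itself can be as simple as a single isql command; the server name, login, password, database, and dump device below are all hypothetical, and in practice the password would not be stored in plain text:

REM DUMPLOG.CMD - run automatically by a Performance Monitor alert
isql -S PRIMARY -U sa -P password -Q "DUMP TRANSACTION sales TO LOG_DUMP"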

Loading Transaction Logs

If a situation arises that requires you to recover from database and transaction log dumps, the load procedure needs to be completed in one continuous sequence. That is, the database should be loaded first, followed by each of the transaction logs, with no activity taking place in the target database between loads. (Users can, however, be actively performing updates in other databases.) If any updates take place in the target database before all the logs have been loaded, SQL Server will raise error 4306 when the next log load is attempted,

There was activity on database since last load, unable to load. Must restart load sequence with the load database to continue.

As the error message indicates, you must start back at the beginning by loading the database dump, followed by all the log dumps. For this reason, it is often advisable to start SQL Server in single-user mode when loading a database. In single-user mode, only one user at a time is allowed access to SQL Server, and this should be the person loading the database. Alternatively, SQL Server could be started normally, but sp_dboption could be used to turn on the "single user" option for the database being loaded. This allows users to freely access other databases on the server, but allows only one person to access the database being loaded (this should be the person performing the load).
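The per-database approach might look like the following sketch (database name hypothetical):

-- Restrict the database to one user (the person performing the load)
USE master
EXEC sp_dboption 'sales', 'single user', 'true'

LOAD DATABASE sales FROM DB_DUMP
LOAD TRANSACTION sales FROM LOG_DUMP

-- Reopen the database once the load sequence is complete
EXEC sp_dboption 'sales', 'single user', 'false'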

Note   Shutting down and restarting SQL Server during a load sequence can lead to error 4306, because the SHUTDOWN statement puts a checkpoint record in each database, and each database is checkpointed when SQL Server is restarted.

Managing a Backup Server

Purpose of a Backup Server

If a hard disk drive or other failure results in the computer being unusable for a time, it can be a relief to know that you have reliable backups of your data. However, making the necessary repairs and reloading the dumps of the database and logs can take considerable time. A very cost-effective alternative to this is to maintain a "warm" backup server at all times.

SQL Server includes features that make it very easy to have another server "standing by" in case something should happen to the primary production server. Assume that you have two servers: Primary is the primary production SQL Server, and Standby is another server on the network, also running SQL Server, with devices, databases, logins, and user accounts set up identically to those on Primary. Each time a database is dumped from Primary, you immediately load it onto Standby. Likewise, you immediately load each transaction log dump from Primary onto Standby. (As a further precaution, it is still recommended that these dumps also be copied off to tape or some other medium and stored.) Should a problem arise on Primary that will require significant downtime, Primary can be taken offline and Standby's server name changed to Primary.

Using this approach, the only downtime will be the few minutes necessary to take Primary offline and change Standby's name. No user applications will need to change, and the data on the server will be current through the last transaction log dump made from the old Primary server.

Setting Up a Backup Server

To set up a backup server, a couple of changes need to be made to the databases on Standby. After initially creating the databases on Standby, the two database options read only and no chkpt on recovery need to be set for each database that will be maintained. As stated earlier, when loading a database and transaction logs, no update activity can take place in the database until all logs have been loaded. These two options help to reduce the risk of any activity occurring between log loads. When SQL Server is started, a checkpoint record is normally written in each database after it has been recovered. Setting the no chkpt on recovery option prevents SQL Server from writing this checkpoint, so that if the Standby server goes down and is restarted, the databases will still show that no activity has taken place, and transaction logs from the Primary server can continue to be loaded.

If SQL Server is manually stopped by using the SHUTDOWN statement, the shutdown procedure waits for all active processes to complete and checkpoints each database before stopping. The writing of this checkpoint prevents any further transaction log loads, because there would have been an update made to the database. Setting the read only option not only prevents users from performing any write activity in a database, it also causes SQL Server to bypass writing a checkpoint record in the database when a SHUTDOWN statement is issued. When these two options are set for the databases on Standby, database and transaction logs should be able to continue loading regardless of how the Standby server goes down. Note that if the Standby server is needed to replace Primary, you must turn off the no chkpt on recovery and read only options before users can resume updating.
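A sketch of the option settings on Standby, and their reversal at failover time (database name hypothetical):

-- On Standby, after initially creating each database:
USE master
EXEC sp_dboption 'sales', 'read only', 'true'
EXEC sp_dboption 'sales', 'no chkpt on recovery', 'true'

-- At failover, before users resume updating on the new production server:
EXEC sp_dboption 'sales', 'read only', 'false'
EXEC sp_dboption 'sales', 'no chkpt on recovery', 'false'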

Summary

Microsoft SQL Server offers unparalleled performance and integrity for a PC-based relational database management system, with a robust set of data integrity features, including device mirroring, online database and transaction log backups, and automatic roll forward/rollback recovery. SQL Server is an ideal platform for large production applications, whether they be high-volume transaction-processing systems or decision-support environments. As with any software application, it is expected that all proper precautions be taken to avoid loss of data, whether due to hardware, software, or user error. Before using any SQL Server application for production, you should have a well-tested, solid backup strategy in place so that in the event of any type of failure, production can be restored with minimal downtime and little or no loss of data.

Additional Information

To receive more information about Microsoft SQL Server or to have other technical notes faxed to you, call Microsoft Developer Services FAX Request at (206) 635-2222.

© 1994 Microsoft Corporation. All rights reserved.

Information in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.

Companies, names, and data used in examples herein are fictitious unless otherwise noted.