Scheduled Tasks with NT Command Files

Paul Munkenbeck

The SQL Executive Task Scheduler is easy to use and administer, but it has some well-known limitations that make it less than perfect for production applications. In this article, Paul shows how to use structured NT command files to manage your task schedules professionally.

Apart from special-purpose replication tasks, there are only two types of tasks that you can schedule under SQL Executive -- namely, Transact-SQL (T-SQL) and NT commands. Both types have their strengths, and I use a mix of them in my production schedules. The T-SQL option is limited to 255 characters, and, under SQL Server 6.0, it can't contain GO commands, but this is still ideal for just invoking a stored procedure in a particular database.

For more complicated tasks, I use NT command files. For example, if you have a sequence of functions that need to run one after another, it's awkward to have to implement this as separate tasks in the Task Scheduler. There's no concept of specifying that one task is dependent upon the completion of another one. All you can do is estimate the elapsed time for the first task, calculate its expected finishing time, and then set the start time of the second task to be some time after that.

Having explored various ideas for working around this -- such as forcing a task to raise a log event that's picked up by an alert that invokes an on-demand task, monitoring sysprocesses to see when a task finishes, and even considering third-party NT schedulers -- I decided it was simpler to just design an NT command file that ran each function in sequence!

You can run T-SQL code from within an NT command file by using the isql program. The SQL code can either be a direct query, such as a single SELECT or execution of a stored procedure, or a separate T-SQL script that's read in by isql. Of course, in addition to T-SQL code, NT command files also let you run external programs like Visual Basic executables and the usual set of operating system commands. So NT command files can give you lots of flexibility, but in order to be manageable, you need to give them a generic structure that can be easily deployed across different servers and different applications.

Using isql in command files

Most of my command files include calls to isql to run T-SQL scripts. To do this, you need to provide the server name, database name, and a SQL Server login and password. If the SQL Executive service is set up to run under a user account (rather than the system account), as recommended during the SQL Server installation process, then its scheduled task command files will also run under this same account (so it's important that this account have sufficient security permissions on all the folders and files used in the scheduled command files).

If you run integrated or mixed security, and the SQL Executive account has an appropriate login and database user under SQL Server, then you should be able to use a trusted connection under isql, and you won't have to supply a user or password. Although I do use this to avoid hard-coding passwords in my command files, I usually code the "isqluser" value just to document which account is actually being used. Also, if you want to use the default database for the login, then there's no need to specify a database name. However, for the same reason as mentioned previously, I always code an explicit value.
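To make this concrete, here's a minimal sketch of the two connection styles, using the same placeholder names (MYSERVER, MYUSER, and so on) that appear in the examples later in this article:

REM Standard security: supply the login and password explicitly
isql /S MYSERVER /U MYUSER /P MYSECRET /d MYDATABASE /Q "MY_Stored_Procedure"
REM Integrated security: trusted connection, no password required
isql /S MYSERVER /E /d MYDATABASE /i MYSCRIPT.SQL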

Table 1 summarizes isql's most important command-line switches, drawn from SQL Server Books Online. For a full description of all the options, see the Transact-SQL 6.0 Reference Manual and/or the online "What's New in SQL Server 6.5." Note that the switches are case-sensitive.

Table 1. isql switches and their descriptions.

/U login_id -- The user login ID (case-sensitive).
/E -- Uses a trusted connection instead of requesting a password.
/p -- Prints performance statistics.
/n -- Removes numbering and the prompt symbol (>) from input lines.
/d dbname -- Issues a USE dbname statement when isql is started.
/Q "query" -- Executes a query and immediately exits isql. Use double quotation marks around the query and single quotation marks around anything embedded in the query.
/c cmdend -- Specifies the command terminator. By default, commands are terminated and sent to SQL Server by entering GO on a line by itself. When you reset the command terminator, don't use SQL reserved words or characters that have special meaning to the operating system, whether preceded by a backslash or not.
/w columnwidth -- Sets the screen width for output.
/m errorlevel -- Customizes the display of error messages. The message number, state, and error level are displayed for errors of the specified severity level or higher; nothing is displayed for errors of lower levels. Use -1 to specify that all headers are returned with messages, even informational messages. If using -1, there must be no space between the switch and the setting (/m-1, not /m -1).
/? -- Displays the syntax summary of isql switches.
/H wksta_name -- Specifies a workstation name, changing the value in the dynamic system table sysprocesses if the user logs in from a computer different from the usual login computer. If no workstation name is specified, the current computer name is assumed.
/P password -- Specifies the password. If the /P option isn't used, isql prompts for a password. If /P is used at the end of the command line without any password, isql uses the default password (NULL). Passwords are case-sensitive.
/S servername -- Specifies which SQL Server to connect to.
/i inputfile -- Identifies the file that contains a batch of SQL statements or stored procedures.
/b -- Specifies that isql exits and returns a DOS ERRORLEVEL value when an error occurs. The value returned is 1 when the SQL Server error message has a severity of 10 or greater; otherwise, it's 0. Command files can test ERRORLEVEL and handle the error appropriately.

Environment variables and replaceable parameters

If the connection parameters have to be hard-coded for every isql call, then your command files will become difficult to migrate to other applications or to other servers running the same application. It would be much more convenient if these standard isql parameters could be supplied as variables. Luckily, NT command syntax does provide such a method using environment variables and replaceable parameters. The isql program even supports some specific environment variables of its own.

An environment variable is created or updated by using the SET command -- for example, "SET sqlpath=C:\SCHEDULE\MYAPP". It's best to code the equal sign without any spaces around it. A variable is used in a command file by coding the variable name inside a pair of % characters -- for example, "CALL %sqlpath%\DAILY.CMD". These types of environment variables are global in that they stay in effect for the whole command shell session, or until they're cleared by SETting them to nothing (for instance, "SET sqlpath=").

Command files also accept command line parameters. These are called replaceable parameters and are referred to by their position on the command line. The first parameter is referred to as %1, the second %2, and so on, up to %9. If you need more than nine replaceable parameters in one file, look up the SHIFT command in your operating system reference manual. These variables are local in that they're destroyed once the command file finishes.

With these points in mind, take a look at the following command file structure:

REM Typical Command File for SQL Scheduled Task
REM Expects %1 as parameter 'weekday' or 'weekend'
SET isqlserver=MYSERVER
SET isqluser=MYUSER
SET isqlpassword=MYSECRET
SET database=MYDATABASE
REM Set working directory:
C:
CD\SCHEDULE\MYAPP
APROGRAM.EXE %isqlserver%,%database%
@REM Note next stored proc uses command line parm
isql /d %database% /Q "MY_Stored_Procedure @param='%1'"
IF NOT ERRORLEVEL 1 GOTO ISQL2
@ECHO MY_Stored_Procedure Failed - Aborting Task
GOTO END
:ISQL2
isql /d %database% /i MYSCRIPT.SQL
:END

As you can see from this example, all the variables used by isql can be coded once at the start of the command file. The "isqlserver", "isqluser", and "isqlpassword" (SQL 6.5 only) variables are specific to isql; they're recognized in place of the /S, /U, and /P switches. The %database% variable is one I've invented myself. If you're running SQL Server 6.0, then you can still define your own password variable and name it isqlpassword, but you'll have to include the isql option /P %isqlpassword%.
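To illustrate the difference, here's a sketch of the same call under each release (the variable values are the placeholders used throughout this article):

REM SQL Server 6.5: isql picks up isqlserver, isqluser, and
REM isqlpassword from the environment automatically
isql /d %database% /Q "MY_Stored_Procedure @param='%1'"
REM SQL Server 6.0: isqlpassword isn't recognized, so pass it explicitly
isql /P %isqlpassword% /d %database% /Q "MY_Stored_Procedure @param='%1'"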

In this example, the stored procedure MY_Stored_Procedure takes a parameter that controls the type of processing it carries out. Its value is either 'weekday' or 'weekend', and this is supplied by a replaceable variable from the command file's command line. This illustrates the fact that a variable can be used anywhere in the command file and will be evaluated before the command line is processed. Note also some basic error handling that halts the command file if this stored procedure fails for any reason.

The \SCHEDULE folder probably won't be in the system path, unless you've made the effort to add it in explicitly. So, when this command file is run from a scheduled task under SQL Executive, it must be specified with a fully qualified path, along with any command line parameters -- for example, "C:\SCHEDULE\MYAPP\DAILY.CMD weekend". (See Figure 1 for a screenshot of setting up a scheduled task to run a command file.) Note that for the same reason, the first thing the command file has to do is to set its own current working directory.

Error handling in command files

No production application is complete without some suitable error handling. Command files provide error management through the IF ERRORLEVEL construction. Most well-written external programs and operating system commands will supply a return code that's available for testing with IF ERRORLEVEL. isql takes a more eccentric approach.

With isql, there are three ways of setting ERRORLEVEL. The first method is to use the special EXIT(query) command, and it works only if you're using isql with the /i switch or the < redirection symbol to read an input file containing a T-SQL script. Think of EXIT(query) as mimicking the GO command: it executes the current batch, including the query expression inside the brackets, then quits isql and sets ERRORLEVEL based on the result of that final query expression. It takes the first column of the first result row and converts it to an integer -- for instance, EXIT(SELECT returncode = @@ERROR) sets the return code to the last error value. Unfortunately, since the output from the EXIT query appears in the isql output, this can look quite untidy. There are other versions of the EXIT statement, and you can read about them in your undoubtedly dog-eared Transact-SQL Reference Manual.
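As a sketch, the tail end of a script run with /i might look like the following (the UPDATE statement is purely hypothetical), with the command file then testing the value that the EXIT query passed back:

-- Last lines of MYSCRIPT.SQL
UPDATE MyTable SET Processed = 1 WHERE Processed = 0
EXIT(SELECT @@ERROR)

REM In the command file, ERRORLEVEL now holds the EXIT query's value
isql /d %database% /i MYSCRIPT.SQL
IF ERRORLEVEL 1 ECHO MYSCRIPT.SQL ended with a non-zero error code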

The second method is available only under SQL Server 6.5 and again works only with the /i switch or < redirection symbol. It uses the new /b option. When this is specified, isql will exit when an error occurs and set ERRORLEVEL according to the severity of the error. The value returned to the DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity of 10 or greater; otherwise, the value returned is 0.
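In the command file, the pattern looks like this (the same construction appears in the DAILY.CMD listing later in this article):

isql /b /d %database% /i MYSCRIPT.SQL
REM /b sets ERRORLEVEL to 1 for any error of severity 10 or greater
IF ERRORLEVEL 1 ECHO MYSCRIPT.SQL failed - aborting task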

The third method of setting ERRORLEVEL from isql is to use the RAISERROR statement with the state parameter set to the special value 127. The return code is set to the error message number, or 50000 if an ad hoc message string is used. So, RAISERROR(50001,10,127) generates an error message of severity 10 and sets ERRORLEVEL to 50001. If you're using isql with the /Q switch to execute a stored procedure, then this is the only practical way to get the ERRORLEVEL value set.
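Here's a sketch of how a stored procedure such as MY_Stored_Procedure might use this technique -- the parameter check is hypothetical, but the RAISERROR state of 127 is the part that matters:

CREATE PROCEDURE MY_Stored_Procedure @param varchar(10) AS
IF @param NOT IN ('weekday', 'weekend')
BEGIN
    -- State 127 makes isql quit and set ERRORLEVEL to the message
    -- number (50000 here, because this is an ad hoc message string)
    RAISERROR ('Invalid @param value', 16, 127)
    RETURN
END
-- Normal weekday/weekend processing would go here
PRINT 'MY_Stored_Procedure completed'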

Generalizing your scheduled task command files

Now, if you have only a few tasks in your schedule (lucky you!), then the preceding command file structure is probably generic enough. However, your applications might have several different schedules, such as daily, weekly, monthly, or even different processing on different days of the week. In this case, you'll find yourself repeating the environment variable settings in each one. Also, if you run the same application on more than one departmental server, then you'll have to customize the server settings in every one.

Figure 1. Setting up a scheduled task to run a command and log its output.

To resolve these issues, the next step is to separate out the SET statements into their own files. It helps if you design a standard folder structure that can be deployed on each server. I keep all my task schedule script files and command files in a hierarchy starting from a folder named something like C:\SCHEDULE. Here's a listing of a typical directory tree for the folder structure.

C:\SCHEDULE
|   SRVRSETS.CMD
|
\---MYAPP
        APPSETS.CMD
        APROGRAM.EXE
        DAILY.CMD
        DAILY.LOG
        MYSCRIPT.SQL

I like to keep my scheduled command files outside of the SQL Server path for two reasons: first, they're unaffected by any upgrade; and second, I can control security on the (now few) files that contain hard-coded passwords. All the files that make up this tree can be found in the accompanying Download file.

Finally, here are the three command files that control and run the daily scheduled task for an application. You'll see that I've taken the "typical command file" shown previously and added some extra customization.

REM This is C:\SCHEDULE\SRVRSETS.CMD
REM Server Environment variable settings
PROMPT $d$s$t$g
SET isqlserver=MYSERVER

REM This is C:\SCHEDULE\MYAPP\APPSETS.CMD
REM Application Environment variable settings
SET database=MYDATABASE
SET isqluser=MYUSER
SET isqlpassword=MYSECRET

REM This is C:\SCHEDULE\MYAPP\DAILY.CMD
REM Application Daily Schedule
REM as called from Task Scheduler
REM Expects %1 as parameter 'weekday' or 'weekend'
CALL C:\SCHEDULE\SRVRSETS.CMD
REM Set working directory:
C:
CD\SCHEDULE\MYAPP
CALL APPSETS.CMD
@REM Run a program on current server
APROGRAM.EXE %isqlserver%
@REM Note next stored proc uses command line parm:
isql /d %database% /Q "MY_Stored_Procedure @param='%1'"
@IF NOT ERRORLEVEL 1 GOTO ISQL2
@ECHO MY_Stored_Procedure Failed - Aborting Task
GOTO END
:ISQL2
isql /b /n /d %database% /i MYSCRIPT.SQL
@IF NOT ERRORLEVEL 1 GOTO END
@ECHO MYSCRIPT Failed - Aborting Task
GOTO END
:END

The scheduled task still calls command files from SQL Executive in the same way -- for example, "C:\SCHEDULE\MYAPP\DAILY.CMD weekend". However, the first function every command file carries out is to call a separate command file, SRVRSETS.CMD, which always lives in folder C:\SCHEDULE and sets the server-specific environment variables. Next, it navigates to the application's working directory and calls another command file, APPSETS.CMD, which lives in this same folder and sets the application-specific environment variables.

It's then ready to do real work -- running executable programs and T-SQL scripts -- using the values in the environment variables. Note that MY_Stored_Procedure uses RAISERROR with a state value of 127 to return an error code, whereas errors from MYSCRIPT.SQL are trapped with the isql /b option. The code for both of these can be found in the accompanying Download file.

Customizing your command files

This approach is very flexible and easy to extend or customize. For instance, for some of my tasks, I like to keep a log of the output from the command files. You can get each execution of isql to output its results to a separate file, but I prefer to see all the output logged in one place. To do this, you simply redirect the whole of the command file output into a text file when defining the scheduled task, as follows (also see Figure 1):

"C:\SCHEDULE\MYAPP\DAILY.CMD weekday > 

   C:\SCHEDULE\MYAPP\DAILY.LOG"

When doing this, it's useful to include a timestamp so that you can see when each step in the process was executed. I accomplish this by including the statement PROMPT $d$s$t$g, which you can see at the start of the server settings file shown previously. This changes the command prompt from the usual drive and path to the current date and time, separated by a space and followed by the > symbol. Try it and see! The log for a run of DAILY.CMD can be found as file DAILY.LOG in the accompanying Download file.
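For illustration, a fragment of the redirected log might then look something like this (the dates, times, and exact layout are invented for the example -- the prompt format depends on your locale settings):

Thu 18/02/1999 9:30:02.15>isql /d MYDATABASE /Q "MY_Stored_Procedure @param='weekday'"
Thu 18/02/1999 9:31:47.88>isql /b /n /d MYDATABASE /i MYSCRIPT.SQL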

You don't have to categorize your environment variables in the same way as the preceding examples. For instance, the structure assumes a different login for each application's schedule. This might be appropriate if you allow different development groups to set up their own schedules. If instead you want to use the same login (such as sa) for all schedules, then the isqluser and isqlpassword settings can be moved into the server-specific command file. Whichever method you choose, protect those files with visible passwords in them by setting security permissions on them to prevent unauthorized access -- but don't forget that the SQL Executive account has to be able to read and execute them!
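For example, a shared-login version of the server settings file might look like this sketch (the sa password shown is obviously just a placeholder):

REM This is C:\SCHEDULE\SRVRSETS.CMD
REM Server settings plus a shared login for all schedules
PROMPT $d$s$t$g
SET isqlserver=MYSERVER
SET isqluser=sa
SET isqlpassword=MYSECRET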

By making sure I implement the same scheduled task structure and command file hierarchy for each application and on each server, I ensure consistency and ease of support. However, the structure is simple enough to allow great flexibility for different applications to fit into the framework. 

Each new release of SQL Server provides more functionality in the Task Scheduler. [Indeed, SQL Server 7.0 is expected to offer some pretty exciting new functionality in the form of SQL Server Agent technology. We'll even be able to use SQL Server Agent to schedule multi-server tasks. -- Ed.] For example, prior to SQL Server 6.5, I implemented my database maintenance tasks as a set of command files running various DBCCs, UPDATE STATISTICS, reindexing, sp_recompile, and so on, in the correct sequence. It took me hours to set up the daily, weekly, and monthly schedules. Now, the Database Maintenance Plan Wizard (what a snappy title!) can generate scheduled tasks that do most of this with one line of T-SQL. Even so, I doubt if there will be a wizard that will completely replace all the flexibility of my command file structure. I hope you find the same. Happy scheduling!

Download sample code for this article here.

Paul Munkenbeck has been a DBA for more than 15 years. He has worked on all types of databases -- from desktop to multi-gigabyte mainframe. He was first exposed to Microsoft SQL Server as a database consultant on a project in early 1995. Having gained two years' experience of strategic SQL Server applications with a leading UK real estate company, Paul is now DBA for a services company that builds and runs a variety of database applications for its clients. MUNKY@compuserve.com, 101502.2537@compuserve.com.