March 1999
Make Your Windows 2000 Processes Play Nice Together With Job Kernel Objects
Windows 2000 offers a new job kernel object that lets you group processes together and create a sandbox that restricts what those processes are allowed to do. Using jobs that contain a single process lets you place restrictions on that process that you normally wouldn't be able to apply.
This article assumes you're familiar with C++ and Win32.
Figure 5 shows Spy++ with two MDI child windows open. Notice that the Threads 1 window contains a list of threads in the system. Only one of those threads, 000006AC SPYXX, seems to have created any windows. This is because I ran Spy++ in its own job and restricted its use of UI handles. In the same window, you can see the MSDEV and EXPLORER threads, but it appears that they have not created any windows. I assure you that these threads have definitely created windows, but Spy++ is unable to access them in any way. On the right-hand side, you see the Windows 3 window. In this window, Spy++ shows the hierarchy of all windows existing on the desktop. Notice that there is only one entry, 00000000, which Spy++ places there as a placeholder.

Note that this UI restriction is only one-way. That is, processes outside a job can see USER objects created by processes within a job. For example, if I were to run Notepad in a job and Spy++ outside of a job, Spy++ would be able to see Notepad's window even if Notepad's job specified the JOB_OBJECT_UILIMIT_HANDLES flag. Also, if Spy++ were in its own job, it would be able to see Notepad's window unless the Spy++ job had the JOB_OBJECT_UILIMIT_HANDLES flag specified.

Restricting UI handles is awesome if you want to create a really secure sandbox for your job's processes to play in. However, it is often useful for a process that is part of a job to communicate with a process outside the job. One easy way to accomplish this is to use window messages, but if the job's processes can't access UI handles, a process in the job can't send or post a window message to a window created by a process outside the job. Fortunately, there is a way to solve this problem using a new function:
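That function is UserHandleGrantAccess. Its declaration, per the Platform SDK headers (with parameter names here matching the discussion that follows), looks roughly like this:

```cpp
BOOL WINAPI UserHandleGrantAccess(
   HANDLE hUserObj,   // USER object, usually a window handle
   HANDLE hjob,       // job whose processes gain or lose access
   BOOL   fGrant);    // TRUE to grant access, FALSE to deny it
```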
The hUserObj parameter indicates a single USER object whose access you want to grant or deny to processes within the job. This will almost always be a window handle, but it can be another type of USER object such as a desktop, hook, icon, or menu. The last two parameters, hjob and fGrant, indicate which job you are granting or denying access to. Note that this function fails if it is called from a process within the job identified by hjob. This prevents a process within a job from granting itself access to an object.

The last type of restriction that can be placed on a job is related to security. A JOBOBJECT_SECURITY_LIMIT_INFORMATION structure looks like this:
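From winnt.h, the structure is declared approximately as follows:

```cpp
typedef struct _JOBOBJECT_SECURITY_LIMIT_INFORMATION {
   DWORD             SecurityLimitFlags;  // see Figure 6
   HANDLE            JobToken;            // access token for processes in the job
   PTOKEN_GROUPS     SidsToDisable;       // SIDs to mark as deny-only
   PTOKEN_PRIVILEGES PrivilegesToDelete;  // privileges to remove from the token
   PTOKEN_GROUPS     RestrictedSids;      // SIDs to add as restricting SIDs
} JOBOBJECT_SECURITY_LIMIT_INFORMATION;
```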
The table in Figure 6 describes the members.
Naturally, once you have placed restrictions on a job, you may want to query those restrictions. You can do so easily by calling QueryInformationJobObject:
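Its prototype, per the Platform SDK, looks like this:

```cpp
BOOL QueryInformationJobObject(
   HANDLE             hJob,                          // job to query
   JOBOBJECTINFOCLASS JobObjectInformationClass,     // which information you want
   LPVOID             lpJobObjectInformation,        // buffer to fill in
   DWORD              cbJobObjectInformationLength,  // size of that buffer
   LPDWORD            lpReturnLength);               // bytes written (may be NULL)
```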
Like SetInformationJobObject, you pass this function the handle of the job, an enumerated type indicating what restriction information you want, the address of the data structure to be initialized by the function, and the length of the data block containing that structure. The last parameter, lpReturnLength, points to a DWORD that is filled in by the function telling you how many bytes were placed in the buffer. You can (and usually will) pass NULL for this parameter if you don't care.
Placing a Process in a Job
OK, that's it for setting and querying restrictions. Now let's get back to my StartRestrictedProcess function. After I place some restrictions on the job, I spawn the process that I intend to place in the job by calling CreateProcess. However, notice that I use the CREATE_SUSPENDED flag when calling CreateProcess. This creates the new process, but doesn't allow it to execute any code. Since the StartRestrictedProcess function is being executed from a process that is not part of a job, the child process will also not be part of a job. If I allowed the child process to start executing code immediately, it would be running outside of my sandbox and could successfully do things that I want it to be restricted from doing. So after I create the child process and before I allow it to start running, I must explicitly place the process in my newly created job. I do that by calling AssignProcessToJobObject:
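A minimal sketch of the spawn-suspended, assign, then resume sequence just described (szCmdLine and hJob are assumed to be set up elsewhere) might look like this:

```cpp
// Prototype, per the Platform SDK:
// BOOL AssignProcessToJobObject(HANDLE hJob, HANDLE hProcess);

STARTUPINFO si = { sizeof(si) };
PROCESS_INFORMATION pi;

// Create the child suspended so it can't run outside the sandbox
if (CreateProcess(NULL, szCmdLine, NULL, NULL, FALSE,
      CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {

   // Place the new process in the job before it executes any code
   AssignProcessToJobObject(hJob, pi.hProcess);

   // Now let the process's primary thread run under the job's restrictions
   ResumeThread(pi.hThread);
   CloseHandle(pi.hThread);
}
```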
This function tells the system to treat the process (identified by hProcess) as part of an existing job (identified by hJob). Note that this function only allows a process that is not assigned to any job to be assigned to a job. Once a process is part of a job, it cannot be moved to another job or become jobless (so to speak).
Also note that when a process that is part of a job spawns another process, the new process automatically becomes part of the parent's job. However, there are two mechanisms that allow you to alter this behavior.

First, you can turn on the JOB_OBJECT_BREAKAWAY_OK flag in JOBOBJECT_BASIC_LIMIT_INFORMATION's LimitFlags member. This flag tells the system that a newly spawned process can execute outside the job. But to make this happen, CreateProcess must be called with the new CREATE_BREAKAWAY_FROM_JOB flag. If CreateProcess is called with the CREATE_BREAKAWAY_FROM_JOB flag, but the job does not have the JOB_OBJECT_BREAKAWAY_OK limit flag turned on, CreateProcess fails. This mechanism is useful if the newly spawned process also controls jobs.

Second, you can turn on the JOB_OBJECT_SILENT_BREAKAWAY_OK flag in the JOBOBJECT_BASIC_LIMIT_INFORMATION LimitFlags member. This flag also tells the system that newly spawned processes should not be part of the job. However, the difference is that there is no need to pass any additional flags to CreateProcess. In fact, this flag forces new processes not to be part of the job. This flag is useful for processes that were originally designed knowing nothing about job objects.

As for my StartRestrictedProcess function, after I call AssignProcessToJobObject my new process is part of my restricted job. I then call ResumeThread so that the process's thread can execute code under the job's restrictions. At this point, I also close the handle to the thread since I won't be using it.

Terminating All Processes in a Job
One of the most popular things you will want to do with a job is kill all of the processes within it. Earlier, I mentioned that Microsoft Developer Studio doesn't have an easy way to stop a build in progress because it would have to somehow know which processes were spawned from the first process that it spawned.
This is very tricky, and I explained how Developer Studio accomplishes this task in my June 1998 Win32 Q & A column. I suspect that in future versions of Developer Studio, Microsoft will use jobs because the code is a lot easier to write and there's much more that you can do with them. To kill all the processes within a job, you simply call TerminateJobObject:
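Per the Platform SDK, TerminateJobObject is declared like this:

```cpp
BOOL TerminateJobObject(
   HANDLE hJob,       // job whose processes should be killed
   UINT   uExitCode); // exit code given to each terminated process
```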
This function is similar to calling TerminateProcess for every process contained within the job, setting all their exit codes to uExitCode.
Querying Job Statistics
I've already discussed the QueryInformationJobObject function and how you can use it to get the current restrictions on a job. You can also use this function to get statistical information about a job. For example, to get basic accounting information you call QueryInformationJobObject, passing JobObjectBasicAccountingInformation for the second parameter and the address of a JOBOBJECT_BASIC_ACCOUNTING_INFORMATION structure, defined as follows:
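From winnt.h, the structure looks like this:

```cpp
typedef struct _JOBOBJECT_BASIC_ACCOUNTING_INFORMATION {
   LARGE_INTEGER TotalUserTime;
   LARGE_INTEGER TotalKernelTime;
   LARGE_INTEGER ThisPeriodTotalUserTime;
   LARGE_INTEGER ThisPeriodTotalKernelTime;
   DWORD         TotalPageFaultCount;
   DWORD         TotalProcesses;
   DWORD         ActiveProcesses;
   DWORD         TotalTerminatedProcesses;
} JOBOBJECT_BASIC_ACCOUNTING_INFORMATION;
```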
The members of this structure are described in Figure 7. In addition to querying this basic accounting information, you can make a single call to query both basic and I/O accounting information. To do this, you pass JobObjectBasicAndIoAccountingInformation for the second parameter and the address of a JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION structure, defined as follows:
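Again from winnt.h:

```cpp
typedef struct _JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION {
   JOBOBJECT_BASIC_ACCOUNTING_INFORMATION BasicInfo;
   IO_COUNTERS                            IoInfo;
} JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION;
```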
As you can see, this structure simply contains both JOBOBJECT_BASIC_ACCOUNTING_INFORMATION and IO_COUNTERS structures.
In addition to accounting information, you can also call QueryInformationJobObject at any time to get the set of process IDs for the processes currently running in the job. To do this, you must first make a guess as to how many processes you expect to see in the job, then allocate a block of memory large enough to hold an array of these process IDs plus the size of a JOBOBJECT_BASIC_PROCESS_ID_LIST structure:
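The structure, per winnt.h, is a classic variable-length Win32 structure:

```cpp
typedef struct _JOBOBJECT_BASIC_PROCESS_ID_LIST {
   DWORD     NumberOfAssignedProcesses; // total processes in the job (set on return)
   DWORD     NumberOfProcessIdsInList;  // number of IDs actually returned below
   ULONG_PTR ProcessIdList[1];          // variable-length array of process IDs
} JOBOBJECT_BASIC_PROCESS_ID_LIST, *PJOBOBJECT_BASIC_PROCESS_ID_LIST;
```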
So, to get the set of process IDs currently in a job, execute code similar to that shown in Figure 8.
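The code in Figure 8 follows this general pattern; here is a simplified sketch (the guess of 100 processes is arbitrary, and hJob is assumed to be a valid job handle):

```cpp
#define MAX_GUESS 100  // arbitrary guess at the number of processes in the job

// Size of the header plus room for MAX_GUESS IDs (one is already in the struct)
DWORD cb = sizeof(JOBOBJECT_BASIC_PROCESS_ID_LIST) +
   (MAX_GUESS - 1) * sizeof(ULONG_PTR);
PJOBOBJECT_BASIC_PROCESS_ID_LIST pjobpil =
   (PJOBOBJECT_BASIC_PROCESS_ID_LIST) HeapAlloc(GetProcessHeap(), 0, cb);

if (QueryInformationJobObject(hJob, JobObjectBasicProcessIdList,
      pjobpil, cb, NULL)) {
   for (DWORD i = 0; i < pjobpil->NumberOfProcessIdsInList; i++) {
      // pjobpil->ProcessIdList[i] identifies one process in the job
   }
}
HeapFree(GetProcessHeap(), 0, pjobpil);
```

If the job holds more processes than you guessed, NumberOfAssignedProcesses tells you how big a buffer to allocate for a second attempt.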
This is all of the information you get using these functions, but the operating system is actually keeping a lot more information about jobs. The additional information is kept in performance counters and can be retrieved using the functions in the Performance Data Helper function library (PDH.DLL). You can also use the Microsoft Management Console (MMC) System Monitor Control snap-in to view the job information. The dialog box in Figure 9 shows some of the counters available for job objects in the system. Figure 10 shows some of the Job Object Details counters available. You can also see that Jeff's Job has four processes in it: calc, cmd, notepad, and wordpad.
Figure 9 Job Objects in MMC
Note that performance counter information can only be obtained for jobs that were assigned names when CreateJobObject was called. For this reason, you may want to create job objects with names even though you do not intend to share these objects across process boundaries by name.
Figure 10 Job Details in MMC
Job Notifications
At this point, you certainly have the basics. There is only one thing left to cover about job objects: notifications. For example, wouldn't you like to know when all of the processes in the job terminate, or when all the allotted CPU time has expired? Or maybe you'd like to know when a new process is spawned within the job, or when a process in the job terminates. If you don't care about these notifications (and many applications won't), then working with jobs is truly as easy as I've already described. If you do care about these events, then there is a little more you have to do.
If all you care about is whether all the allotted CPU time has expired, there is an easy way to get this notification. A job object is nonsignaled while the processes within the job have not used up the allotted CPU time. Once all the allotted CPU time has been used, Windows forcibly kills all the processes in the job and signals the job object. You can easily trap this event by calling WaitForSingleObject (or a similar function). Incidentally, you can reset the job object to the nonsignaled state if you later call SetInformationJobObject, granting the job more CPU time.
When I first started working with jobs, it seemed that the job object should be signaled when there are no processes running within it. After all, process and thread objects are signaled when they stop running; a job should be signaled when it stops running. This way, you could easily determine when a job had run to completion. However, Microsoft chose to signal the job when the allotted time expires instead because that signals an error condition. Since many jobs will start off with one parent process that hangs around until all its children are finished, you can simply wait on the parent process's handle to know when the entire job is finished. My StartRestrictedProcess function shows you when the job's allotted time has expired or when the parent process in the job has terminated.
Well, I've just described how to get some simple notifications, but I haven't explained what you need to do to get more advanced notifications such as process creation/termination. If you want these additional notifications, you must put a lot more infrastructure into your application. In particular, you must create an I/O completion port kernel object and associate your job objects with the completion port. Then, you must have one or more threads that wait on the completion port for job notifications to arrive so that they can be processed.
The completion port is a very complex kernel object that has many cool uses, but it is far too involved to go into here. Instead, I urge you to see the Platform SDK documentation or Chapter 15, Device I/O, of my Advanced Windows book for a full explanation of I/O completion ports.
Once you've created the I/O completion port, you associate a job with it by calling SetInformationJobObject as follows:
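A sketch of that call, assuming hIOCP is the completion port you created and 1 is an arbitrary key you choose to identify this job:

```cpp
JOBOBJECT_ASSOCIATE_COMPLETION_PORT joacp;
joacp.CompletionKey  = (PVOID) 1;  // any value that identifies this job to you
joacp.CompletionPort = hIOCP;      // port that receives the job's notifications

SetInformationJobObject(hJob, JobObjectAssociateCompletionPortInformation,
   &joacp, sizeof(joacp));
```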
After this code executes, the system will monitor the job. As events occur, it will post events to the I/O completion port. (By the way, you can call QueryInformationJobObject to retrieve the completion key and completion port handle, but it is very unlikely that you would ever have to do this.)
Threads monitor an I/O completion port by calling GetQueuedCompletionStatus:
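Its prototype looks like this (parameter names here match the discussion that follows; note that 64-bit-ready SDK headers type the completion key as a ULONG_PTR):

```cpp
BOOL GetQueuedCompletionStatus(
   HANDLE        hIOCP,                 // completion port to wait on
   PDWORD        pNumBytesTransferred,  // for job events, the event code
   PULONG_PTR    pCompletionKey,        // key set via SetInformationJobObject
   LPOVERLAPPED *pOverlapped,           // for some job events, a process ID
   DWORD         dwMilliseconds);       // how long to wait
```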
When this function returns a job event notification, *pCompletionKey will contain the completion key value set when SetInformationJobObject was called to associate the job with the completion port. This lets you know which job had an event. The value in *pNumBytesTransferred indicates which event occurred (see Figure 11). Depending on the event, the value in *pOverlapped will indicate a process ID.
Just one last note about this: by default, a job object is configured so that when the job's allotted CPU time expires, all the job's processes are terminated automatically and the JOB_OBJECT_MSG_END_OF_JOB_TIME notification does not get posted. If you want to prevent the job object from killing the processes and instead just notify you that the time has been exceeded, you must execute code like this:
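A sketch of that code, using the JOBOBJECT_END_OF_JOB_TIME_INFORMATION structure from the Platform SDK:

```cpp
// Ask for a notification instead of automatic termination
JOBOBJECT_END_OF_JOB_TIME_INFORMATION joeojti;
joeojti.EndOfJobTimeAction = JOB_OBJECT_POST_AT_END_OF_JOB;

SetInformationJobObject(hJob, JobObjectEndOfJobTimeInformation,
   &joeojti, sizeof(joeojti));
```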
The only other value you can specify for an end-of-job-time action is JOB_OBJECT_TERMINATE_AT_END_OF_JOB, which is the default when jobs are created anyway.
Conclusion
Prior to Windows 2000, Microsoft did not offer nearly enough control over processes. While it has been a long time coming, the job object certainly addresses many of the issues that developers care about and have spent countless hours trying to get the operating system to handle. The job object comes with the bonus that you can now apply restrictions to a single process or to a set of processes all at once. If you find yourself requiring more control over a process's execution, check the latest job object documentation to see if Microsoft has added the abilities you need. My guess is that Microsoft will add many more capabilities to the job object as new versions of Windows appear.
For related information see Job Objects at http://msdn.microsoft.com/library/psdk/winbase/prothred_9joz.htm. Also check http://msdn.microsoft.com for daily updates on developer programs, resources, and events.
From the March 1999 issue of Microsoft Systems Journal