Any NT driver can use a semaphore object to synchronize operations between its driver-created thread(s) and other driver routines. For example, a driver-dedicated thread might put itself into a wait state when there are no outstanding I/O requests for the driver, and the driver's Dispatch routines might set the semaphore to the Signaled state just after they queue an IRP.
The Dispatch routines of highest-level NT drivers, which are run in the context of the thread requesting an I/O operation, might use a semaphore to protect a resource shared among the Dispatch routines. Lower-level driver Dispatch routines for synchronous I/O operations also might use a semaphore to protect a resource shared among that subset of Dispatch routines or with a driver-created thread.
Any NT driver that uses a semaphore object must call KeInitializeSemaphore before it waits on or releases the semaphore. Figure 3.25 illustrates how a driver with a thread can use a semaphore object.
Figure 3.25 Waiting on a Semaphore Object
As Figure 3.25 shows, such a driver must provide the storage for the semaphore object, which should be resident. The driver can use the device extension of a driver-created device object (see Section 3.2), the controller extension if it uses a controller object (see Section 3.4), or nonpaged pool allocated by the driver.
When the DriverEntry or Reinitialize routine calls KeInitializeSemaphore, it must pass a pointer to the driver's resident storage for the semaphore object. In addition, the caller must specify a Count for the semaphore object, as shown in Figure 3.25, that determines its initial state (nonzero for Signaled).
The caller also must specify a Limit for the semaphore, which can be either of the following:
·Limit = 1
When such a semaphore is set to the Signaled state, a single thread waiting on the semaphore becomes eligible for execution and can access whatever resource is protected by the semaphore.
This type of semaphore is also called a binary semaphore because a thread either does or does not have exclusive access to the semaphore-protected resource.
·Limit > 1
When such a semaphore is set to the Signaled state, some number of threads waiting on the semaphore object become eligible for execution and can access whatever resource is protected by the semaphore.
This type of semaphore is called a counting semaphore because the routine that sets the semaphore to the Signaled state also specifies how many waiting threads can have their states changed from waiting to ready: either the Limit set when the semaphore was initialized or some number less than that preset Limit.
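The following is a minimal sketch of the initialization just described, assuming a hypothetical device extension that also holds the interlocked IRP queue used later in this section. The extension layout, field names, and routine name are illustrative only; only the calls to KeInitializeSpinLock, InitializeListHead, and KeInitializeSemaphore reflect the system-supplied interfaces.

#include <ntddk.h>

typedef struct _DEVICE_EXTENSION {
    KSEMAPHORE IrpQueueSemaphore;   // resident storage for the semaphore object
    LIST_ENTRY IrpQueue;            // interlocked queue of pending IRPs
    KSPIN_LOCK IrpQueueLock;        // spin lock protecting the queue
    // ... other device-specific state
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

// Called from DriverEntry (or a Reinitialize routine) after the device
// object has been created; DeviceExtension is nonpaged storage.
VOID InitSemaphoreExample(PDEVICE_EXTENSION DeviceExtension)
{
    KeInitializeSpinLock(&DeviceExtension->IrpQueueLock);
    InitializeListHead(&DeviceExtension->IrpQueue);

    // Count = 0: the semaphore starts in the Not-Signaled state because
    // no IRPs have been queued yet. Limit = MAXLONG makes this a counting
    // semaphore; Limit = 1 would make it a binary semaphore.
    KeInitializeSemaphore(&DeviceExtension->IrpQueueSemaphore,
                          0,          // Count (initial state)
                          MAXLONG);   // Limit
}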
Very few NT device or intermediate drivers have a single driver-created thread, let alone a set of threads that might wait on a semaphore. Few system-supplied drivers use semaphore objects, and, of those that do, even fewer use a binary semaphore. A binary semaphore might seem to have the same functionality as a mutex object, but it does not provide the built-in protection against deadlocks that a mutex object has for system threads running on SMP machines. For more information about mutex objects, see Section 3.9.4, next.
After a driver with an initialized semaphore is loaded, it can synchronize operations on the semaphore that protects a shared resource. For example, a driver with a device-dedicated thread that manages the queueing of IRPs, such as the system floppy controller driver, might synchronize IRP queueing on a semaphore, as shown in Figure 3.25:
1.The thread calls KeWaitForSingleObject with a pointer to the driver-supplied storage for the initialized semaphore object to put itself into a wait state.
2.IRPs begin to come in that require device I/O operations. The driver's Dispatch routines insert each such IRP into an interlocked queue (see Section 3.8.2) under spin-lock control and call KeReleaseSemaphore with a pointer to the semaphore object, a driver-determined priority boost for the thread (Increment, as shown in Figure 3.25), an Adjustment of one that is added to the semaphore's Count as each IRP is queued, and a Boolean Wait set to FALSE. A nonzero semaphore Count sets the semaphore object to the Signaled state, thereby changing the waiting thread's state to ready.
3.The Kernel dispatches the thread for execution as soon as a processor is available: that is, no other thread with a higher priority is currently in the ready state and there are no kernel-mode routines to be run at raised IRQL (greater than PASSIVE_LEVEL).
The thread removes an IRP from the interlocked queue under spin-lock control, passes it on to other driver routines for further processing, and calls KeWaitForSingleObject again. If the semaphore is still set to the Signaled state (that is, its Count remains nonzero, indicating that more IRPs are in the driver's interlocked queue), the Kernel again changes the thread's state from waiting to ready.
By using a counting semaphore in this manner, such a driver thread "knows" there is an IRP to be removed from the interlocked queue whenever that thread is run.
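The following sketch outlines both sides of this pattern, reusing the hypothetical DEVICE_EXTENSION from the earlier initialization example. The Dispatch-routine excerpt and thread loop are illustrative only; a real driver would also handle IRP cancellation, thread termination, and device-specific processing.

#include <ntddk.h>
// Assumes the DEVICE_EXTENSION declared in the earlier initialization sketch.

// Dispatch routine excerpt (step 2): queue the IRP and release the semaphore.
NTSTATUS DispatchReadWrite(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PDEVICE_EXTENSION devExt = (PDEVICE_EXTENSION)DeviceObject->DeviceExtension;

    IoMarkIrpPending(Irp);

    // Insert the IRP into the interlocked queue under spin-lock control.
    ExInterlockedInsertTailList(&devExt->IrpQueue,
                                &Irp->Tail.Overlay.ListEntry,
                                &devExt->IrpQueueLock);

    // Add one to the semaphore Count for the newly queued IRP. A nonzero
    // Count sets the semaphore to the Signaled state, making the waiting
    // thread ready. Wait is FALSE because this routine does not wait next.
    KeReleaseSemaphore(&devExt->IrpQueueSemaphore,
                       IO_NO_INCREMENT,   // driver-determined priority boost
                       1,                 // Adjustment added to the Count
                       FALSE);            // Wait

    return STATUS_PENDING;
}

// Device-dedicated thread (steps 1 and 3): wait, then dequeue and process.
VOID DeviceThread(PVOID Context)
{
    PDEVICE_EXTENSION devExt = (PDEVICE_EXTENSION)Context;
    PLIST_ENTRY entry;
    PIRP irp;

    for (;;) {
        // Step 1: wait until at least one IRP has been queued.
        KeWaitForSingleObject(&devExt->IrpQueueSemaphore,
                              Executive,
                              KernelMode,
                              FALSE,      // not alertable
                              NULL);      // no timeout (wait indefinitely)

        // Step 3: each satisfied wait corresponds to one queued IRP, so
        // the interlocked queue cannot be empty here.
        entry = ExInterlockedRemoveHeadList(&devExt->IrpQueue,
                                            &devExt->IrpQueueLock);
        irp = CONTAINING_RECORD(entry, IRP, Tail.Overlay.ListEntry);

        // ... pass the IRP on to other driver routines for processing ...
    }
}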
Calling KeReleaseSemaphore with the Wait parameter set to TRUE indicates the caller's intention to immediately call a KeWait..Object(s) support routine on return from KeReleaseSemaphore.
NT driver writers should consider the following guidelines for setting the Wait parameter to KeReleaseSemaphore:
·A pageable thread or pageable driver routine that runs at IRQL PASSIVE_LEVEL should never call KeReleaseSemaphore with the Wait parameter set to TRUE. Such a call causes a fatal page fault if the caller happens to be paged out between the calls to KeReleaseSemaphore and KeWait..Object(s).
·Any standard driver routine that runs at an IRQL greater than PASSIVE_LEVEL cannot wait for a nonzero interval on any dispatcher object(s) without bringing down the system (see Section 3.9). However, such a routine can call KeReleaseSemaphore while running at an IRQL less than or equal to DISPATCH_LEVEL.
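As a small illustration of the Wait parameter, the fragment below releases one semaphore and immediately waits on another with no intervening context switch. The SYNC_CONTEXT structure, its two semaphores, and the routine name are hypothetical; the routine must be nonpageable and running at IRQL PASSIVE_LEVEL for the subsequent indefinite wait to be legal.

#include <ntddk.h>

typedef struct _SYNC_CONTEXT {      // hypothetical pair of initialized semaphores
    KSEMAPHORE SemaphoreA;
    KSEMAPHORE SemaphoreB;
} SYNC_CONTEXT, *PSYNC_CONTEXT;

// Nonpageable code, called at IRQL PASSIVE_LEVEL.
VOID ReleaseThenWait(PSYNC_CONTEXT Sync)
{
    // Wait = TRUE indicates that a KeWait..Object(s) call follows
    // immediately, so the release and the wait occur without an
    // intervening context switch.
    KeReleaseSemaphore(&Sync->SemaphoreA, IO_NO_INCREMENT, 1, TRUE);

    KeWaitForSingleObject(&Sync->SemaphoreB,
                          Executive,
                          KernelMode,
                          FALSE,     // not alertable
                          NULL);     // no timeout (wait indefinitely)
}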
For a summary of the IRQLs at which standard NT driver routines run, see Chapter 16. For support-routine-specific IRQL requirements, see the Kernel-Mode Driver Reference.