1.2.4  Multiprocessor-safe

In any Windows NT multiprocessor platform, the following conditions hold:

·All CPUs are identical, and either all have identical coprocessors or none has a coprocessor.

·All CPUs share memory and have uniform access to memory.

·In a symmetric platform, every CPU can access memory, take an interrupt, and access I/O control registers. In an asymmetric platform, one CPU takes all interrupts for a set of slave CPUs.

Windows NT is designed to run unchanged on uniprocessor and symmetric multiprocessor platforms, and NT drivers should be designed to do likewise.

To run safely on a symmetric multiprocessor platform, any operating system must solve this problem: how to guarantee that code executing on one processor does not simultaneously access and modify data that is being accessed and modified from another processor. For example, an NT device driver’s ISR that is handling a device interrupt on one processor must have exclusive access to critical, driver-defined data (or the device registers) in case its device raises another interrupt on a different processor while the ISR is still running.

Furthermore, NT drivers’ I/O operations that are serialized in a uniprocessor machine can be overlapped in a symmetric multiprocessor machine. That is, a given driver’s routine that processes incoming I/O requests can be executing on one processor while another of its routines that communicates with the device executes concurrently on another processor. Whether a driver is running on a uniprocessor or a multiprocessor machine, it must synchronize access to any driver-defined data or system-provided resources that are shared among its routines, and it must synchronize access to the physical device, if any.

The NT Kernel component exports a mechanism, called a spin lock, that is used to protect shared data (or device registers) from simultaneous access by two or more routines running concurrently on a symmetric multiprocessor platform. The Kernel enforces two policies regarding the use of spin locks:

·One and only one routine can hold a particular spin lock at any given moment, and only the holder of a spin lock can access the data it protects. Another routine must acquire the spin lock in order to access the same data, and the spin lock cannot be acquired until the current holder releases it.

·As it does for hardware and software interrupt vectors, the Kernel assigns each spin lock in the system an associated IRQL value. A kernel-mode routine can acquire a particular spin lock only when the routine runs at the spin lock’s IRQL.

These policies prevent deadlocks. Because the holder of a spin lock runs at that spin lock’s IRQL, an NT driver routine that usually runs at a lower IRQL cannot, while it holds the lock, be preempted by a higher-IRQL driver routine that would otherwise spin forever on the same processor trying to acquire the same spin lock.

The IRQL assigned to a spin lock is generally that of the highest-IRQL routine that can acquire it. For example, an NT device driver’s ISR frequently shares a storage area with the driver’s DPC routine, which calls a driver-supplied critical-section routine to access the shared area. In this case, the spin lock protecting the shared area has an IRQL equal to the DIRQL at which the device interrupts. While the critical-section routine holds the spin lock and accesses the shared area at DIRQL, the ISR cannot run in a uniprocessor machine because the device interrupt is masked off, as mentioned in Section 1.2.3. In a symmetric multiprocessor machine, the ISR can be run concurrently on another processor, but it still cannot acquire the spin lock, and therefore cannot touch the shared area, until the critical-section routine releases the lock.

Note that a set of kernel-mode threads can synchronize access to shared data or resources by waiting on one of the NT Kernel’s dispatcher objects: an event, mutex, semaphore, timer, or another thread. However, most NT drivers do not set up their own threads because they get better performance by avoiding context switches. Consequently, time-critical kernel-mode support routines and NT drivers must use the Kernel’s spin locks to synchronize access to shared data or resources whenever they run at IRQL DISPATCH_LEVEL or at DIRQL, because a wait on a dispatcher object cannot be satisfied at or above DISPATCH_LEVEL, the IRQL at which the thread dispatcher itself runs.

For more information about using spin locks and managing IRQLs, see Chapter 16. For more information about the Kernel’s dispatcher objects, see Chapter 3.