Think of an interaction as a unit of work on the system. This could be a user's interaction with an application, the reading of a file from a network server, or the sending of a piece of e-mail across the network. It is best if you define this interaction yourself, because you know what your computer is being used to accomplish. (Well, maybe you don't. But we certainly don't know.) If we know just a few things about this interaction, there are a lot of things we can say about the performance limitations of the system.
The first thing we want to know is the total time the interaction uses on each hardware device in the computer. Call this the demand for the device, and measure it in seconds.
If demand(processor) is the processor time used by the interaction, and demand(disk) is the disk time used by the interaction, we can invent a natural law called the Consistency Law that states:
util(processor) / util(disk) = demand(processor) / demand(disk)
where util(device) is the utilization of the device (either the disk or the processor in this case). Util(device) is a number from 0 to 1 which is generally expressed as a percentage from 0 to 100%. This tells us that the devices will be busy in relation to the demand for them. A consequence of this law is that a device need not be at maximum utilization for the system to be achieving its maximum throughput, measured in interactions per second.
If a device can achieve utilization of 1 (for reasons why a device may not be able to achieve utilization of 1, see the discussion of sequencing in Chapter 7), the maximum throughput for that device is:
max throughput(device) = 1 / demand(device)
Clearly, the device with the smallest max throughput in the system for this interaction will determine the maximum throughput the system can achieve. This device is the bottleneck. Notice that making any other device faster can never yield more throughput; it can only lower that device's utilization. This is why it is so important to discover the bottleneck in a system before signing the purchase order for new hardware!
For example, suppose that an interaction requires .3 seconds of processor time and .5 seconds of disk time, and no other device time. The processor can handle 3.3 interactions per second, while the disk can handle 2 interactions per second. So the overall system can handle only 2 interactions per second, at which point the disk will be saturated (utilization = 1). By the Consistency Law, the utilization of the processor at that point is .3/.5 = .6, or 60%. Pretty cool, huh?
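If you like to see the arithmetic spelled out, here is a minimal Python sketch of the same calculation. The function name and the demand figures are ours, purely for illustration; all it does is apply the formulas above to the example.

```python
# A minimal sketch of the bottleneck arithmetic; the demand figures
# and the function name are ours, just for illustration.

def bottleneck_analysis(demands):
    """demands maps device name -> seconds of device time per interaction."""
    # Maximum throughput of each device if it could reach utilization = 1.
    max_throughput = {dev: 1.0 / d for dev, d in demands.items()}

    # The bottleneck is the device with the smallest maximum throughput,
    # i.e. the device with the largest demand.
    bottleneck = min(max_throughput, key=max_throughput.get)
    system_throughput = max_throughput[bottleneck]

    # By the Throughput Law, util(device) = throughput * demand(device).
    utilizations = {dev: system_throughput * d for dev, d in demands.items()}
    return bottleneck, system_throughput, utilizations

demands = {"processor": 0.3, "disk": 0.5}   # seconds per interaction
print(bottleneck_analysis(demands))
# ('disk', 2.0, {'processor': 0.6, 'disk': 1.0})
```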
This gives rise to a general observation known as the Throughput Law, which says that for all devices, the overall throughput of the system is measured by the following:
throughput = util(device) / demand(device)
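To see how this works from a measurement, here is a tiny example with made-up numbers: if you observe the disk at 40% utilization and each interaction demands 0.5 seconds of disk time, the system is doing 0.8 interactions per second.

```python
# Hypothetical measurements, just to exercise the Throughput Law.
util_disk = 0.40        # observed disk utilization (0 to 1)
demand_disk = 0.5       # seconds of disk time per interaction

throughput = util_disk / demand_disk   # interactions per second
print(throughput)       # 0.8
```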
For certain devices, it is useful to define the demand for the device in terms of the number of times the device is used by the interaction, and the average amount of time the device is used on each visit, known in queuing theory as the service time of the device:
demand(device) = visits(device) * service(device)
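As a quick illustration with invented figures: if an interaction visits the disk 10 times and each visit takes an average of 0.02 seconds of service time, the demand and the resulting maximum throughput fall out directly.

```python
# Invented figures: an interaction that makes 10 disk visits,
# each taking 0.02 seconds of service time on average.
visits_disk = 10
service_disk = 0.02     # seconds per visit

demand_disk = visits_disk * service_disk   # 0.2 seconds per interaction
max_throughput_disk = 1.0 / demand_disk    # 5 interactions per second
print(demand_disk, max_throughput_disk)    # 0.2 5.0
```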
Windows NT Performance Monitor is based on these simple yet powerful principles. For each device, it counts and displays such basic elements as the utilization, visits, and service time. Sometimes it displays only some of these values, and you can easily compute the others; this happens in the cases where we must leave it to you to define what constitutes an interaction on your system.
But we also use a simple trick. Because we don't know what your interaction is, we define the default interaction on the system as whatever took place during the last second. With this definition of interaction, demand(device) expressed as a fraction of a second is the same numerically as util(device) expressed as a number from 0 to 1. So if you don't care to define your interaction too precisely, you can use our default definition and get meaningful results.
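To make the trick concrete with made-up numbers: suppose the processor was busy for 0.6 seconds out of the last second.

```python
# Made-up figures: the default interaction is "everything in the last second".
busy_time = 0.6          # seconds the processor was busy in the last second
interval = 1.0           # the default interaction spans one second

util_processor = busy_time / interval      # 0.6, i.e. 60% utilization
demand_processor = busy_time               # 0.6 seconds of demand
# The two numbers coincide, which is exactly the point of the trick.
```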
Soon, you will easily be able to toss these simple formulas around. Your friends will be amazed.