A Simple Model of a Network Bottleneck

Figures 7.3 and 7.4 correspond to the third peak from the left of Figures 7.1 and 7.2. (The first two were for 512 and 1024 bytes, respectively, and this one is for 2048 bytes.) Clearly, if we increase the record size requested, throughput increases. Can we determine what the bottleneck is from this data? Let's give it a try. (If you thought Rule #7 about counter ratios wasn't important before this, just wait!)

Let's define our interaction as one read. The time for this read is one divided by System: File Read Operations/sec, or 0.005114 seconds. A simple model of this interaction would be:

    (time per read) = (client processor time) + (media time) + (server processor time) - (any overlap among these)

Assuming for the moment there is no overlap between media transmission and processor time, this reduces to just (client processor time) plus (media time) plus (server processor time).
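As a minimal sketch, this model is easy to replay in a few lines of Python; the counter values are the ones quoted above, and the variable names are ours, not Performance Monitor's.

```python
# No-overlap model of one file read, using the counter values quoted in the text.
measured_time_per_read = 0.005114                  # s; 1 / (File Read Operations/sec)
file_reads_per_sec = 1 / measured_time_per_read    # ~195.5 reads per second

def model_time_per_read(client_cpu_s, media_s, server_cpu_s):
    """With no overlap, a read's time is just the sum of its three parts."""
    return client_cpu_s + media_s + server_cpu_s

print(f"{file_reads_per_sec:.1f} reads/sec observed")
```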

The server processor time used in one second is just the Processor: % Processor Time expressed as a number between 0 and 1, or 0.07950 seconds. On the client this is 0.37678 seconds. Dividing each of these by the number of reads per second gives the server and client processor time per read as 0.0004065 seconds and 0.0019267 seconds, respectively.
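Continuing the sketch, each per-read processor time is just a division:

```python
# Per-read processor time = (fraction of each second spent on CPU) / (reads/sec).
file_reads_per_sec = 1 / 0.005114                    # ~195.5 reads per second

server_cpu_per_read = 0.07950 / file_reads_per_sec   # server % Processor Time / rate
client_cpu_per_read = 0.37678 / file_reads_per_sec   # client % Processor Time / rate

# These land within a microsecond of the text's 0.0004065 and 0.0019267;
# the small differences are rounding in the quoted counter values.
print(f"server {server_cpu_per_read:.7f} s, client {client_cpu_per_read:.7f} s")
```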

Each read transfers 2266.793 bytes (we get this by dividing Network Segment: Bytes Total/sec by the number of reads per second). The media (Ethernet in this case) transmits at 800 nanoseconds per byte, so we multiply that by the number of bytes per read and get 0.001813 seconds per interaction. Now, summing server processor time, client processor time, and media time according to our simple model, we get 0.004147 seconds.
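And the media time plus the model total, using the same figures:

```python
# Media time on 10-megabit Ethernet: 8 bits / 10 Mbit/s = 800 ns per byte.
bytes_per_read = 2266.793                              # wire bytes per read
media_per_read = bytes_per_read * 800e-9               # ~0.001813 s per read

# Model total: server CPU + client CPU + media time, no overlap assumed.
model_total = 0.0004065 + 0.0019267 + media_per_read   # ~0.004147 s per read
print(f"media {media_per_read:.6f} s, model total {model_total:.6f} s")
```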

This 0.004147 is 0.000967 seconds, or 967 microseconds, less than the 0.005114 seconds for each file read operation. We must conclude that our simple model is a bit too simple. It seems that we forgot the network adapter cards. Since these are identical on both client and server systems, we can assume each takes half of 967, or 483.5 microseconds, to process the packets for each record. By doing a similar computation on 512-byte records and fitting a line to the results using linear regression (for once we won't bore you with the details), we determine that the network adapters are taking 50 microseconds per packet and 216 nanoseconds for each byte in the file operation.
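For readers who do want a few of the details we skipped, the shape of the calculation can be sketched. With only two record sizes there are exactly two unknowns, so the "fit" is a direct solve; with more record sizes it becomes an ordinary least-squares regression. Note that the 512-byte observation below, and the assumption of two frames per 2048-byte read, are hypothetical placeholders (the text does not quote those counters), so the coefficients that come out, roughly 50 microseconds per packet and 169 nanoseconds per byte, will not exactly reproduce the 216 nanoseconds per byte obtained from the real data; the method is the point, not the numbers.

```python
# Split the unexplained time between the two identical adapter cards, then
# solve for per-packet and per-byte adapter costs from two observations.
residual = 0.005114 - 0.004147             # 0.000967 s unaccounted for per read
per_adapter = residual / 2                 # ~0.0004835 s per adapter per read

# (packets per read, bytes per read, per-adapter seconds per read).
p1, b1, t1 = 1, 733.0, 0.000174            # hypothetical 512-byte observation
p2, b2, t2 = 2, 2266.793, per_adapter      # 2048-byte point; two frames assumed
                                           # since 2048 exceeds the 1500-byte MTU

# Two observations, two unknowns: solve a*packets + b*bytes = t exactly.
det = p1 * b2 - p2 * b1
cost_per_packet = (t1 * b2 - t2 * b1) / det
cost_per_byte = (p1 * t2 - p2 * t1) / det
print(f"{cost_per_packet * 1e6:.1f} us/packet, {cost_per_byte * 1e9:.0f} ns/byte")
```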