Results Section

The results section of the log reports data collected by the WCAT clients during the test. The section consists of a table summarizing the data collected. Each row in this table represents a different measure of client activity or server response. The number of columns in the table varies with the number of clients used in the test. The following is an example of a results table:

Results:
Data, Summary, Rate, 157.55.94.121, 157.55.94.121
Client Id, 0, 0.00, 1, 2
Duration, 180, 1.00, 180, 180
Pages Requested, 11602, 64.46, 5718, 5884
Pages Read, 11602, 64.46, 5718, 5884
Total Responses, 11602, 64.46, 5718, 5884
Avg Response Time, 155, 0.86, 157, 152
Min Response Time, 10, 0.06, 10, 10
Max Response Time, 1883, 10.46, 1883, 1873
StdDev Response Time, 194, 1.08, 200, 186
Total Connects, 11602, 64.46, 5718, 5884
Avg Connect Time, 40, 0.22, 40, 39
Min Connect Time, 0, 0.00, 0, 0
Max Connect Time, 390, 2.17, 390, 290
StdDev Connect Time, 20, 0.11, 22, 19
Connect Errors, 0, 0.00, 0, 0
Read Errors, 0, 0.00, 0, 0
Data Read, 186970624, 1038725.69, 94656000, 92314624
Header Bytes, 2344747, 13026.37, 1155657, 1189090
Total Bytes, 189315371, 1051752.06, 95811657, 93503714
Avg Header per Page, 202, 1.12, 202, 202
Avg Bytes per Page, 16317, 90.65, 16756, 15891

The following table lists and defines the columns of the results table.

Table 4 Results Table Columns

Column Description
Data The test criterion for which the data in the row was collected.
Summary The sum or average of the values for that test criterion across all client computers in the test. What the Summary value signifies depends on the row: if the test criterion is a total, the Summary value is the sum of the values for all client computers; if the test criterion is an average, the Summary value is the average of the values for all client computers.
Rate The average value for that test criterion per second of the test. The Rate value is calculated by dividing the Summary value by the total elapsed duration of the test, in seconds (see the example following this table). For the purposes of the Rate column, the duration includes the warmup and cooldown periods of the test.
ClientIP This column is named using the IP address identifying a particular client computer. If you are using multiple clients, the results table has multiple ClientIP columns.

Each ClientIP column represents one client computer. The ClientIP columns list the client computers in the order in which they connected to the controller. The values in each client column reflect the values for all virtual clients running on that client computer.
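
For example, the Rate value of 64.46 for Pages Requested in the sample table is the Summary value 11602 divided by the 180-second duration. The following minimal Python sketch reproduces two of the Rate values from the sample data; the function name is only illustrative.

    def rate(summary, duration_seconds):
        # Rate column: the Summary value averaged over the full test duration,
        # including the warmup and cooldown periods.
        return round(summary / duration_seconds, 2)

    duration = 180                      # Duration row of the sample table
    print(rate(11602, duration))        # Pages Requested rate -> 64.46
    print(rate(186970624, duration))    # Data Read rate -> 1038725.69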

Note

If the test included a large number of clients, each row contains multiple client columns of comma-separated values. If this is the case, use a spreadsheet or data processing program to organize the rows and columns into a table.
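
For example, the following Python sketch shows one way to load a saved results listing and write it back out as a .csv file that a spreadsheet program can open directly. The file names results.txt and results.csv are placeholders; the sketch assumes the results have been saved with one comma-separated row per line, as in the example above.

    import csv

    # Read the saved Results section: one comma-separated row per line.
    with open("results.txt", newline="") as f:
        rows = [[cell.strip() for cell in row] for row in csv.reader(f) if row]

    # The first row names the Data, Summary, Rate, and ClientIP columns.
    header, data = rows[0], rows[1:]
    for criterion, summary, rate, *client_values in data:
        print(criterion, summary, rate, client_values)

    # Write a .csv file that can be opened in a spreadsheet program.
    with open("results.csv", "w", newline="") as out:
        csv.writer(out).writerows(rows)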

The rows of the results table represent values for different test criteria recorded by the WCAT clients during the test. The following table lists and describes each test criterion by row.

Table 5 Results Table Rows

Test criterion (row) Description
Client ID WCAT identification numbers for the client computers. The Summary and Rate column values for the Client ID row are always zero.
Duration The total elapsed time of the test, in seconds. The duration includes the warmup and cooldown periods.
Pages Requested The total number of pages requested from the server by clients during the test. Each page can consist of one or more files.
Pages Read The total number of pages received from the server by clients during the test. Each page can consist of one or more files.

Total Responses The total number of instances in which the server responded to requests from clients. This value includes responses to requests for connections, as well as requests for pages.
Avg Response Time The average response time recorded by the clients. The response time is the time that elapses between sending a request and receiving the response, in milliseconds. In each ClientIP column, the value is the average of the response times recorded by all virtual clients running on that client computer.
Min Response Time The shortest response time recorded by a client computer during the test. In each ClientIP column, this value is the shortest response time recorded by any virtual client running on that client computer.
Max Response Time The longest response time recorded by a client computer during the test. In each ClientIP column, this value is the longest response time recorded by any virtual client running on that client computer.
StdDev Response Time The standard deviation of the response times recorded by the clients. The standard deviation is a statistical measure of how much values vary from the average value.
Total Connects The number of successful connections established between the virtual clients and the server during the test.
Avg Connect Time The average length of time for a connection between a virtual client and the server during the test. The connect time is the time that elapses between the moment a connection is established and the moment it is closed, in milliseconds, as observed by the client.
Min Connect Time The shortest connect time recorded by a client computer during the test. In each ClientIP column, this value is the shortest connect time recorded by any virtual client running on that client computer.
Max Connect Time The longest connect time recorded by a client computer during the test. In each ClientIP column, this value is the longest connect time recorded by any virtual client running on that client computer.
StdDev Connect Time The standard deviation of the connect times recorded by the clients. The standard deviation is a statistical measure of how much values vary from the average value.

Connect Errors The number of connection error messages the clients received during the test. The server closes a connection and sends a connection error message to the client if the connection request has formatting or protocol errors, if the server does not have the resources to support the connection, or if the maximum number of connections permitted for the HTTP (WWW) Service is exceeded.

If you have more than 10 connection errors in a test, rerun the test using the Performance Monitor counters option to investigate processor and memory use on your server, or reduce the number of virtual clients in the test. You can also use Internet Service Manager to increase the maximum number of connections permitted for the HTTP service.

Read Errors The number of read error messages the clients received during the test. The server closes a connection and sends a read error message to the client if the page the client requested was not found on the server.

You should not have any read errors in a WCAT test if you have correctly followed the installation procedures and are using the prepared tests. To eliminate read errors, make sure that the entries in the Uniform Resource Locator (URL) fields of the script input file correspond to files in the WCAT server's Home directory.

Data Read The amount of Web page content the virtual clients received from the server, in bytes. The Data Read value does not include the bytes associated with protocol headers. To determine the total number of bytes the clients received from the server, add the values of the Data Read and Header Bytes rows, as shown in the sketch following this table.
Header Bytes The number of bytes the virtual clients received from the server that constitute protocol headers for the data.
Total Bytes The total number of bytes the virtual clients received from the server; the sum of the Data Read and Header Bytes values.
Avg Header per Page The average size of the protocol headers received for each page, in bytes. The Avg Header per Page value is calculated by dividing Header Bytes by Pages Read.
Avg Bytes per Page The average number of bytes the virtual clients received per page, including bytes that constitute protocol headers. The Avg Bytes per Page value is calculated by dividing Total Bytes by Pages Read.
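
The byte-count rows in the example are internally consistent: Total Bytes is the sum of Data Read and Header Bytes, and the two per-page averages divide Header Bytes and Total Bytes, respectively, by Pages Read. The following minimal Python sketch uses the Summary values from the sample table; integer division is an assumption made only to match the whole-number values reported in the log.

    pages_read   = 11602
    data_read    = 186970624    # Web page content only, excluding protocol headers
    header_bytes = 2344747      # protocol headers only

    total_bytes         = data_read + header_bytes      # 189315371 (Total Bytes row)
    avg_header_per_page = header_bytes // pages_read    # 202 (Avg Header per Page row)
    avg_bytes_per_page  = total_bytes // pages_read     # 16317 (Avg Bytes per Page row)

    print(total_bytes, avg_header_per_page, avg_bytes_per_page)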