The controller parses the configuration, script, and distribution files and constructs the messages for the clients. It then creates a listen socket and waits on the Web Capacity Analysis Tool controller port. Clients connect to the socket one after another, and the controller begins sending messages only after all of the clients are connected. It sends the configuration message first, then the script header message, followed by a list of script page messages, one for each page. Clients start the performance run once they have received the complete set of messages.

Most tests include a warm-up and a cool-down period. The clients make no measurements during these periods; instead they attempt to bring the server to full load. These periods also mask the intervals in which network traffic flows between the controller and the clients. During the steady state between the warm-up and cool-down periods, the clients run for the specified duration of the test and take measurements. At the end of the cool-down period, each client summarizes the data from its individual threads and sends a statistics message back to the controller. After all of the clients have reported their statistics, the controller sums the data and writes both the individual and the summed results to a log file (.log).

Optionally, users can specify a list of performance counters to monitor on the server during the test. The controller samples the specified counters at 10-second intervals and writes the sampled values to a separate performance counter log file (.prf). After parsing the command-line arguments and input files, the controller starts the sampling thread, and it stops sampling once it has received responses from all clients. The controller also averages each counter over the period between the start of the clients and the arrival of the first statistics message, and reports the averaged value of each counter in the log file.
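The message exchange described above can be sketched as follows. This is a minimal illustration, not the real WCAT implementation: it assumes a hypothetical line-delimited JSON framing (the actual wire format is binary), and the message and field names are invented for the example. It shows the ordering that matters, though: the controller accepts every client before sending anything, sends configuration, then the script header, then one message per script page, and finally collects and sums one statistics message per client.

```python
import json
import socket
import threading

def controller(listen_sock, n_clients, pages):
    # Clients connect one after another; no messages are sent until all
    # of them are connected.
    files = []
    for _ in range(n_clients):
        conn, _ = listen_sock.accept()
        files.append(conn.makefile("rw"))
    # Configuration first, then the script header, then one script page
    # message per page.
    for f in files:
        f.write(json.dumps({"type": "configuration"}) + "\n")
        f.write(json.dumps({"type": "script_header", "pages": len(pages)}) + "\n")
        for page in pages:
            f.write(json.dumps({"type": "script_page", "url": page}) + "\n")
        f.flush()
    # Each client reports back one statistics message; sum the per-client data.
    total = 0
    for f in files:
        stats = json.loads(f.readline())
        total += stats["requests"]
        f.close()
    return total

def client(port):
    sock = socket.create_connection(("127.0.0.1", port))
    with sock.makefile("rw") as f:
        json.loads(f.readline())                       # configuration message
        header = json.loads(f.readline())              # script header message
        pages = [json.loads(f.readline()) for _ in range(header["pages"])]
        # ... warm-up, measured steady state, and cool-down would run here ...
        f.write(json.dumps({"type": "statistics",
                            "requests": 10 * len(pages)}) + "\n")
        f.flush()
    sock.close()

# Run three simulated clients against the controller on a loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]
clients = [threading.Thread(target=client, args=(port,)) for _ in range(3)]
for t in clients:
    t.start()
grand_total = controller(srv, 3, ["/", "/images/logo.gif"])
for t in clients:
    t.join()
srv.close()
```

In this toy run each client claims 10 requests per page, so three clients over two pages report 60 requests in total.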
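The counter-sampling loop can be sketched in the same spirit. The sketch below assumes a stand-in `read_counter` callable in place of a real performance-counter query API, and uses a short interval for demonstration; the real controller samples every 10 seconds and stops when client statistics arrive. The averaging at the end mirrors how the controller reports one averaged value per counter.

```python
import threading

def sample_counters(read_counter, names, stop, interval=10.0):
    """Poll each named counter at a fixed interval until told to stop,
    then return the average of the samples taken for each counter."""
    samples = {name: [] for name in names}
    while not stop.is_set():
        for name in names:
            samples[name].append(read_counter(name))
        stop.wait(interval)  # sleep between samples, but wake early on stop
    return {name: sum(vals) / len(vals) for name, vals in samples.items()}

# Demonstration with a fake counter source: the third read signals stop,
# which stands in for the arrival of the first client statistics message.
stop = threading.Event()
calls = []

def read_counter(name):
    calls.append(name)
    if len(calls) == 3:
        stop.set()
    return 60.0

averages = sample_counters(read_counter, ["cpu"], stop, interval=0.01)
```

Here exactly three samples of 60.0 are taken before the stop event fires, so the reported average for the `cpu` counter is 60.0.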