Making the pages look attractive is a prime concern, but equally important is how we handle the data our application needs, while keeping the interface as responsive as possible. One of the surest ways to put people off browsing your Web site is to build it so that each page takes an age to load in their browser. Our public application needs to load quickly and, more importantly, respond quickly to user actions. Visitors will probably give the first page a chance to load, but they won't accept long delays each time they open another page.
The Intranet vs The Web
On top of this, we need to think about the effects on our server. The showroom applications run on a separate server in each showroom, accessed only by the sales staff and managers there. Access will also probably be over a high-bandwidth local network or Intranet, so it's easy to achieve good connection performance and to provide a server powerful enough for the known number of users.
As for the head office, we've implemented the communication between our application and the database located there using message queuing. Even if the head office server is under intense load all day from its local users, it will catch up during slack periods or overnight. Here again, scalability is not likely to be a major problem.
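As a rough illustration of why this works, the sketch below shows the receiving side of the pattern in Python: queued updates simply wait until the head office server has spare capacity and are then applied in batches. The in-process queue and the apply_update routine are stand-ins for real message-queuing infrastructure and real database code, so treat this as a sketch of the idea rather than the application's actual implementation.

```python
# Sketch of the head-office side of the queued-update pattern.
# A real deployment would use a durable message queue; the in-process
# queue and apply_update() below are illustrative stand-ins only.
import queue
import time

pending_updates = queue.Queue()

def apply_update(update):
    """Placeholder for writing one queued update into the head-office database."""
    print(f"applying update: {update}")

def drain_queue_during_slack(max_batch=100):
    """Process queued updates in batches, e.g. during slack periods or overnight."""
    processed = 0
    while processed < max_batch:
        try:
            update = pending_updates.get_nowait()
        except queue.Empty:
            break  # nothing waiting - the head office has caught up
        apply_update(update)
        pending_updates.task_done()
        processed += 1
    return processed

if __name__ == "__main__":
    # Showrooms enqueue updates during the day...
    for n in range(3):
        pending_updates.put({"order_id": n, "received": time.time()})
    # ...and the head office catches up when it has spare capacity.
    drain_queue_during_slack()
```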
When we go public on the Web, however, the situation is very different. We can't predict the number of users or the traffic patterns, and we probably can't afford to install gigabit-bandwidth lines and multiple servers just in case it gets busy at certain times of day. Instead, we need to consider how to build the application interface so that it places the minimum load on our server and our Internet connection.
Browsing vs Ordering
In the examples you've seen earlier in this book we used a variety of techniques to get data from the server to the client and back again. Unlike the showroom applications, where the prime purpose is to submit data, our public Web application will spend most of its time reading information from our server as users browse the car models, view details of each one, and examine the finance packages that are available. Only on limited occasions will an order be submitted to our server—although of course we'd like this to happen as often as possible!
So our design needs to take this factor into account. We need to think about how we can most effectively use the components we've got to provide information about our products to the user, and then process any orders that they place:
The diagram shows where the largest load on our server comes from, and this is what we need to plan for.
Server Load While Placing An Order
When it comes to placing the order, we can't do much to reduce the number of connections. We need to store the details of every order, or we'll create a lot of unhappy customers. However, placing an order involves updates to multiple tables, and possibly to more than one database, so we are dependent on the latency of the data sources and of the connections to them.
We can aim to reduce the duration of each connection by using the techniques you've seen already in this book—in particular message queuing, which allows the operation to complete successfully without having to wait for the remote data source to signal a successful update.
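To make the idea concrete, here is a minimal Python sketch of the sending side: the order handler makes its fast local write, drops a message on a queue for the head office, and returns straight away, so the client's connection stays open only as long as the local work takes. The queue object, the record_local_order routine, and the message format are illustrative assumptions rather than the components we actually use.

```python
# Sketch of the producer side: store the order locally and queue a message
# for the head office instead of updating the remote database directly.
import json
import queue
import uuid

outgoing_orders = queue.Queue()  # stand-in for a durable message queue

def record_local_order(order):
    """Placeholder for the fast local write we must make before acknowledging."""
    print(f"stored locally: {order['order_id']}")

def place_order(customer_id, car_model):
    """Accept an order quickly; the remote updates happen asynchronously."""
    order = {
        "order_id": str(uuid.uuid4()),
        "customer_id": customer_id,
        "car_model": car_model,
    }
    record_local_order(order)               # short, local, must succeed
    outgoing_orders.put(json.dumps(order))  # queued for the head office
    return order["order_id"]                # the connection can close now

if __name__ == "__main__":
    print(place_order(customer_id=42, car_model="example-model"))
```

Once the local write has succeeded, the queued message can be delivered whenever the link and the remote server are ready, which is exactly the behavior we relied on between the showrooms and the head office.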
Server Load While Browsing The Products
The duration of each individual connection when browsing will tend to be short, because the server just has to retrieve the data (a recordset, an HTML page, or a graphic file), send it to the client, and then disconnect. As long as any tasks our server has to run, such as stored procedures or custom components, operate as efficiently as possible, we can't do much else to reduce the duration of each connection. What we can do is try to reduce the number of connections.
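One simple way to cut the number of connections is to let the browser (or a proxy) cache the pages and graphics that rarely change, so that repeat views never reach our server at all. The Python sketch below marks every response from a small file server as cacheable for an hour; the one-hour lifetime and the idea of serving the catalog as static files are assumptions made purely for illustration.

```python
# Sketch: serve files with a Cache-Control header so browsers and proxies
# can re-use responses instead of asking the server again.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

CACHE_SECONDS = 3600  # product details change rarely, so an hour is assumed safe

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell clients they may keep this response for CACHE_SECONDS before
        # asking for it again - repeat visits cost us no connection at all.
        self.send_header("Cache-Control", f"public, max-age={CACHE_SECONDS}")
        super().end_headers()

if __name__ == "__main__":
    # Serves the current directory; browse http://localhost:8000/ to try it.
    ThreadingHTTPServer(("", 8000), CachingHandler).serve_forever()
```

The longer the lifetime we choose, the fewer requests reach our server, at the cost of visitors occasionally seeing slightly stale product details.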