Developing a logical multi-tier architecture for our applications helps to partition code by separating the presentation, business and data processing. Once we've partitioned the application into manageable parts, we can build components to hold the objects at each level within the program.
Deciding exactly where to split the functionality is not easy. Even in the simple example that we just looked at, the code to convert the name to uppercase essentially belongs to both the presentation and business layers:
txtName.Text = UCase$(txtName.Text)
If we choose to put this line in the presentation tier then we're able to give the user timely feedback - indicating that the name has been converted to uppercase. But this means that we've put a business rule in the presentation tier, which is far from preferable.
We could put the code in the business tier - a good place for it, since it's a general rule that should be enforced regardless of the interface. Doing this, however, can make it much more difficult for the user-interface to provide timely feedback to the user. It's entirely possible that the business tier will be running on another machine somewhere in the network, so we could be adding a lot of network traffic if the presentation tier is asking the business layer for field-by-field information all the time.
Another possibility is to put the code in both locations. This provides timely feedback to the user, and lets the business tier centrally enforce the rule. Of course, we've duplicated the code by doing this, and so the program becomes much harder to maintain in the long run.
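To make the duplication concrete, here's a sketch of what the "code in both locations" option might look like. The `Customer` class, the `mstrName` variable and the `txtName` control are illustrative names, not part of any specific application:

```
' Business tier - a hypothetical Customer class (Customer.cls)
Private mstrName As String

Public Property Let Name(Value As String)
  mstrName = UCase$(Value)   ' business rule enforced centrally
End Property

' Presentation tier - the same rule duplicated in the form
' so the user sees the conversion immediately
Private Sub txtName_LostFocus()
  txtName.Text = UCase$(txtName.Text)
End Sub
```

Both copies of the rule must now be kept in sync by hand, which is exactly the maintenance burden described above.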
None of these solutions is optimal. This is one of the hardest issues we face as developers of distributed systems. It would be nice if there were a definite answer, so we could say, 'this is the way to do it'. Unfortunately, the choice depends on many factors, and those factors are different for each project.
So, the question remains: what are we to do about this partitioning problem? In general, there are two possible approaches: the rich (interactive) interface, and the batch interface. We'll take a look at each of these now.
Advocates of the rich interface look at the evolution of software from dull, non-interactive screens, to highly-interactive, graphical displays, and want to provide that experience for the user.
Word processing is an excellent example. Years ago, people entered their text, along with various arcane codes, into what was essentially a glorified text editor. From there, they could print their text and see what it looked like after it was formatted. User demand pressured vendors into developing what-you-see-is-what-you-get word processing, so the user eventually received a highly interactive interface with immediate feedback on appearance. Even more recently, tools such as Microsoft Word have begun to subtly provide feedback on spelling by underlining misspelled words as the user types. In short, users like highly interactive, well-designed interfaces.
However, there is a cost to providing this rich interface: it requires that many of the business rules be easily and quickly accessible from the presentation layer. If the rules are located on a server out on the network, then the presentation layer will have to make calls across that network - which will probably result in dismal interface performance.
The solution is to put some or all of the business rule processing on the clients, thus allowing the presentation layer to easily and quickly access those rules - which should provide a highly interactive experience for the user.
The major drawback to this solution is that it requires us to run part of our application on the client workstation. This can be a serious problem if we need to distribute the application to hundreds or thousands of client workstations. Fortunately, many new technologies are becoming available in the Windows environment to manage this distribution automatically. For example, Microsoft Internet Explorer and Microsoft's Zero Administration Initiative both automate the installation of client-side components.
On-Line Transaction Processing (OLTP) developers have long subscribed to a batch input concept. Web browser interfaces also follow these lines, since the batch approach minimises network traffic and leaves virtually no processing on the client.
Interfaces constructed with this technique allow the user to enter virtually anything on the screen, and perform little or no screening of input while the data is being entered. When the user accepts the screen, the business rules check all the entries and report a list of problems with the data.
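A batch-style check might be sketched as follows. The routine validates every field at once and returns the full list of problems; the rules and field names shown are hypothetical:

```
' Hypothetical batch validation, called when the user accepts the screen
Private mstrName As String
Private mcurPrice As Currency

Public Function CheckRules() As Collection
  Dim colProblems As New Collection

  If Len(Trim$(mstrName)) = 0 Then _
    colProblems.Add "Name is required"
  If mcurPrice <= 0 Then _
    colProblems.Add "Price must be greater than zero"

  ' An empty collection means the data passed every rule
  Set CheckRules = colProblems
End Function
```

The presentation layer simply displays whatever problems come back, so it needs no knowledge of the rules themselves.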
The benefits of this approach are significant. All the business rules are centralised, and so easily maintained. The presentation layer does virtually no work beyond that required by the controls with which the user interacts. This approach virtually eliminates the logistical problems of maintaining hundreds or thousands of clients.
The downside to batch interfaces is that the user-interface is very much non-interactive, since it gives the user no immediate feedback. In many cases, this type of interface will be unacceptable to the end users, who often want their application moved to Windows specifically to escape a batch-style interface.
The conflict between rich and batch interfaces is significant. As developers, we want to provide the user with the best, most interactive interface that we can manage. At the same time, we need to provide good performance, and design the application to be easily manageable - since it's going to be deployed to hundreds or thousands of client machines.
Visual Basic provides us with the tools to overcome these problems. To do this, we need to adopt an architecture where we split the business processing into two parts:

- UI-centric processing, which supports the presentation layer
- Data-centric processing, which supports the data services
The idea behind splitting the business processing apart is to keep processing as physically close to where it is needed as possible. This means that the processing needed by the presentation layer should be physically close, typically on the client. Processing that mostly works with data should be closer to the data source, often on a central application server.
If we look at a typical business object, we'll find that it provides some services that are only useful for developing an interface. We'll also find that it provides some services that are mostly useful for supporting data processing.
For instance, let's consider a simple Product object. Such an object might implement the following services:
Service | Purpose
------- | -------
ID | ID value of the product
Name | Product's name
Price | Product's price
QOH | Quantity of the product on hand
Receive(Quantity) | Receive some amount of the product into the inventory
Allocate(Quantity) | Allocate some amount of the product from the inventory
Load(ID) | Load the object from the database
Save() | Save the object into the database
Delete() | Delete the object from the database
Typically, we'd implement a Product object such that it implements all of these services. The problem is that we're left with a single object that needs to run on one machine - either the client or the server.
This puts us back at the rich versus batch interface crossroads. If we put the object on the client, we can provide a rich interface, but we've also put the data access methods on the client. If we put the object on the server, the client will need to go across the network to use any of the services provided by the object, so the user-interface would typically communicate with the object in a batch format; in other words, we're back to providing a batch interface for the user.
However, suppose we split our Product object into two separate objects: a UI-centric Product object, and a data-centric Product object. Let's consider this now.
Our UI-centric Product object would provide the following services:
Property/Method | Purpose
--------------- | -------
ID | ID value of the product
Name | Product's name
Price | Product's price
QOH | Quantity of the product on hand
Receive(Quantity) | Receive some amount of the product into the inventory
Allocate(Quantity) | Allocate some amount of the product from the inventory
All these services would very likely be used only by the user-interface. By making these available in their own object, we have the option of putting that object in a component that will be installed on each client. Now the UI code will be able to interact with this object locally, without any network traffic. Any business rules that are implemented by these services will be enforced interactively. Thus we can implement a rich, highly interactive interface for the user.
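A sketch of such a UI-centric class might look like the following. The specific rules shown (a required name, no over-allocation of stock) are illustrative assumptions:

```
' UI-centric Product object - a sketch (Product.cls, installed on each client)
Private mlngID As Long
Private mstrName As String
Private mcurPrice As Currency
Private mintQOH As Integer

Public Property Get Name() As String
  Name = mstrName
End Property

Public Property Let Name(Value As String)
  ' Rule enforced locally, so the user gets immediate feedback
  If Len(Value) = 0 Then Err.Raise vbObjectError + 1, , "Name is required"
  mstrName = Value
End Property

Public Sub Receive(Quantity As Integer)
  mintQOH = mintQOH + Quantity
End Sub

Public Sub Allocate(Quantity As Integer)
  If Quantity > mintQOH Then _
    Err.Raise vbObjectError + 2, , "Insufficient quantity on hand"
  mintQOH = mintQOH - Quantity
End Sub
```

Because the object runs on the client, every one of these checks happens without a single network call.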
In general, the UI-centric business processing resides on the client machine, so it's readily accessible to the presentation layer. This is processing that does not require interaction with the database or with other centralised resources. We're including a lot of business rules and business processing in this category.
For instance, if a data field can only accept a future date, this is the place for that rule. If a series of values need to be totalled together and run through a mathematical formula, this too is where the processing belongs.
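The future-date rule, for example, might be enforced in a property procedure like this (the `ShipDate` property is a hypothetical example):

```
' Hypothetical UI-centric rule: the ship date must be in the future
Private mdtShipDate As Date

Public Property Let ShipDate(Value As Date)
  ' Date returns the current system date, so this check needs
  ' nothing beyond the client machine itself
  If Value <= Date Then _
    Err.Raise vbObjectError + 3, , "Ship date must be a future date"
  mdtShipDate = Value
End Property
```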
On the other hand, if a field needs to be checked against a column in a large table, or verified against some external data source (like a credit check), then this is the wrong place to do the work.
By keeping the UI-centric processing close to the client, we can build some very nice, highly interactive user-interfaces. While we can't tell the user about every business rule violation that they might cause, we can probably catch the majority of them and present them to the user in a timely fashion.
The data-centric Product object would provide the following services:
Property/Method | Purpose
--------------- | -------
Load(ID) | Load the object from the database
Save() | Save the object into the database
Delete() | Delete the object from the database
These services interact exclusively with the database to retrieve, add, update or remove our object's data. They are not the types of services that the user-interface requires on a field-by-field basis, so we won't take any serious performance hit by running them on a separate machine from the client workstation.
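A data-centric `Load` method might be sketched with ADO as shown below. The connection string, table and column names are assumptions for illustration only:

```
' Data-centric Product object - a sketch using ADO (ProductPersist.cls)
Public Function Load(ByVal ID As Long) As String
  Dim cn As New ADODB.Connection
  Dim rs As New ADODB.Recordset

  ' Hypothetical connection string and schema
  cn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=Sales"
  rs.Open "SELECT ID, Name, Price, QOH FROM Products WHERE ID=" & ID, cn

  ' Package the fields into a single string so the data can be
  ' handed back to the client in one network round trip
  Load = rs("ID") & vbTab & rs("Name") & vbTab & _
         rs("Price") & vbTab & rs("QOH")

  rs.Close
  cn.Close
End Function
```

Because this object runs close to the database, the query and the row handling never cross the network; only the packaged result does.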
A lot of business rules require quick access to the data source or some other centralised resource, such as a credit checking service via TCP/IP or messaging such as a central fax service. Again, in keeping with the philosophy that processing should be performed physically close to the source, this type of business rule belongs on some central machine. Usually, this would be an application server, although that may vary. We'll discuss physical architectures more thoroughly in Chapter 4.
It's important to understand that data-centric business processing is not the same as data services. Data services are mostly concerned with storage, retrieval and integrity of data; business processing, on the other hand, is all about enforcing business rules and performing any business logic.
A good example of a data-centric business rule is that of verifying a user's entry against a column in a very large table. It is not practical, or desirable, to download all that data to the client machine, so the rule can't be enforced there.
Such a rule might belong in the data services (implemented in SQL) if it's just a simple lookup. But suppose the business rule is more complex - such that more processing is required beyond just the lookup. This is often much easier to do in a high-level language, such as Visual Basic, than it is to do in a SQL stored procedure.
As another example, suppose we have an order entry application, and our users want automatically generated sequential order numbers that need to be displayed as soon as the order screen is presented to the user. Generating sequential order numbers is a form of business processing, but to make them sequential across all clients requires centralised processing. We could do this at the database level, but it would probably make more sense to handle it through our business logic. Implementing a simple business service to provide the next sequential order number is easy, and keeping it at this level can help reduce the load on the database server.
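Such a service could be as simple as the following sketch. A real implementation would persist the counter rather than hold it in memory, and would need to guard against concurrent callers; this version just shows the idea:

```
' Hypothetical centralised order-number service
' (runs on the application server so numbers stay sequential)
Private mlngNextOrder As Long

Public Function GetNextOrderNumber() As Long
  mlngNextOrder = mlngNextOrder + 1
  GetNextOrderNumber = mlngNextOrder
End Function
```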
Even though we've split the business processing into two parts, those parts remain tightly related. In fact, they must work together efficiently, since neither the UI-centric nor the data-centric processing can, by itself, provide all the business logic required for an application.
What we need is for the two parts of the business processing to handle all their communications internally, so that neither the presentation layer, nor the data processing layer, is aware that the business logic has been split apart. Essentially, we want it to appear as a single layer of software.
Each UI-centric (client-side) business object must be able to efficiently communicate with any corresponding data-centric (server-side) object. The communication between these objects will typically be across a network, and so the goal is to minimise the dialog between the objects.
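One way to minimise that dialog is to transfer the object's entire state in a single call, rather than property by property. The sketch below uses a tab-delimited string as the transfer format; the field layout is an assumption for illustration:

```
' State-transfer sketch for the UI-centric Product object
Private mlngID As Long
Private mstrName As String
Private mcurPrice As Currency
Private mintQOH As Integer

' Package all the object's data into one string
Public Function GetState() As String
  GetState = mlngID & vbTab & mstrName & vbTab & _
             mcurPrice & vbTab & mintQOH
End Function

' Restore the object's data from a string produced by GetState
Public Sub SetState(Buffer As String)
  Dim arrFields() As String
  arrFields = Split(Buffer, vbTab)
  mlngID = CLng(arrFields(0))
  mstrName = arrFields(1)
  mcurPrice = CCur(arrFields(2))
  mintQOH = CInt(arrFields(3))
End Sub
```

With this arrangement, a save or load costs one network call carrying one string, regardless of how many fields the object has.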
In Chapter 4, we'll go into the details and see how we can make this communication fast and practical.