Integration With Existing Systems
While we all tend to build example or test-bed applications that revolve around a particular operating system and methodology, such as our Wrox Car Co sample application, the real world often dictates that our applications must be integrated into an existing system; often one that uses legacy components and 'foreign' data stores.
For example, it's unlikely that your financial director would agree to shift the whole company's payroll system onto, say, Windows NT and SQL Server just so that you can add a new interface to it. Likewise, your applications or components may need to interface with data processing components that run on a different operating system, perhaps on a server on the other side of the continent.
Coping with 'foreign' data sources is not such a major headache, providing that a suitable OLE DB provider or ODBC driver is available for us to connect to them from a Windows platform. As you saw in previous chapters, we can talk to almost any data source this way, and (together with MTS) easily implement distributed transactions.
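As a reminder of how straightforward that connection can be, here's a minimal sketch using ADO from Visual Basic. The DSN name LegacyPayroll, the login details, and the table and column names are purely hypothetical; any data source with a suitable driver would do:

' Open a connection to a 'foreign' data source through its ODBC driver
' (the DSN, login and table names are hypothetical examples)
Dim objConn As New ADODB.Connection
Dim objRS As ADODB.Recordset

objConn.Open "Provider=MSDASQL;DSN=LegacyPayroll;UID=reports;PWD=secret"

' Once connected, we can query it just like any other ADO data source
Set objRS = objConn.Execute("SELECT EmployeeID, GrossPay FROM Payroll")

Do While Not objRS.EOF
  Debug.Print objRS("EmployeeID"), objRS("GrossPay")
  objRS.MoveNext
Loop

objRS.Close
objConn.Close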
But this technique has one big drawback. Usually, we'll be implementing business rules that carry out the data access operations in a controlled and pre-defined way, rather than just hitting the data store directly. These objects will often be running on the foreign server's operating system, and this means that we may have to rewrite the business rules components in order to run them on our own system. If we can't do this, or if we don't want to, we're into the second problem situation: how does our Windows-based application communicate with (say) a component written in COBOL and running on a mainframe system?
While there are other ways that it can be achieved, message queuing provides one of the simplest solutions for communication between disparate systems in this kind of environment.
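To give you a flavor of what this looks like in code, here's a minimal sketch that sends a message using the MSMQ COM objects from Visual Basic. The queue path .\private$\WroxOrders and the message contents are just assumptions for illustration; on a real system the queue might be one that a bridge or connector exposes on behalf of the mainframe application:

' Send a message to a queue using the MSMQ COM objects
' (the queue path and message contents are hypothetical examples)
Dim objQInfo As New MSMQ.MSMQQueueInfo
Dim objQueue As MSMQ.MSMQQueue
Dim objMsg As New MSMQ.MSMQMessage

objQInfo.PathName = ".\private$\WroxOrders"

' Open the queue with send access
Set objQueue = objQInfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)

objMsg.Label = "New order"
objMsg.Body = "OrderID=1234;Model=Estate;Color=Red"
objMsg.Send objQueue

objQueue.Close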
Coping With Real-World Networks
Message queuing also provides another major benefit in a distributed application. For some time it's been rumored that computer networks are not 100% reliable. OK, so you might get 99% availability on your office LAN—but that argument doesn't hold much water when a customer is complaining that you charged them for a product, but the update to the warehouse shipments database in Mexico failed to get processed.
One of the benefits of using a transacted application, as in our example with MTS, is that you can be sure that this situation doesn't arise. If the remote database can't be updated then the transaction aborts and the customer is not charged for the goods. Your application tells the order clerk that this is the case, and they can do something about it.
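In MTS terms, it's the component voting on the outcome of the transaction that gives us this guarantee. A skeleton of the pattern looks something like this (the method name and the database work are hypothetical and omitted here):

' Skeleton of an MTS component method that votes on the transaction outcome
' (the method name is hypothetical; the database code is omitted)
Public Function PlaceOrder(strOrderDetails As String) As Boolean
  Dim objContext As ObjectContext
  Set objContext = GetObjectContext()

  On Error GoTo Failed

  ' ... update the local and remote databases here ...

  objContext.SetComplete   ' everything worked, so the transaction can commit
  PlaceOrder = True
  Exit Function

Failed:
  objContext.SetAbort      ' something failed, so the whole transaction rolls back
  PlaceOrder = False
End Function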
The Downside Of Transacted Applications
While using transactions to provide an all-or-nothing process is great, it has a major downside. Your application can't commit any of the operations involved in the transaction until all of them have completed successfully. And of course, if your application steps outside the cozy confines of your fast and 'reliable' LAN, it's likely to meet problems such as the latency—or even non-availability—of the connection to the remote system. If that connection is via the Internet, the situation is probably going to be even worse.
To get round this, and provide a responsive and reliable interface for our applications, we really want some way of coping with the non-availability or slow response of a remote system. We came across this when we built our Wrox Car Co application in previous chapters: it needs to update a remote database at head office with the details of each order. So what do we do if the phone company has just dug through our cables while installing next door's new line? Do we tell the customer that they can't buy a car today? Again, message queuing can provide a solution.
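The key point is that the sending code is no different from the earlier sketch; MSMQ simply stores the message locally until the link to head office comes back. At the head office end, a separate process can then read the queued orders and apply them to the database whenever they arrive. Here's a minimal sketch of that receiving side, again with a hypothetical queue path and with the database update omitted:

' Read queued orders at the head office end
' (the queue path is a hypothetical example; the database update is omitted)
Dim objQInfo As New MSMQ.MSMQQueueInfo
Dim objQueue As MSMQ.MSMQQueue
Dim objMsg As MSMQ.MSMQMessage

objQInfo.PathName = "HeadOfficeServer\WroxOrders"
Set objQueue = objQInfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)

' Wait up to 60 seconds for the next message, then give up for now
Set objMsg = objQueue.Receive(ReceiveTimeout:=60000)

If Not objMsg Is Nothing Then
  ' ... update the orders database with objMsg.Body here ...
  Debug.Print "Processed order: " & objMsg.Label
End If

objQueue.Close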