Constant innovation in computing hardware and software has made a multitude of powerful and sophisticated applications available to users at their desktops and across their networks. Yet with such sophistication have come many problems for developers, software vendors, and users. For one, such large and complex software is difficult and time-consuming to develop, maintain, and revise. Revision is a major problem for monolithic applications, even operating systems, in which features are so intertwined that they cannot be individually and independently updated or replaced. Furthermore, software is not easily integrated when it is written in different programming languages and runs in separate processes or on separate machines.
Even when integration facilities have been available, the programming models for working with different services across various boundaries have not been consistent. The trends of hardware downsizing and greater software complexity are driving the need for distributed component environments. Such environments require a generic set of facilities for finding and using services (components), regardless of who provides them or where they run, as well as a robust method for evolving services independently over time without losing compatibility with clients of earlier versions. Any real solution to these problems must also take advantage of object-oriented concepts and be capable of working with legacy code—that is, look to the future without forgetting history.
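To make the idea concrete, here is a minimal sketch in C++ of what such a facility might look like; the names (ISpellChecker, FindService, BasicSpellChecker) are hypothetical and stand in for no real API. The client programs only against an abstract interface and asks a locator for some provider of the service; which vendor supplies the implementation, and where it runs, stays hidden behind that interface.

    // Hypothetical sketch, not a real API: a generic "find and use a
    // service" facility built around an abstract interface.
    #include <memory>
    #include <string>

    // The contract that every provider of the service implements.
    struct ISpellChecker {
        virtual ~ISpellChecker() = default;
        virtual bool IsCorrect(const std::string& word) = 0;
    };

    // One provider among potentially many; the client never names it.
    struct BasicSpellChecker : ISpellChecker {
        bool IsCorrect(const std::string& word) override {
            return !word.empty();   // placeholder logic
        }
    };

    // The locator chooses a provider on the client's behalf.
    std::unique_ptr<ISpellChecker> FindService(const std::string& serviceName) {
        if (serviceName == "spellchecker")
            return std::make_unique<BasicSpellChecker>();
        return nullptr;
    }

    int main() {
        auto checker = FindService("spellchecker");
        return (checker && checker->IsCorrect("component")) ? 0 : 1;
    }

A real locator would consult a registry, load a library, or reach across a process or machine boundary instead of constructing the provider directly, but the client-side code would not need to change.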
As an example, consider the problem of creating a system service API that works with multiple providers of some service in a polymorphic fashion. In other words, you want a client of the service to be able to use any particular provider transparently, without any special knowledge of which specific provider—or implementation—is in use. In traditional systems, every application calls a central piece of code for meta-operations such as selecting a service and connecting to it. Usually this central code is itself a service, an object manager, that exposes a function-call programming model with system-provided handles as the means of object selection. But once applications have used the object manager to connect to a service, the object manager only gets in the way like a big brick wall and forces unnecessary overhead. Yuck.
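The following sketch illustrates that traditional handle-based model; the type and function names are invented for illustration and belong to no actual system API.

    // Hypothetical sketch of the traditional object-manager model: a flat
    // function-call API in which the system hands out opaque handles and
    // every subsequent operation is routed back through the manager.
    #include <cstring>

    typedef int HSERVICE;               // opaque, system-provided handle

    HSERVICE ServiceConnect(const char* provider) {
        return provider ? 1 : 0;        // the manager picks and opens a provider
    }

    // Even after the connection exists, the client cannot talk to the
    // provider directly; the manager mediates (and taxes) every call.
    int ServiceQuery(HSERVICE h, const char* request, char* out, int cb) {
        if (h == 0) return -1;
        std::strncpy(out, request, cb - 1);
        out[cb - 1] = '\0';
        return 0;
    }

    void ServiceDisconnect(HSERVICE) {}

    int main() {
        HSERVICE h = ServiceConnect("SomeProvider");
        char result[64];
        ServiceQuery(h, "lookup", result, sizeof(result));
        ServiceDisconnect(h);
        return 0;
    }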
Worse yet, such traditional service models make it nearly impossible for a service provider to express new, enhanced, or unique capabilities to potential clients in a uniform fashion. A well-designed traditional service architecture, such as Microsoft's Open Database Connectivity (ODBC) API, might provide the notion of different levels of service. Applications can count on the minimum level of service and then determine at run time whether the provider supports higher levels of service in certain predefined quanta. The providers, however, are restricted to the levels of service defined at the outset by the API; they cannot readily offer a new capability that clients could discover at run time and access as if it were part of the original specification. To take the ODBC example, the vendor of a database provider intent on doing more than current ODBC standards permit must convince Microsoft to revise ODBC in a way that exposes that vendor's extra capabilities. In addition, the Microsoft bottleneck limits the ability of multivendor initiatives independent of Microsoft to exploit an existing technology for their own purposes. Thus, traditional service architectures cannot be readily extended or supplemented in a decentralized fashion—you have to go through the operating system vendor. Yuck.
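A sketch of such a level-of-service scheme, with invented names rather than the real ODBC API, shows both how the run-time check works and where it stops:

    // Hypothetical sketch of predefined "levels of service" in the spirit
    // of ODBC conformance levels; the enum and function names are
    // illustrative only.
    enum ServiceLevel { LEVEL_CORE = 1, LEVEL_EXTENDED = 2, LEVEL_FULL = 3 };

    // A provider can report only one of the levels that the API's author
    // defined at the outset.
    ServiceLevel GetProviderLevel() { return LEVEL_EXTENDED; }

    int main() {
        // The client adapts at run time, but only within those quanta.
        if (GetProviderLevel() >= LEVEL_EXTENDED) {
            // ...use the extended functions the specification defines...
        }
        // A provider with a genuinely new capability has nowhere to put it:
        // nothing outside the predefined levels can be discovered or used
        // until the specification itself is revised.
        return 0;
    }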
Traditional service architectures also tend to be limited in their version handling. The problem with versioning is that it represents capabilities (what a piece of code can do) and identity (what a piece of code is) in an interrelated, ambiguous way. A later version of some piece of code, such as "Code version 2," indicates that it is like "Code version 1" but different in some distinct and identifiable way. The trouble with versioning in this manner is that it is difficult for code to indicate exactly how it differs from a previous version and, worse yet, for clients of that code to react appropriately to new versions—or to not react at all if they expect only the previous version. The versioning problem can be reasonably managed in a traditional system when there is only a single provider of a certain kind of service: the version number of the service is checked when the client binds to it, the service is extended only in an upward-compatible manner (a significant restriction as software evolves over time) so that a version n provider still works with consumers of versions 1 through n-1, and references to a running instance of the service are not freely passed among clients that might expect or require different versions. But these kinds of restrictions are unacceptable in a multivendor, distributed, modular system with polymorphic service providers. In other words, yuck.
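The single-provider case looks roughly like the following sketch, again with hypothetical names: the client states at bind time the version it was written against, and an upward-compatible version n provider accepts requests for versions 1 through n.

    // Hypothetical sketch of bind-time version checking with a single,
    // upward-compatible provider.
    #include <cstdio>

    const int PROVIDER_VERSION = 3;    // this provider implements version 3

    bool BindToService(int requestedVersion) {
        // Upward compatibility: honor any request up to our own version.
        return requestedVersion >= 1 && requestedVersion <= PROVIDER_VERSION;
    }

    int main() {
        if (BindToService(2)) {        // a version 2 client still binds
            std::printf("bound\n");
        }
        // Once multiple vendors implement the service and clients hand
        // references to one another, a single number can no longer say
        // which capabilities a given running instance actually has.
        return 0;
    }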
Thus, service management, extensibility of an architecture, and versioning of services are the problems. Application complexity continues to increase, and functionality becomes more and more difficult to extend. Monolithic applications are popular because it is safer and easier to collect all interdependent services, and the code that uses those services, into one package. Interoperability between applications suffers accordingly, because monolithic applications are loath to let outsiders access their functionality and thereby build a dependence on a certain behavior of a certain version of the code. Because end users demand interoperability, however, software developers are compelled to attempt some integration anyway, which leads back to the problem of software complexity and completes a vicious cycle that limits the progress of software development. Major yuck.