Microsoft Corporation
Texas Instruments
October 1996
Microsoft Corporation and Texas Instruments, Inc. began discussions in 1992 about how to reduce cycle time for building applications. Out of this dialogue came a vision for application development based on assembly from new and prefabricated components. This vision guides the evolution of both companies’ analysis, design, construction, deployment and administration tools and techniques. Evidence of the commitment that Microsoft and Texas Instruments have to this vision is their collaborative effort to design repository technology that will be an important enabler in realizing component-based development.
Application development has been evolving in both techniques and participants since its inception. It has changed from an activity that involved a few hundred people working with ones and zeros in the 1950s, to an activity that involved millions of high-level language coders in the 1980s, to an activity that today includes tens of millions of end users directly interacting with spreadsheets, macro languages and query tools. Throughout this period, the emphasis has been on applying structure and technology in an effort to shift application development from a highly skilled craft to a repeatable engineering discipline.
Guidelines for application architecture—how an application is structured—also have evolved. When applications were constructed as monolithic executables deployed on a single processor, the structure of the application was solely the purview of the developer. Guidelines emphasized programmer productivity and code maintainability. Today, businesses are faced with the need to exploit emerging distributed computing technology to improve levels of customer service and lower costs. An application is no longer a discrete executable but rather a collection of cooperating software components and shared data. Guidelines for structuring applications emphasize deployment flexibility, ease of administration, and end-user empowerment. How the application is structured is important not only to the information systems staff but also to the core of the business.
In today's business climate, the ability to partition and reconfigure the functionality of an application across distributed (and heterogeneous) computing resources is only half the story. The other half is the ability to package that functionality so that it can be reused across applications. Sought-after business benefits include enterprisewide consistency of business rules, faster time to market, and management of change.
Of course, the library of reusable components does not exist today. Corporate management information systems (MIS) departments are still in the business of application development; application assembly from reusable components is a three-to-five-year vision. The technology infrastructure itself is still maturing. In fact, there is still much debate about the "right" model for developing truly reusable components. This evolution will take time, and the vision outlined in this paper will evolve in the process.
These two requirements—application partitioning and reuse—lead to new models for the structure of applications. In the target execution environment, a business application is no longer a discrete set of executables and data, separate from another application's set of executables and data with import/export interfaces defined between them. Rather, an application becomes simply that set of services required to support the scope of a specified business problem; whenever two applications require the same service, they share the component that implements the service. The analysis and logical design process is one of discovering the services (and their usage characteristics) that are required to deliver on the business problem; the physical design process is one of deciding which services need to be newly developed code (ideally, little) and which services can be delivered by an existing base of available components (ideally, most).
Run-time technologies that can support executable component invocation and coordination across distributed, heterogeneous computing platforms are just maturing. Microsoft Corporation's core technologies for supporting such architectures are embodied in OLE. OLE is a set of extensible object services built on top of a robust object model (the Component Object Model, or COM) for the interaction of autonomous components: components written by different companies, in different programming languages, and constructed without knowledge of who will consume the services they provide.
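To make this interaction model concrete, the following minimal C++ sketch shows a consumer that depends only on a published abstract interface, not on the component that implements it. The names used here (ISpellCheck, TextService, CheckDocument) are illustrative stand-ins, not actual OLE or COM definitions, and the sketch omits COM details such as interface negotiation and reference counting.

    // Minimal C++ sketch of interface-based interaction in the COM style.
    // Names (ISpellCheck, TextService) are illustrative stand-ins, not OLE APIs.
    #include <iostream>
    #include <string>

    // An interface: a pure-abstract class with no implementation details.
    struct ISpellCheck {
        virtual bool IsCorrect(const std::string& word) const = 0;
        virtual ~ISpellCheck() = default;
    };

    // A component written by one vendor, in one language, that implements
    // the published interface without knowing who will call it.
    class TextService : public ISpellCheck {
    public:
        bool IsCorrect(const std::string& word) const override {
            return word != "teh";   // trivial rule, for illustration only
        }
    };

    // A consumer written independently; it depends only on the interface.
    void CheckDocument(const ISpellCheck& checker) {
        std::cout << (checker.IsCorrect("teh") ? "ok" : "misspelled") << "\n";
    }

    int main() {
        TextService service;
        CheckDocument(service);   // the consumer never names TextService itself
    }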
The companion challenge for Microsoft, Texas Instruments, and others is to evolve the techniques and tools that will enable organizations to design, construct, deploy, and administer applications that take full advantage of this execution environment and realize the sought-after business benefits.
Component-based development is a phrase that refers to the techniques and tools that enable construction of applications from new and prefabricated components. It is, in a sense, the next generation of client/server computing. The phrase client/server computing initially referred to a downsizing trend in which the end user controls the flow of the application and computing is distributed between the desktop (client) and a shared network-based platform (server). The identifying characteristic of component-based applications is that the partitioning model is no longer two-tier and tied closely to a physical platform configuration, but rather multitier (or n-tier) and able to be distributed and dynamically reconfigured across a heterogeneous computing base.
This paper explores the characteristics of that target development environment:
Throughout these sections, benefits and challenges for component-based development are noted. The vision outlined in this paper will be validated and refined by industry allies and customers. It will be used to help define requirements for the techniques, tools, and technologies that component developers and business solution builders alike will need to design, construct, deploy, administer, and reuse components and component-based applications.
Information systems executives are faced with a number of difficult challenges today. They must find ways to accomplish the following:
The future of IS is to provide an infrastructure consisting of both software and hardware that business users can employ in the service of a responsive, flexible, agile enterprise. The approach may be summarized as the following: buy or build components, assemble them into business solutions, and customize them for specific needs. The result is very rapid response to changing business conditions and empowerment of business workers to employ information from across the enterprise with minimal dependence on a central IS department.
Many corporations have already adopted this approach: their application development strategy is now to reuse first (what already exists), then buy (when a suitable product can be found and customized), and build only when necessary. These organizations are moving away from a traditional development approach centered in a central IS shop toward the one illustrated in Figure 1.
Figure 1. Component-based development—relative population using today's technology
Central IS people retain major responsibility for the computing infrastructure and the development of enterprise-shared components, but they work closely with the new breed of application developers to provide solutions to specific requirements by creating additional components or customizing shared components to meet local unit needs. In some cases, these application developers may be part of the business unit organization, but they are trained in the necessary information technology (IT) skills of building, assembly, and customization.
Components intended for organizationwide use are natural candidates for the central development group to build. Similarly, professional developers might best take responsibility for assembling components into applications that support business processes pervasive throughout the organization.
On the other hand, components that implement the business rules of a specific department, or organizationwide components that require domain specialization, would be built by departmental programmers.
Over time, as more reusable components become available and central IS increasingly takes on the role of infrastructure management and enterprise coordination, the relative populations can be expected to shift in the direction shown in Figure 2.
Figure 2. Component-based development—relative population in five years
A component-based approach offers new options for dividing the tasks of application development among workers according to their specialized interests and skills. Empowered workers and self-managed teams will be able to change or create workflows. Information workers might be skilled in using macros, but if they prefer not to assemble the entire application, they might employ wizards—intelligent modules embedded in development tools—to guide them through modifying the application to conform to their preferred way of working. Business users—the segment of the community whose computing needs traditionally are met by professional development staff—might perform simple tasks such as changing fonts and screen colors to suit their preferences.
As computing skills become more pervasive and the expectations of business users rise, Microsoft and Texas Instruments expect that business users will perform fewer rote, scripted tasks and more self-initiated, ad hoc information access and manipulation. This shift in the business user's role from task worker to skilled information worker is reflected in the change from Figure 1 to Figure 2.
In the future, few people will only react to applications supplied to them. In the responsive, empowered organization, each person is actively engaged in contributing to its success. A component-based development approach addresses a key issue facing businesses today—the need for rapid response to change—by empowering information workers to create their own applications, thus closing the gap between them and traditional development groups. The need for corporations to empower every individual leads Microsoft and Texas Instruments to believe the component-based approach will be widely adopted as the next step in the evolution of application development.
Increasingly, end users must become participants in application customization to integrate desktop productivity tools, such as word processors, personal databases, and spreadsheets, as part of the overall application. These tools must interact with application components provided by developers in a seamless and transparent way. Empowered by the ease with which data can be moved between graphs and worksheets in a spreadsheet tool, and the ease of use of modern desktop productivity tools, users will not accept the inflexibility of applications provided solely by their IS groups. Applications on the desktop should be as well integrated with one another as the best-integrated productivity tools.
By having desktop access to multiple remote information sources, business users can monitor and manage complex, multifunctional business processes through a single application. For example, when a telesales representative is given access to customer and inventory information, integrated with an order-entry system, the worker can take an order, apply business rules to determine the customer's available credit, and inform the customer whether the desired item is in stock and when to expect delivery—all with a single telephone call.
Workflows natural to the business process are supported directly by the desktop user interface. Desktop integration is more than just the graphical user interface—it is about delivering services to end users.
Figures 1 and 2 imply that as application development moves toward an assembly model, software construction as we know it today will no longer exist. Rather, applications will be assembled from prefabricated components. Central IS will be one source of those components; business developers will be another. But, if IS is committed to the objectives embodied in the phrase "buy before build," who will build the components that IS wants to buy? The answer is likely to be third parties.
Third-party development shops are positioned to become the supply-chain component builders to the enterprise-based solution builders. However, making this happen will require the following:
Each of these requirements makes application assembly from third-party components a challenge. However, the industry is already seeing an initial market for third-party business components for applications such as financial, accounting, human resources, and inventory management because the set of useful services is well understood. Industry collaborators in various vertical markets—such as banking, manufacturing, and design and modeling—are defining OLE interface specifications that can provide the basis for competitive component offerings.
Microsoft and Texas Instruments envision a shift in the enterprise from build to reuse to buy, but don't envision the disappearance of traditional development. Component builders—whether IS, line of business, or third parties—will continue to require the full suite of development tools and languages. What will be needed in addition are the component management and application assembly, deployment, and administration tools that make component-based application development feasible.
The previous section described a development model in which application components may be built by central and business development groups and, increasingly, by third parties. Those components are then assembled into new applications that support business tasks and processes that may not even have been envisioned when the components were built. This section defines a number of formal concepts that make this development model work and that form the basis for creating a viable pool of application building blocks.
One of the key concepts in component-based development is that of a service. A service is simply a request protocol for a logical unit of work—pay employee, update record, print document, or price product—that can be invoked without the requester needing to know which software implements it or how.
Service provision is a metaphor drawn from the real world. In software as in the real world, the consumer of a service should not need to possess the knowledge required to render that service. Rather, the consumer should be able to choose one service provider or another (where a choice exists) and trust the provider to take care of changes and improvements to the function as required.
As in the real world, responsibilities on both sides of the interaction need to be defined, and expectations of both parties must be set correctly. In the world of software, this is handled by ensuring that the service to be provided is defined in an interface. The interface acts as a kind of contract—it describes the nature of the service to be provided and the obligations and responsibilities of the provider and consumer. A description of the service also might include such details as security and access control information. These concepts are illustrated in Figure 3.
Figure 3. The Interface is a Contract
In the software world, a piece of software is usually said to implement the interface. In fact, one piece of software may implement more than one interface, and thus may be the provider of a range of services, in the same way that a law firm may provide services such as writing wills, handling divorces, and drafting business contracts. Conversely, an interface may be implemented by more than one piece of software, which creates potential competition that benefits the service consumer. The choice of provider may be made explicitly by the consumer or by a broker whose decisions are influenced by physical proximity, resource cost, and so forth.
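The following C++ sketch illustrates both points: two independently written providers implement the same interface (the contract), and a simple broker chooses between them on behalf of the consumer. All names (IPrintService, LocalPrinter, NetworkPrinter, ChooseProvider) and the cost-based selection rule are hypothetical.

    // Sketch: one interface, two independent providers, and a trivial broker.
    // All names and the selection rule are invented for the example.
    #include <iostream>
    #include <memory>
    #include <string>

    struct IPrintService {                        // the contract
        virtual void Print(const std::string& doc) = 0;
        virtual double CostPerPage() const = 0;   // part of the published description
        virtual ~IPrintService() = default;
    };

    class LocalPrinter : public IPrintService {
    public:
        void Print(const std::string& doc) override { std::cout << "local: " << doc << "\n"; }
        double CostPerPage() const override { return 0.10; }
    };

    class NetworkPrinter : public IPrintService {
    public:
        void Print(const std::string& doc) override { std::cout << "network: " << doc << "\n"; }
        double CostPerPage() const override { return 0.02; }
    };

    // A broker selects a provider on behalf of the consumer, here by cost alone.
    std::unique_ptr<IPrintService> ChooseProvider() {
        auto a = std::make_unique<LocalPrinter>();
        auto b = std::make_unique<NetworkPrinter>();
        return (a->CostPerPage() <= b->CostPerPage())
                   ? std::unique_ptr<IPrintService>(std::move(a))
                   : std::unique_ptr<IPrintService>(std::move(b));
    }

    int main() {
        auto printer = ChooseProvider();   // the consumer never names a concrete provider
        printer->Print("quarterly report");
    }

In a distributed environment, the broker's decision could equally be based on physical proximity or current load; the consumer's code would not change.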
This concept of service provision does the following:
A software component is a unit of code that implements one or more clearly defined services as defined by a set of published interfaces. As a consequence of this definition, software components possess the following attributes:
Using software components as application building blocks should bring many benefits to the software industry. Applications built from components should be implemented faster because less new construction is involved overall. They also should be more reliable when built from components that have been tried and tested by earlier developers.
Structuring an application as a collection of cooperating components that can be distributed and administered across a networked computing base is a reasonably well-accepted concept. However, the idea described in this and the previous section is that these components can in turn become the building blocks from which new applications are assembled. Further, the assertion is made that reusable services can be successfully defined and implemented entirely outside the scope of a particular application development project.
The problem is that component development, deployment, and reuse often are addressed as purely technical issues. The technology makes it possible to partition applications and reconfigure the execution environment, but technology alone does not provide the advantages of cross-application building blocks. As corporations have found, it is sometimes difficult to construct components that abstract the business problem in such a way that they can be meaningfully reused or shared to solve new business problems.
The following section explores the benefits and challenges of reuse and looks at some concepts that appear promising in helping to realize the benefits of component reuse.
Reusability of software has been cited as a significant objective to be realized from a component-based development approach. Anticipated benefits include the following:
However, several major challenges have hindered successful, widespread achievement of these goals:
Regardless of the modeling technique used by business analysts, one of the resulting modeling deliverables is, with increasing frequency, a business object model. Business objects are meaningful real-world items—customer, form, order—that business users and component developers alike can understand. Business objects are encapsulated types because their behavior in collaboration with other business objects, and in response to business events, is specified by their operations. These operations, too, are meaningful in a business sense.
Business objects describe the behavior of real-world items in a formal way. A business object is a design abstraction, not a piece of code. The ability to describe behavior formally enables the processing logic to be encoded in software. This is analogous to technology vendors' efforts to identify and encapsulate system objects, such as user, access token, or print queue consistently across the technology services platform. Class libraries (component implementations) for these abstractions are increasingly available to programmers and power users.
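As a rough illustration of how a business object abstraction might map to code, the sketch below models a hypothetical Customer whose operations are meaningful in business terms and whose internal data cannot be manipulated directly. The names and the credit rule are assumptions for illustration only, not part of any specific modeling method.

    // Sketch: a business object ("Customer") whose business-meaningful operations
    // encapsulate the rules behind them. Names and rules are illustrative only.
    #include <iostream>
    #include <string>

    class Customer {
    public:
        Customer(std::string name, double creditLimit)
            : name_(std::move(name)), creditLimit_(creditLimit), balance_(0.0) {}

        // Operations expressed in business terms, not storage terms.
        bool CanPlaceOrder(double orderTotal) const {
            return balance_ + orderTotal <= creditLimit_;   // the encapsulated rule
        }
        void RecordInvoice(double amount) { balance_ += amount; }
        void RecordPayment(double amount) { balance_ -= amount; }

        const std::string& Name() const { return name_; }
        double OutstandingBalance() const { return balance_; }

    private:
        std::string name_;
        double creditLimit_;
        double balance_;    // callers never manipulate this directly
    };

    int main() {
        Customer customer("Acme Corp", 5000.0);
        customer.RecordInvoice(4200.0);
        std::cout << customer.Name()
                  << (customer.CanPlaceOrder(1000.0) ? " may order\n" : " is over its limit\n");
    }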
Business objects provide a useful foundation for building standard software components because they do the following:
It follows that the resulting software components do the following:
If the business user is to be involved in the development process, from construction to assembly, then software components must be equally understandable and recognizable in business terms. Business objects thus provide consistent semantics for the business models and, by implication, have the potential to bring the same consistency to the software components that implement them.
(Note: The reverse is not true. That is, software components aren't business objects just because they implement some business process semantics and rules.)
Most software component reuse opportunities today occur when the semantics are reasonably well agreed upon—that is, the object models, were they to be constructed formally, would meet with little objection. These are largely technology services rather than business services, where technology services include services to handle print, list box, and other infrastructure objects with business-neutral semantics and rules. Third-party component building blocks today, from OLE controls to database management systems (DBMSs), are largely technology services. Interest in what have been called commercial line of business objects will continue to increase but will require agreement on business interfaces and semantics across companies.
A large percentage of business software reuse opportunities today involve reuse of source code. Often, this reuse is informal rather than formal. Individual programmers reuse algorithms and data-validation routines. They share that code with other members of their development team or company through source-code control systems. Code is copied, tailored, and modified for new uses or new deployment constraints. Reuse of source code will continue to be important for component builders.
But, why isn't there more business reuse of run-time code? Besides the general challenges in component reuse outlined earlier, business application developers face other specific challenges:
In the future, the development and execution environments for reusable components must address these issues, providing mechanisms for composition, customization, and administration. In addition, the development environment must recognize that reuse opportunities are possible across the development life cycle, including the following:
Having a consistent definition for application building blocks across the development life cycle and the tools that support it is key to reuse in a business context.
The previous section introduced the building-block concepts of service, component, and business object. This section shifts the discussion from a building-block perspective to a business-application perspective. What is a business application? How does the service provision concept for components apply to the business applications that comprise them?
This section addresses these questions, looking first at the structure of business applications and then at a logical architecture for component-based applications.
The structure of an application can be expressed from several viewpoints. The following are some examples:
Over the life of the application, any of these views of application structure are subject to change:
The bottom line is that the structure of an application is all of these views and their interrelationships. An application's structure includes the following:
These building blocks may come from several sources and throughout the application development life cycle. They may have been generated by integrated application-design and development tools. They may have been built by a trained designer, developer, or writer using specialized editors and compilers. They may have been purchased from third-party vendors. They may have been defined by application administrators. Whatever their source, these building blocks all have several things in common:
One of the complexities of component-based development is that a change in a component may have implications for any dependent components. If the relationship is purely descriptive, then the implication may be simply that the related component, for example, a document, needs to be updated. However, if the relationship is a dependency on a third-party software component for which the business organization does not have the source code, then the implication may be that the application will no longer work or that its operational characteristics will change. At a minimum, a complex test and validation cycle may be required.
Understanding an application's structure will become very important in a component-based development environment. These structures can be complex, involving configuration of independently versioned components. As more of the application components are bought rather than built, the availability of effective tools to manage change may make the difference in the success of component-based development and application assembly.
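One way to picture the change-management problem is as a dependency question: given a component that has moved to a new version, which other building blocks (code, documents, models) may need review? The sketch below uses a deliberately minimal, hypothetical data model to show the kind of impact report such tooling might produce.

    // Sketch: reporting which parts of an application's structure depend on a
    // changed component. The data model here is deliberately minimal and hypothetical.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Component {
        std::string name;
        int version;
    };

    // dependents["TaxRules"] lists the building blocks that use TaxRules.
    using DependencyMap = std::map<std::string, std::vector<std::string>>;

    // When a component changes, report everything that may need retest or update.
    void ReportImpact(const Component& changed, const DependencyMap& dependents) {
        std::cout << changed.name << " moved to version " << changed.version << "; review:\n";
        auto it = dependents.find(changed.name);
        if (it != dependents.end())
            for (const auto& d : it->second) std::cout << "  " << d << "\n";
    }

    int main() {
        DependencyMap deps{{"TaxRules", {"OrderEntry screen", "Invoice report", "User guide"}}};
        ReportImpact({"TaxRules", 3}, deps);
    }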
Because business applications share characteristics in their logical structure (models), there are opportunities to reuse the software components that implement that structure. One set of logical characteristics typical of business applications is the types of services they provide (a brief sketch following the list illustrates how the categories fit together):
User services. These services support the user's view of the application, increasingly through graphical user interfaces that allow the end user to control the interaction sequence in performing assigned business tasks. Visual metaphors aid in accessing information, manipulating data, and invoking system or other application services. Not all user services are visual; for example, underlying services transform data from a graphical display to a tabular display.
Data services. These services control access to and management of corporate data. They may shield requesting clients from knowledge of data storage, distribution, and format—providing location services, format transformation services, and data caching, for example. Many of these services are commodity services that can be provided directly by reusable components such as DBMSs, but often applications have specific performance, location transparency, data replication, or migration requirements that lead organizations to build custom services that encapsulate all access to data.
Business services. These services implement the business processes and rules that define the business. They govern how the decision-independent corporate data is accessed, manipulated, and interpreted for making business decisions. Whereas user services may be targeted at specific sets of end users or lines of business, business services define logic and rules that are enforceable across the enterprise.
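The sketch below, referred to above, shows the three categories in miniature: a user service presents the interaction, a business service enforces an enterprise rule, and a data service hides how the data is stored. The classes, the inventory rule, and the single-process packaging are simplifying assumptions; in practice these services would be distributed across machines.

    // Sketch of the three service categories as layers. All names are hypothetical.
    #include <iostream>
    #include <map>
    #include <string>

    // Data service: controls access to persistent data and hides storage details.
    class InventoryData {
    public:
        int QuantityOnHand(const std::string& sku) const {
            auto it = stock_.find(sku);
            return it == stock_.end() ? 0 : it->second;
        }
    private:
        std::map<std::string, int> stock_{{"WIDGET-01", 12}};
    };

    // Business service: enforces a rule that holds across the enterprise.
    class OrderRules {
    public:
        explicit OrderRules(const InventoryData& data) : data_(data) {}
        bool CanFulfill(const std::string& sku, int qty) const {
            return qty > 0 && data_.QuantityOnHand(sku) >= qty;   // the business rule
        }
    private:
        const InventoryData& data_;
    };

    // User service: presents the interaction to the end user.
    void OrderEntryScreen(const OrderRules& rules) {
        std::cout << (rules.CanFulfill("WIDGET-01", 5) ? "In stock\n" : "Back-ordered\n");
    }

    int main() {
        InventoryData data;
        OrderRules rules(data);
        OrderEntryScreen(rules);
    }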
It has been observed that the analysis and design techniques, development tools and languages, and technology building blocks differ for each category of service. For example, user services typically reside on a desktop PC or workstation, exhibit a graphical user interface, and lend themselves to rapid prototyping for iterative review and refinement with end users themselves. Data services require less direct user interaction and are designed by individuals skilled in data modeling and database design. Business processes and rules are modeled through interaction with business knowledge workers and may require complex analysis, design, and coding.
Large application development teams tend to include designers and developers who specialize in each of these areas. On small development projects, all of these services may be designed and constructed by the same person or collaborative team, although even small development teams tend to acknowledge differences in techniques and tools appropriate for graphical user interaction design versus database modeling and service design and implementation.
Several partitioning models have been proposed that provide guidelines on ways to manifest this logical structure as a physical implementation that takes advantage of distributed computing platforms. Familiar partitioning models include the two-tier remote data access model (business processing on the client) and database server model (business processing in database stored procedures), as well as the three-tier application server model (business processing independent of both the client and the database). In particular, the three-tier model suggests that shared business processes and associated business logic should reside neither on the client nor the database server but rather on a middle tier of servers to maximize distributed processing opportunities and guard all access to and manipulation of corporate data.
The right partitioning model will depend on factors such as the following:
Application requirements. For example, it may make sense to adopt a three-tier partitioning model for transaction-processing systems but to simplify decision-support applications by placing the business logic in the DBMS with the data being accessed. (Of course, the logical design process should help to discover that the same business rules apply in both systems—for example, how to calculate a customer's outstanding balance and its effect on credit rating.)
Technology services available in the target execution environment. For example, before reliable networks and ubiquitous communication protocols existed, developers built decision logic into client code rather than distributing processing across networked servers.
Application partitioning models emphasize process packaging and distribution for a specific application but do little to directly address opportunities for reuse across applications. However, if an application's logical structure is used to express the application as invocations of services rather than tiers of software, an architectural model as depicted in Figure 4 emerges.
Figure 4. Services-based architecture for business applications
In such a model, the services that an application requires become the basis for assessing the availability of prefabricated components that deliver those services.
Where reuse is not possible, services are packaged into component implementations based on heuristics that consider technology options, anticipated usage patterns, process and data distribution, performance, and other projected application characteristics, as well as reuse goals.
In either case, the mapping of services to components could be one-to-one but is generally one-to-many. Figure 5 illustrates a potential mapping scheme.
Figure 5. A potential mapping scheme
Component D in Figure 5 implements two services, defined through its interfaces, perhaps based on some common business activity. Component A also implements several services, but these fall in different service categories. That may limit reuse and deployment options and complicate subsequent maintenance, but perhaps this is offset by the ease with which a business service with a visual interface can be distributed across the enterprise. Components B and C implement the same service, perhaps for reasons of performance.
These examples also illustrate that the services-based architectural model is not a strictly layered model. As the execution technology makes it feasible to package services into more and finer-grained components, the layers simply become useful ways to partition the services that are to be implemented. The services themselves are implemented as a collection of cooperating components that can be distributed and reconfigured dynamically across one or n computing platforms. Cooperation among components is based not on an interface at the layer boundaries but rather on following the rules of well-behaved component interfaces.
In a services-based architecture, a business application is simply that set of services required to support the scope of a specified business problem; whenever two applications require the same service, they share the component that implements the service. This is shown in Figure 6.
Figure 6. Applications as collections of service invocations
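As a small illustration of this sharing, the sketch below assembles two hypothetical "applications" over the same credit-checking component; neither application carries its own copy of the rule. The component and application names are invented for the example.

    // Sketch: two applications assembled over the same shared service component.
    #include <iostream>

    class CreditService {                       // one implementation, shared
    public:
        bool WithinLimit(double balance, double limit) const { return balance <= limit; }
    };

    void OrderEntryApp(const CreditService& credit) {        // application 1
        std::cout << "order " << (credit.WithinLimit(400, 500) ? "accepted" : "held") << "\n";
    }

    void CollectionsApp(const CreditService& credit) {       // application 2
        std::cout << "account " << (credit.WithinLimit(700, 500) ? "current" : "flagged") << "\n";
    }

    int main() {
        CreditService shared;      // the same component serves both applications
        OrderEntryApp(shared);
        CollectionsApp(shared);
    }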
One implication of a component-based approach is that multiple physical components may participate in delivering on a request for services. Transaction analysis, represented in Figures 4 and 6 as arrows between services, is a part of the process of making packaging decisions. For example, if delivering a service to the user involves invoking several business services, the complexity of component coordination dictates that, with today's technology, strong consideration should be given to packaging these services in the same component.
However, such monolithic packaging decisions reduce opportunities for reuse and process distribution. And if application developers are truly to take advantage of prefabricated components as building blocks, they must be able to sequence invocations of autonomous services.
High-level application services that are expressed as sequences of service invocations are referred to as business transactions; such services require a technology infrastructure that can provide component coordination within transaction boundaries. Required services include monitoring the successful completion of each invocation and taking appropriate action, including notification, in the event of failure. Business transactions may last for milliseconds or be long-lived (that is, take minutes, days, or weeks), and they may perform work in a variety of business locations.
As illustrated in Figure 7, component coordination must support not only transactions explicitly expressed as sequences of service invocations, but also the situation in which the internal implementation of one of those services in turn relies on an autonomous service.
Figure 7. Business transactions
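The sketch below suggests one way a coordinator might sequence autonomous service invocations, monitor each for success, and run compensating actions when a step fails. The Step structure, the compensation approach, and the sample steps are assumptions for illustration; they do not describe any particular transaction-processing product.

    // Sketch: coordinating a business transaction as a sequence of autonomous
    // service invocations, with compensating actions if a step fails.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Step {
        std::string name;
        std::function<bool()> invoke;        // returns false on failure
        std::function<void()> compensate;    // undoes the step's effect
    };

    bool RunBusinessTransaction(const std::vector<Step>& steps) {
        std::vector<const Step*> done;
        for (const auto& s : steps) {
            std::cout << "invoking " << s.name << "\n";
            if (!s.invoke()) {                                // monitor each invocation
                std::cout << s.name << " failed; compensating\n";
                for (auto it = done.rbegin(); it != done.rend(); ++it)
                    (*it)->compensate();                      // roll back completed work
                return false;
            }
            done.push_back(&s);
        }
        return true;
    }

    int main() {
        std::vector<Step> order{
            {"reserve inventory", [] { return true;  }, [] { std::cout << "release inventory\n"; }},
            {"charge customer",   [] { return false; }, [] { std::cout << "refund customer\n";  }}};
        std::cout << (RunBusinessTransaction(order) ? "committed\n" : "aborted\n");
    }

Long-lived business transactions generally favor compensation over holding locks for their full duration, which is why the sketch undoes completed work rather than relying on a single atomic commit.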
Business applications are all about data. Decision-support systems provide business decision makers not only with raw data for analysis but also with the aggregations, summaries, trend analyses, and other information necessary to make business decisions. Customer-support systems give support specialists information about the customer and make a broad set of additional information available to answer both anticipated and unanticipated customer inquiries. Transaction-processing systems handle standard business transactions ranging from investment trades to retail orders; the transactions themselves are logged, and the associated processing updates databases ranging from stock portfolios to retail inventories.
The persistent data reside in the following:
Most business transactions will result in access to or updates of persistent data stores. In fact, updates rarely apply to only one source but rather to multiple replicated or related data stores. Knowledge of how these various sources are related is often embedded in the software code that accesses and updates them. Because different data stores are managed by different groups of people and are accessed and updated by a range of different applications, it is often difficult to verify that business rules for access and update are being applied consistently to all data.
One area in which IS is contributing measurably to component-based development is in providing component "wrappers" that hide the complexity of legacy systems or that provide a degree of location transparency in a complicated or changing environment. However, their ability to truly hide the underlying complexity is in part dependent on the degree to which the heterogeneous resource managers—the business-independent components that manage access to persistent data—participate as "good citizens" in cooperative business transactions.
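The sketch below shows the wrapper idea in miniature: a hypothetical legacy lookup returns a fixed-format record, and a wrapper component translates it into the published interface the rest of the enterprise programs against. The legacy format, the interface, and the parsing rule are all invented for the example.

    // Sketch: a "wrapper" component that hides a legacy interface behind a
    // modern service interface. The legacy call shown is a hypothetical stand-in.
    #include <iostream>
    #include <string>

    // What the legacy system actually exposes (fixed-format, cryptic codes).
    std::string LegacyCustomerLookup(const char* custCode) {
        return std::string(custCode) + "|SMITH,J|CR:0500";   // simulated legacy record
    }

    // The published interface the rest of the enterprise programs against.
    struct ICustomerService {
        virtual double CreditLimit(const std::string& customerId) = 0;
        virtual ~ICustomerService() = default;
    };

    // The wrapper translates between the two worlds.
    class LegacyCustomerWrapper : public ICustomerService {
    public:
        double CreditLimit(const std::string& customerId) override {
            std::string rec = LegacyCustomerLookup(customerId.c_str());
            auto pos = rec.rfind("CR:");                      // parse the legacy format
            return pos == std::string::npos ? 0.0 : std::stod(rec.substr(pos + 3));
        }
    };

    int main() {
        LegacyCustomerWrapper customers;
        std::cout << "limit: " << customers.CreditLimit("00123") << "\n";
    }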
Component-based development is often viewed as applying only to transaction-processing–style applications because it is not viable to interject a component layer of rules processing code between an end user's interactive query tool and the target data stores. However, the same business rules are applicable whether applied as a part of high-volume transaction processing, when extracting and summarizing data to build a data warehouse, or when accessing and querying data for decision support. Modeling tools that make it easier to map from modeling abstractions to implementation will help—for example, they would facilitate design-time reuse in which the same business logic is reimplemented as stored procedures in the DBMS, in the user interface, or in middle-tier code. The associated implications for complex dependencies between the design-time models and various instantiations of implementation code underscore the need to be able to track and manage formally the structure of component-based business applications.
Figure 8 depicts the relationship between object modeling, component design, and object use.
Figure 8. Relationship Between Modeling, Component Design, and Object Use
Business object modeling is a discovery technique for identifying the core business concepts and their associated business semantics. By constructing components that encapsulate the behavior of a business object, we expect to increase the opportunities for component reuse and make it easier to manage change.
Because of the realities of business application usage and deployment patterns, it is unlikely that there will be a one-to-one mapping between business object and component. Rather, component packaging will reflect anticipated usage patterns, code performance characteristics, or deployment partitioning opportunities. New development tools are required that will make it easier to manage the structure of applications as complex interrelationships between modeling, construction, and deployment constructs, and to locate and reuse components.
Significant benefits can be gained from design-time and source-code reuse, but the ultimate vision is based on run-time reuse: location- and implementation-independent service invocation that is mapped at run time to the component (or a selected component) that implements it.
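A minimal sketch of that run-time mapping appears below: the requester names a service, and a simple registry resolves the name to whichever component currently implements it. The Registry and IService types and the "price product" service are hypothetical; a production environment would add versioning, location transparency, and security.

    // Sketch: run-time reuse through a simple service registry. The requester names
    // a service; the registry maps the name to whichever component implements it.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    struct IService {
        virtual std::string Perform(const std::string& request) = 0;
        virtual ~IService() = default;
    };

    class Registry {
    public:
        void Register(const std::string& name,
                      std::function<std::unique_ptr<IService>()> factory) {
            factories_[name] = std::move(factory);
        }
        std::unique_ptr<IService> Resolve(const std::string& name) const {
            auto it = factories_.find(name);
            if (it == factories_.end()) return nullptr;
            return it->second();
        }
    private:
        std::map<std::string, std::function<std::unique_ptr<IService>()>> factories_;
    };

    class PriceProduct : public IService {                   // one available component
        std::string Perform(const std::string& request) override { return request + ": 19.99"; }
    };

    int main() {
        Registry registry;
        registry.Register("price product", [] { return std::make_unique<PriceProduct>(); });

        // The requester knows only the service name, not the implementing component.
        if (auto svc = registry.Resolve("price product"))
            std::cout << svc->Perform("WIDGET-01") << "\n";
    }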
This paper has outlined Microsoft's and Texas Instruments' vision for a target development environment based on assembly of applications from components. It has introduced a number of concepts for which there is little or no integrated tool support today. Initiatives such as the Microsoft® Solutions Framework (MSF), Texas Instruments' component-based development guidelines for its Composer™ and Arranger™ products, and other consulting services and publications are designed to help organizations apply these concepts to the design of applications today, even as the execution environment component technology continues to evolve. Because the notion of multivendor heterogeneity is integral to the target environment, beginning to articulate a shared vision is a key step in being able to supply an interoperable tool suite that will enable large organizations to realize the full potential of component-based development.
Reusability of software has been cited as a significant benefit to be gained from a component-based development approach. Additional benefits relate to an improved development process, such as logical problem decomposition, parallel development, and incremental replacement of software. Other benefits stem from the advantages of encapsulation—minimizing effects of change through hiding implementation details behind well-defined interfaces—which positively influences maintainability.
The following core concepts outlined in this paper form the basis for independent and joint work to supply the new development tools and environments that will help organizations reduce cycle time for application development: