A method defines a reproducible path for obtaining reliable results. All knowledge-based activities use methods that vary in sophistication and formality. Cooks talk about recipes, pilots go through checklists before taking off, architects use blueprints, and musicians follow rules of composition. Similarly, a software development method describes how to model and build software systems in a reliable and reproducible way.
In general, methods allow the building of models from model elements that constitute the fundamental concepts for representing systems or phenomena. The notes laid down on musical scores are the model elements for music. The object-oriented approach to software development proposes the equivalent of notes — objects — to describe software.
Methods also define a representation — often graphical — that allows both the easy manipulation of models, and the communication and exchange of information between the various parties involved. A good representation seeks a balance between information density and readability.
Over and above the model elements and their graphical representations, a method defines the rules that describe the resolution of different points of view, the ordering of tasks and the allocation of responsibilities. These rules define a process that ensures harmony within a group of cooperating elements, and explains how the method should be used.
As time goes by, the users of a method develop a certain 'know-how' as to the way it should be used. This know-how, also called experience, is not always clearly formulated, and is not always easy to pass on.
Although object-oriented methods have roots firmly anchored in the 1960s, structured and functional methods were the first to come into widespread use. This is hardly surprising, since functional methods draw directly on computer architecture, a proven domain well known to computer scientists. The separation of data and code, just as it exists physically in the hardware, was carried over into the methods; this is how computer scientists acquired the habit of thinking in terms of system functions.
This approach is natural when looked at in its historical context, but today, because of its lack of abstraction, it has become almost completely anachronistic. There is no reason to impose the underlying hardware on a software solution. Hardware should act as the servant of the software that is executed on it, rather than imposing architectural constraints.
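The contrast between the two styles can be sketched in a few lines of Python. This is an illustrative example, not taken from the source: the functional style keeps a record and the routines that manipulate it apart, mirroring the hardware's separation of memory and processor, while the object-oriented style bundles state and behaviour into a single unit. The names (`Account`, `deposit`) are hypothetical.

```python
# Functional style: data and code live apart, echoing the
# hardware's separation of memory and processing.
account_data = {"balance": 100.0}

def deposit(data, amount):
    data["balance"] += amount

# Object-oriented style: the object encapsulates its state and
# exposes it only through its operations.
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

deposit(account_data, 50)   # caller manipulates the bare record
acct = Account(100.0)
acct.deposit(50)            # the object manages its own state
```

Both fragments end with a balance of 150.0; the difference is where the knowledge of the data's structure lives, scattered across free functions in the first case, hidden inside the object in the second.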
More recently, towards the beginning of the 80s, object-oriented methods started re-emerging. History — with a big H — is said to repeat itself. The lesser history of methods repeats itself as well: the paths followed by functional methods and object-oriented methods are similar. To begin with, there was programming, with subprograms and objects as alternatives for the basic elements of structure. A few years later, software scientists pushed the idea of structure towards design as well, and invented structured design in one instance, and object-oriented design in the other. Later again, progress was made in the area of analysis, always using the same paradigm, either functional or object-oriented. Both approaches are therefore able to provide a complete path across the whole software lifecycle.
The evolution of methods, whether object-oriented or not, always progresses from programming towards analysis.
In practice, the situation is a bit more complex, as methods often do not cover the full lifecycle. The result is that methods get mixed and matched: method A is used for analysis, followed by method B for design. As long as both methods share a single paradigm, whether functional or object-oriented, this compartmentalization remains reasonable. Mixing paradigms, although understandable in its historical context, is clearly less so.
Towards the mid-80s, the benefits of object-oriented programming began to gain recognition, and object design seemed like a sensible approach for people who wanted to use an object-oriented programming language such as Smalltalk. However, from the analysis standpoint, the concept of object-orientation was still only vapor and supposition. At that time, corporations had developed a strong knowledge of functional analysis and semantic data modeling methods. Computer scientists were inclined to follow a functional analysis phase with an object-oriented design phase.
This approach suffers serious drawbacks attached to the paradigm shift. Moving from a functional approach to an object-oriented one requires translating the functional model elements into object model elements, which is far from straightforward or natural. Indeed, there is no direct correspondence between the two sets, so the model elements of one approach must be broken up to create fragments usable by the other. This paradigm shift, right in the middle of the development effort, can greatly hinder traceability from the requirements stated early in the analysis phase to their satisfaction in the design phase. Moreover, an object-oriented design obtained by translation very often lacks abstraction and is limited to encapsulating the low-level objects available in the implementation and execution environments. All this means a great deal of effort for results that are not very satisfactory.
The combination of a functional approach for analysis and an object-oriented approach for design and implementation is no longer necessary today, as modern object-oriented methods cover the full software lifecycle.
During the past decade, object-oriented applications, from requirements analysis through to implementation, have been developed in every sector of programming. The experience acquired on these projects has improved our understanding of how to join together the various activities needed to support a fully object-oriented approach. The evolution of these practices is not yet complete, and there remain a few advocates of the mixed approach, weighed down by the force of habit. Corporations on the brink of the transition to object orientation should not repeat this mistake. It is far easier to deploy an approach that is object-oriented from end to end, and software developed in this way is simpler, more reliable and more easily adapted to the expectations of its users.
The first few years of the 90s saw the blossoming of around fifty different object-oriented methods. This proliferation is a sign of the great vitality of object-oriented technology, but it is also the fruit of a multitude of interpretations of exactly what an object is. The drawback of this abundance of methodologies is that it encourages confusion, leading users to adopt a 'wait and see' attitude that limits the progress made by the methods. The best way of testing something is still to deploy it; methods are not cast in stone — they evolve in response to comments from their users.
Fortunately, a close look at the dominant methods allows the extraction of a consensus around common ideas. The main characteristics of objects, shared by numerous methods, are articulated around the concepts of class, association (described by James Rumbaugh), partition into subsystems (Grady Booch), and around the expression of requirements based on studying the interaction between users and systems (Ivar Jacobson's use cases).
Finally, widely deployed methods, such as Booch and OMT (Object Modeling Technique), were strengthened by experience and adopted the methodological elements most appreciated by their users.
The second generations of the Booch and OMT methods, called Booch'93 and OMT-2, were far more similar to one another than their predecessors had been. The remaining differences were minor, and pertained primarily to terminology and notation. Booch'93 was influenced by OMT and adopted associations, Harel diagrams and event traces. In turn, OMT-2 was influenced by Booch and introduced message flows, hierarchical models and subsystems, and model components. More importantly, it removed data flow diagrams from the functional model. These were inherited functional baggage and were not well integrated with the overall OMT approach.
By this stage, both methods offered complete lifecycle coverage, but with a notable distinction in focus. Booch'93 focused on implementation, while OMT-2 concentrated on analysis and abstraction. Nonetheless, there were no serious incompatibilities between the two methods.
Object-oriented concepts have a history that is often complex and intricate. The elements presented in the table below emerged from the experience of deploying the various methods, and have influenced the effort to unify the Booch and OMT methods.
Origin | Element
------ | -------
Booch | Categories and subsystems
Embley | Singleton classes and composite objects
Fusion | Operation descriptions, message numbering
Gamma et al. | Frameworks, patterns and notes
Harel | Statecharts
Jacobson | Use cases
Meyer | Pre- and post-conditions
Odell | Dynamic classification, emphasis on events
OMT | Associations
Shlaer-Mellor | Object lifecycles
Wirfs-Brock | Responsibilities and collaborations