Dave Stearns, Program Manager
Visual Basic Group
April 1996
The benefits of a three-tier, or multi-tier, architecture for information systems have received quite a bit of press lately, and most information technology professionals grasp the potential of this approach. However, information technology shops are often left with the sobering realization that they have a tremendous number of existing information systems in production, most of which have no concept of components, and that they are under pressure to continue delivering enhancements to these systems even though the systems' longevity might be in question. This paper addresses this issue and offers some practical advice, not only for moving existing information systems to a componentized, multi-tier architecture, but also for transforming an information technology shop into a component-based development team.
Before we begin, it's important to understand the distinction between a logical multi-tier architecture and a physically distributed system. Recognizing this difference will help illuminate a migration path for your existing systems, one that still allows you to deliver enhancements and doesn't demand a steep learning curve.
A logically multi-tier design is one in which an application's implementation is logically divided into three distinct areas—user services, business services, and data services. This segmentation allows developers to introduce flexible layers of abstraction between databases and the client applications that use them. Each tier is responsible for a different task, and when put together, they form a cohesive, cooperating system that is flexible, robust, and evolutionary.
Data services are primarily responsible for locating and persisting data. Typical examples are your relational database, any stored procedures contained in it, and any components that engage primarily in data access, such as a customer-searching component. Data services worry about things like data persistence, recovery, and consistency, and they are not involved in implementing any business-specific rules on the data.
Business services, however, are responsible for implementing business-specific rules. They interact with data services to retrieve and persist data, but they add the functionality of applying business rules. Typical examples of business services include components that do tax calculation, validation of business data (for example, customer, product, order, and license) and business functions (for example, faxing and telephony). Business services are nonvisual; because they isolate the business rules, they can be updated quite frequently. Their role is to validate the content of the data they are working with or to do calculations or other computations that exercise business rules.
User services are the visual part of an information system. These services are responsible for displaying data to the user, allowing the user to manipulate that data, and communicating with business services to validate or generate data that is dependent on business rules. Examples of user services are the forms, controls, graphics, and messages displayed on the user's screen.
The benefits of this kind of layering are well known and are outside the scope of this paper (for more information on the benefits of multi-tier designs, refer to the "Building Client-Server Applications with Visual Basic" manual, which is included in the MSDN Library in the Visual Basic 4.0 Enterprise Edition book). Suffice it to say that this approach leads to a much more maintainable, scalable, and evolutionary system that can react to the changes in your business.
The segmentation of these tiers is done logically, and at this stage it does not dictate where the implementation of those services will lie. It is important to note that you can have a "multi-tier" design in which the implementation of all the services is in one executable file. Such a design doesn't allow for any sharing or reuse of components, which is an attractive benefit of componentization, but the point is that you don't need to dive head-first into the world of components to benefit from a multi-tier design.
Once a system has been logically separated into multiple tiers, it can then be broken apart into components. "Component" is an OLE term that refers to a set of OLE Automation interfaces that are reusable from any OLE Automation client application. A component is compiled into a binary form and is usually bundled with a type library that describes the interfaces it exposes. Components can be shared across many clients, allowing for reuse, and can be implemented in any development language that supports the creation of OLE Automation servers.
Components can exist in three locations with respect to the client that is using them—in-process, out-of-process, or remote. In-process servers are DLLs, and they run in the same process and address space as the client application. Out-of-process servers run in a separate process and address space, but they are still on the same physical machine as the client application. Remote servers run on an entirely separate machine, using the other machine's CPU and system resources when executing. Each of these options has advantages and disadvantages, and each component in your system can choose a different distribution model.
In-process automation servers are ideal for reusable components that involve a high rate of communication between the server and the client application. They are fast to load, and since they occupy the same address space, they can transfer data to and from the client application at a very fast rate. Of course, since they are in the same process space as the client application, they must reside on the same machine as the client, which can make updating the component difficult. In-process servers created in Visual Basic also run on the same thread of execution as the client application, which means they cannot run concurrently with client code. In-process servers created in Microsoft® Visual C++® can spawn multiple threads to handle things like background tasks.
Out-of-process servers are ideal for components that need to run in a separate process space, or on a separate thread, from the client application. These servers are a little slower to load, and transferring data between client and server may take a little longer because data must be moved from one address space to another. Since out-of-process automation servers are executables, they run on their own thread, separate from the client application, so the client does not block the server while the client's code is running. At this time, out-of-process servers created in Microsoft Visual Basic® cannot create multiple threads within their own process, but because they run on a separate thread from the client application, they can use timers to accomplish background or asynchronous tasks. Out-of-process servers are also a good choice for components that can also behave as stand-alone applications.
Remote servers are also executables that run in a different process than the client application, but they run on an entirely different machine as well. This is a very powerful option because it lets you off-load processing from client machines onto more robust server machines, especially when the client application is on the other end of a WAN or a slow-link connection like the Internet. Creating these objects is the slowest of the three options, and the data transfer and call rate is also the slowest. However, once a call has begun executing on the remote machine, the client's CPU is freed up and can devote itself to other tasks.
Fortunately, Visual Basic's Class modules are completely location independent. It can be confusing and time-consuming to learn which physical distribution model works best for your system, so it's often necessary to experiment, moving classes between in-process, out-of-process, and remote servers. The good news is that neither the client code nor the server code needs to be touched when moving classes between these physical distribution models. In fact, client applications do not need to be touched at all, and components can even be moved from out-of-process servers to remote servers and back again at run time. This location independence sets the stage for how to migrate existing systems to a multi-tier design and, eventually, a component-based, distributed system.
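To make this concrete, the client code below creates and uses an object from a hypothetical utility server; the same lines work whether the server is compiled as an in-process DLL, run as a local EXE, or registered to run on another machine. The "UtilServer.Logger" programmatic ID and LogMessage method are illustrative assumptions, not part of any shipping product.

' The same client code works for in-process, out-of-process, and remote servers;
' only the registration of "UtilServer.Logger" changes between deployments.
Dim logger As Object
Set logger = CreateObject("UtilServer.Logger")
logger.LogMessage "Nightly batch started"
Set logger = Nothing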
As noted above, in the real world information technology groups face continuing pressure to maintain and enhance existing systems, yet they know that the longevity of those systems depends on moving to a multi-tier architecture and, eventually, a physically distributed, component-based system. The next section offers some practical advice on how to move existing systems toward a multi-tier design without putting current projects on hold while you completely tear them apart.
The first step in preparing for a three-tier architecture is to do a logical design of your system. While this may cause panic in those who object to methodologies (no pun intended), it is an essential step in the creation of a component-based system. Following a methodology to create the logical design can be very helpful, but it's not necessary to follow a rigorous process, and you should never end up serving the process more than it serves you.
In this stage, the physical implementation of components and their eventual location is irrelevant. The purpose is to identify the real entities of the system and how they interact with one another. This can often be one of the hardest tasks, since it takes the ability to look past the way things are currently done, and identify what the business is really all about. While the techniques for doing this kind of analysis are outside the scope of this paper, the following section offers some advice on how to choose which components should be constructed first and how you can componentize your system one piece at a time.
Remember that the logical design of a system is evolutionary, and the design should be updated and changed as the system develops. It is unlikely that a project team will arrive at the perfect design the first time out, and it's even less likely that the perfect design will remain perfect as the needs of the business change. The logical design needs to live and grow, just like the system does, and componentization will allow for this migration to occur more smoothly.
Once you have identified the various user, business, and data services in your system, you should decide which ones you want to construct first. As is true with many systems, the entire project cannot be put on hold while it is entirely re-written as components. However, with Visual Basic OLE server technology, you can componentize your system incrementally, moving pieces out of the main executable as the project allows.
The first candidates for construction are the lowest-level components in the system, by which I mean those components that are used by the entire system and really offer extensions to the operating system. Examples might be your data access components (if they are not already components), logging services, messaging and networking services, telephony integration services, and configuration management services. Since these are used by the entire system, migrating them to reusable components will not only provide code reuse within your project; these components will also be the most attractive for use in other projects, since most information systems have the same low-level needs. A component that allows your application to write information to the Windows NT event log, for example, would probably be usable from many other applications in your organization. Also, as you begin to break other components out of your existing system, those new components will likely need these low-level services, which means the low-level services must exist before the higher-level components can be constructed.
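As a sketch of what such a low-level service might look like, the hypothetical Logger class below simply appends time-stamped messages to a text file; the class name, default path, and method names are assumptions for illustration, and an event-log or network-based implementation could be substituted later without changing the interface.

' Class module: Logger (a hypothetical low-level logging service)
Private m_sLogPath As String

Private Sub Class_Initialize()
    ' Default log file location; an assumption for this sketch
    m_sLogPath = "C:\LOGS\MYAPP.LOG"
End Sub

Public Property Let LogPath(sNewPath As String)
    m_sLogPath = sNewPath
End Property

Public Sub LogMessage(sMessage As String)
    Dim nFile As Integer
    nFile = FreeFile
    Open m_sLogPath For Append As #nFile
    Print #nFile, Format$(Now, "yyyy-mm-dd hh:nn:ss") & " " & sMessage
    Close #nFile
End Sub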
The next candidates for construction are those services that would benefit most from being either in a separate process space from the client application or on a physically different machine. Examples of these services might include credit card validation, fax processing, long-running calculations, and report generation. OLE servers can run either in the same process as the client application or in a separate process or physically separate CPU. While in-process servers (DLLs) are very fast and easy to use, they are still running on the same thread as the calling application. Out-of-process servers (EXEs) have a slower communication rate with the client application, but they can run in their own process space and on their own thread, allowing for background work, protected memory, and in the remote case, distribution of tasks between two CPUs. By componentizing services that perform long-running tasks (like report generation) or tasks that require special hardware (like faxing or credit card validation) into out-of-process servers or remote servers, you can increase the overall performance of your system and allow for the client application to continue operating while these tasks run in the background or on another machine. Asynchronous processing like this can be complex, but it will reap a great benefit in the long run.
By starting with these two kinds of components, you can slowly move an existing information system to a component-based, three-tier system over many project cycles. The cost of tearing the application apart and completely recoding it would be too great, but the extra cost of creating a component or two per project cycle can easily be absorbed. Trying to completely componentize an entire existing system in one project cycle is dangerous and will almost always fail due to the impatience of your user base and the lack of experience on the project team. By taking an evolutionary approach, the team builds experience as it goes while still being able to deliver enhancements to the system in a timely manner.
After you have identified the services that you want to construct, your team then needs to concentrate on the physical design of the components. The basic unit that one uses in Visual Basic to construct a server is the Class module. Each class module defines a template for new objects that a client application, or other classes in the same component, can create. The class defines a set of properties and methods, where each instance of the class maintains its own properties, but shares the implementation of the methods.
In Visual Basic, the Class module is completely location independent and can reside in the same executable as its client, a DLL, another EXE, or even on a remote machine. As a developer, you can choose to distribute these classes in any fashion, and neither the class nor the client code needs to be changed or specialized for any particular deployment.
When migrating an existing system to a three-tier architecture, it is not necessary to separate everything into components (EXEs and DLLs) right away—as long as you create Class modules for your various services, they can be left in the main executable and moved into a DLL or separate EXE at a later time, and even moved back into the main executable if necessary. This portability eliminates the dependency between code and physical location, which is essential for distributed systems.
The following sections offer some practical advice on how to create good classes in Visual Basic and how to distribute them in components. Each system will have its own needs, so these are just general rules of thumb. You need to consider all the aspects of your system when designing classes and components.
Each service in your system will be made up of one or more classes. As defined above, a class is a set of methods and properties, and those elements work together to define an interface. The interface of a class is very important; it is the class's contract with the outside world. Designing a good interface is a skill unto itself, with, of course, a little art mixed in.
To design a good interface, you need to forget about your internal implementation and think like a client of your class. The way we do this at Microsoft is through scenario definitions. We think of scenarios, or short examples, that illustrate the problems we want to solve with the class (or classes) or what we want to enable our users to do. By defining these scenarios, you define the common requirements and identify the cases where your classes should be straightforward and easy to use. If you write the sample code for your scenarios, you will quickly discover any awkwardness in your interfaces, missing elements of your interface, or where you need to define helper methods to make the common operations easier. Your interfaces should always be driven from the point of view of the user—not from what is easy to implement given the internals of the class. The goal to strive for is a set of classes that feel natural and will be easy to use in the common scenarios you defined.
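For example, a scenario for a hypothetical customer-service component might be "look up a customer by phone number and correct the spelling of the last name." Writing that scenario as client code before the classes exist shows immediately how much work the common case demands. The Customers class, FindByPhone method, and Save method are assumed names used only for this sketch.

' Scenario code written from the client's point of view, before implementation
Dim custs As New Customers          ' hypothetical collection-style class
Dim cust As Customer
Set cust = custs.FindByPhone("555-0123")
cust.LastName = "Stearns"
cust.Save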
There are two kinds of elements in your interfaces—properties and methods. In OLE servers, properties are actually implemented as small methods (known as Property Get and Property Let/Set procedures), and in Visual Basic, if you declare a data member as Public, Visual Basic will generate the property procedures for you. For example, you can declare a property procedure in your class like so:
Private m_sFirstName As String

Property Get FirstName() As String
    FirstName = m_sFirstName
End Property

Property Let FirstName(sNewValue As String)
    If IsValidName(sNewValue) Then
        m_sFirstName = sNewValue
    Else
        Err.Raise vbObjectError + 1, _
                  "MyServer.Customer", _
                  sNewValue & " is not a valid name!"
    End If
End Property
And then use the property in client code like so:
Dim cust As New Customer
cust.FirstName = "Dave"
Text1.Text = cust.FirstName
When the client code assigns a new value to the FirstName property, Visual Basic code actually runs, allowing you to check the validity of the data or take any other action necessary. This gives you the best of both worlds: data members are not directly exposed to client code, but to the client code it appears as if it is working with data members of a class. Allowing code to run enables you to validate data as it's assigned or to delay fetching data until it is requested.
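As a sketch of the delayed-fetch case, the Property Get below loads a customer's orders only the first time a client asks for them and caches the result; LoadOrders is an assumed private helper that would perform the actual data access.

Private m_colOrders As Collection

Property Get Orders() As Collection
    ' Fetch the orders only when first requested, then reuse the cached set
    If m_colOrders Is Nothing Then
        Set m_colOrders = LoadOrders()   ' hypothetical private helper
    End If
    Set Orders = m_colOrders
End Property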
Property procedures are also very handy for implementing read-only properties or write-only properties. For example, if you were defining a customer object, you might want to expose the unique customer ID from your database as a property, but you would not want to allow anyone to change that property. To do so, all you need to do is define a Property Get procedure, but not include a Property Let procedure. The procedure would look like this:
Private m_nID As Long

Property Get ID() As Long
    ID = m_nID
End Property

'No Property Let routine since this is read-only!
If a client tried to code the following lines:
Dim cust As New Customer
cust.ID = 5
Visual Basic will automatically realize that there is no Property Let routine. When the client attempts to compile or run, they will receive an error.
Since properties can really be implemented as methods, a question naturally arises: when should you use a method and when should you use a property? There isn't a hard-and-fast rule, but in general, properties are intended for exposing state information about an object, and methods are used to perform actions on that object. Back in the days of VBXs, developers had to use properties for everything, since VBX controls did not have methods. This led to the infamous "Action" property, which many VBX developers included as a way to trigger actions. That is a clear case where methods are more appropriate, and now, with OCXs and OLE servers, there should be no reason to implement such a property. However, things like back color, fore color, border style, and alignment are all good examples of properties on a control, since they express a state of the control, and setting them doesn't imply performing an action on the control (except maybe a repaint). In OLE servers, examples of good properties would be first name, phone number, user name, password, and version. Each of these expresses a certain state of the object that persists between method calls, and setting these properties doesn't imply that an action will take place on the object (like saving to disk or to a database).
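On a Customer class, for instance, the split might look like the following sketch, with FirstName exposed as a property because it is state the object carries between calls, and Save exposed as a method because it performs an action; the persistence code is omitted and Save is an assumed name.

' State is exposed as a property
Property Get FirstName() As String
    FirstName = m_sFirstName
End Property

' An action is exposed as a method
Public Sub Save()
    ' ... write the customer's current state to the database (omitted) ...
End Sub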
In addition to separating properties and methods, the developer also needs to determine what state information can be passed in as method parameters. OLE servers can define optional and required parameters for each method, and the creative use of optional parameters can save a number of lines of client code. Consider the following example:
Dim conn As New Connection
conn.UserName = "Dave"
conn.Password = "LetMeIn"
conn.Server = "SQLServer"
conn.Connect
The use of separate properties for user name, password, and server name produces a clean and pure interface, but it requires much more code than is really necessary to accomplish the common scenario (connecting to a database). Using optional parameters, the same code could be collapsed to:
Dim conn As New Connection
conn.Connect "SQLServer", "Dave", "LetMeIn"
The interface can still define separate properties for the user name, password, and server name, but the client developer can just pass this information as parameters to the Connect method. If the OLE server is run out-of-process to the client, or even cross-machine, the second code example would run much faster than the first, since each property set must be sent from the client to the server, resulting in four round trips, rather than the one round trip in the second example.
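In Visual Basic 4.0, optional parameters must be declared As Variant and tested with IsMissing, so the Connect method behind the second example might be sketched as follows. Any argument the caller supplies overrides the corresponding property, and m_sServer, m_sUserName, and m_sPassword are assumed to be the private members behind the Server, UserName, and Password properties.

Public Sub Connect(Optional Server As Variant, _
                   Optional UserName As Variant, _
                   Optional Password As Variant)
    ' Arguments, when supplied, override the corresponding property values
    If Not IsMissing(Server) Then m_sServer = Server
    If Not IsMissing(UserName) Then m_sUserName = UserName
    If Not IsMissing(Password) Then m_sPassword = Password
    ' ... open the connection using m_sServer, m_sUserName, and m_sPassword ...
End Sub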
Once you have implemented the properties and methods on your class, they can then be viewed in Visual Basic's object browser. The object browser lists all the classes within your current project as well as any classes that are referenced through the Visual Basic references dialog box. This makes your servers almost self-documenting. Just in case your methods and properties were not self-explanatory, Visual Basic allows you to define a short Help string for each property and method in each of your classes. By filling out this information, you are assisting in the use of your classes by others. Since this short Help string is compiled into your class and into your server, it will always be available to users of the class, as opposed to documentation or Help files, which may get lost or not installed.
To define Help strings for your properties and methods, open the Object Browser (see Figure 1), select your project from the Libraries/Projects combo box, and select your class in the Classes/Modules list box.
Figure 1. Object Browser
The Method/Properties list box on the right shows all the methods and properties available in your class. As you select each one, the prototype of the method or property, along with the short Help string, will appear at the bottom. To define this short Help string, choose the "Options…" button and fill out the Description field. Along with this short Help string, you can also define a Help file and Help context ID that will supply more information if desired. The object browser will use this information to launch WinHelp when you click the "?" button on the lower-left corner of the dialog box. Note that the object browser lists the Property Get and Let methods separately for classes in the current project and adds the [Sub] or [Function] tag on the end of the methods. Once you make the OLE server EXE or DLL, the clients that use this class will just see one entry for each property and no tag on the end of the methods.
The property sheet for a class has three properties that must be set for each class. The Name property should be set to the name you want the class to have; this name is added to the global namespace and can then be used in a "Dim x As New <class name>" statement. The Public property should be set to True only if you want the class to be exposed outside the current EXE or DLL; if the class will be used only within the component, set this property to False. The Instancing property has to do with distribution options and is discussed in the next section.
As mentioned before, Class modules in Visual Basic are location independent. Thus the distribution model of your services can be totally separated from their logical and interface design. Over time, the distribution model can also change without changing the source of either clients or servers, so you can fine-tune your system after it has been deployed, just as one fine-tunes a database schema when it goes into production. This section describes a number of issues to consider when determining your distribution model and what the best choices would be for the components you begin constructing first.
Once a class has been constructed, it can remain in the client project or be moved into an in-process automation server (DLL), an out-of-process automation server (EXE), or a remote automation server (EXE on another machine). The type of component will often dictate what model you choose, and each choice has its strengths and weaknesses.
In-process servers are the fastest in execution, and they work as natural extensions to your applications. They are small and quick to start up, and since they run in the same process space as the client application, exchanging data between the client and the server is very fast and efficient. Of the two types of services identified above as starting points, the first type, the low-level services used by all parts of the application, are good candidates for in-process servers.
There are only two cases where an out-of-process server on the client machine is necessary: when you need the server to run on a separate thread of execution from the client in its own process space, or when your server can also be run as a stand-alone application. If your service has neither of these requirements, it should be built into an in-process server. However, it is important to recognize the added benefit offered by an out-of-process server. While out-of-process servers are slower to start up, larger in size, and have a slower rate of data exchange with the client application, they make asynchronous operations and background or parallel task completion possible. Services that exchange only a small amount of data but then perform a lengthy task that shouldn't stop the client application from continuing its work are excellent candidates for an out-of-process server. For example, if you were building a service that could process FTP file transmissions out of a queue, it would make sense to create it as an out-of-process server that copied files in the background to and from a remote machine on the Internet. The client application could call a method on the server called GetFile, passing the name of the remote machine, the name of the remote file, and the name to use for the file on the local machine. The server would then add this request to its queue and process the queue on a separate thread from the client application, allowing the user to continue to work while the file is transmitted to the local machine.
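A minimal sketch of that server's public interface follows; the actual file-transfer mechanics and the timer-driven processing loop are omitted, GetFile simply queues the request and returns so the client can keep working, and the class and member names are assumptions for illustration.

' Class module: FileTransfer (hypothetical out-of-process queue service)
Private m_colQueue As Collection

Private Sub Class_Initialize()
    Set m_colQueue = New Collection
End Sub

Public Sub GetFile(sRemoteMachine As String, _
                   sRemoteFile As String, _
                   sLocalFile As String)
    ' Queue the request and return immediately; a timer-driven routine
    ' (not shown) would perform the transfer in the background
    m_colQueue.Add sRemoteMachine & "|" & sRemoteFile & "|" & sLocalFile
End Sub

Public Property Get PendingRequests() As Long
    PendingRequests = m_colQueue.Count
End Property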
If you choose to build your service into an out-of-process server, you also gain the ability to place that server on a physically different machine from the client, running the server's code on a separate CPU, allowing for maximum parallel processing, and taking advantage of any special server hardware you might have. Remote Automation is a new feature in Visual Basic 4.0 and is an exciting advancement in OLE Automation. While it is slightly more complicated to configure than an out-of-process server, the source code is exactly the same, and a server can be moved from remote to local and back again without having to rebuild either the server or the client. Services that would benefit from specialized hardware, or services that perform long or complicated tasks that shouldn't bog down the client machine, are good candidates for remote servers. For example, a service that generates large reports in Excel spreadsheet form, or a service that works with a fax board to send faxes to customers, should be off-loaded from the client machine, working out of job queues to perform tasks that may take a substantial amount of time. Using a remote server also allows you to concentrate special hardware, like fax boards or modems, in one or a few server machines, eliminating the need to have the hardware in each client machine.
When building out-of-process or remote servers, you also have the ability to specify the instancing model for each class. The instancing model determines whether all instances of a specific class run in one process space on one thread, or whether each instance of the class runs in its own process space with its own thread. Obviously, there are benefits and costs to each choice. Creating processes and threads takes time and consumes memory, but once created, each instance of the class runs independently from, and concurrently with, every other instance, allowing for parallel processing, especially if the machine they are running on has more than one CPU. Many variations on the instancing model are possible when you use instance pools and process containers; these are described in detail in Rick Hargrove's white paper titled "Designing Distributed Application with Remote Automation and Visual Basic 4.0."
Once you have created your automation servers, you can use them in your client application just as you did when the classes were part of your project, merely by establishing a reference to the new server through the References dialog box. The References dialog box lists all registered automation servers on your system; when you build an automation server in Visual Basic, it is automatically registered for you. Simply selecting the check box next to your server's name brings all of that server's objects into the namespace and allows you to create the objects like so:
Dim cust As Customer
Set cust = New Customer
The way you declare your object reference variables in Visual Basic is very important. In Visual Basic version 3.0, all of these had to be declared as "Object", but in version 4.0 you should declare these variables with their specific type, such as "Customer". When you declare the "cust" variable above as type "Customer", Visual Basic will read the type library for your server, check the syntax of method and property calls to that object in the client code, and bind those calls directly into the server's virtual function table, making method execution and property access as fast as possible.
Migrating just one application to a three-tier architecture is only the first step in building a component-based, enterprise-wide information system. The purpose of building reusable components that contain user, business, and data services is to leverage these components across your entire organization, reusing pieces of one system in another and sharing data under a common set of business rules. However, the next step in this journey, sharing a few components between projects, is a treacherous one. Whereas migrating a single application to a three-tier architecture is mostly a technical effort, sharing components raises many more project management issues than technical ones. Every information technology group struggles with how to get component reuse happening. The following sections offer some practical advice on how to begin component development and what issues need to be addressed when you begin to reuse components from one project in another.
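The difference is easy to see side by side; the late-bound form compiles against any object and resolves every call at run time, while the early-bound form is checked at compile time and dispatched through the virtual function table. The "MyServer.Customer" programmatic ID follows the earlier example and is illustrative.

' Late bound: calls are resolved at run time
Dim anyCust As Object
Set anyCust = CreateObject("MyServer.Customer")
anyCust.FirstName = "Dave"

' Early bound: calls are checked at compile time and bound to the vtable
Dim cust As Customer
Set cust = New Customer
cust.FirstName = "Dave"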
Although almost any developer would say that reusing code and components is a good thing, common utilities keep getting written over and over again. Although almost every project manager realizes the benefit of taking the time to develop reusable components that will help to cut development time of future projects, ridiculous deadlines keep getting placed on developers as they are told to "just hack it out." The only way to get true reuse and component development happening in an organization is to work on establishing a "culture" of reuse.
The common culture of an information technology group is one of either stoic methodology, where nothing can be done without the 50 required specifications and project reviews, or hack-and-sling chaos, where developers generate code as the need arises and little to no concern is given to its potential reuse. Both descriptions are extremes, of course, but in a culture of reuse, the information technology organization not only needs to strike a balance between them, it also needs to change the way it structures its project teams. Developing a reusable component is much like developing a small product: the component should go through development, testing, bug-fixing, and release phases, accompanied by documentation and even localization if necessary. The customers of the product are the other applications that reuse it, and as with most products, the component will need to be upgraded over time with new versions. For this reason, it is often necessary to establish project teams that focus solely on the development of reusable components, while other project teams put the components together into solutions for the business.
When reusing a component, a development team can receive short-term benefit either by reusing the binary DLL or EXE or by cutting and pasting source code between projects. Cutting and pasting may seem like a viable method of reuse, and in the beginning it can help get the culture of reuse established. In the long run, however, cutting and pasting code will not result in a component-based information system, because upgrades made to components won't automatically be inherited by the applications that merely cut and pasted the source. If a development team reuses a component at the binary level, bug-fixes and upgrades to that component can immediately benefit the application, although this does have the side effect of creating a dependency between the application and component groups. This dependency can lead to problems with schedules, as discussed below, but if managed properly, it results in a synergy that benefits both teams.
Once you have decided to share components in their binary form, the next issue that arises is what is necessary for another development team to reuse a component. When you build an automation server in Visual Basic, the type library that is built into the component contains a list of all the classes in the component and all the properties and methods of those classes. If you used the object browser to add short Help strings to your properties and methods, these will also travel with your class, helping to give a brief explanation of the class and its methods to a potential user. While this will assist most users, a number of additional items should accompany a reusable component.
Visual Basic also allows you to associate a Help file with your component, along with a Help context ID for each class, method, and property within the component. If the Help file is present on the user's machine, they can open it from the object browser and get even more detailed information about your classes and how to use them. A number of tools are available to make authoring Help files easy, allowing component teams to provide very rich documentation. Since Help files are created from RTF document files, a component team can produce both online Help and printed documentation from the same source documents.
A good reusable component should also include a simple text file that gives a short description of the component, a list of the component's dependent files, a person or alias to contact for more information, the language of the particular build (if localized), and any other relevant information needed by a group shopping for components. A component may have a long lifespan, lasting much longer than the careers of its creators, so knowing whom to contact for more information is essential if another group is to reuse that component.
Keep in mind that a component should be treated as a small product, and everything that you commonly think of as accompanying a software product you buy should also be included with a component.
The concept of component ownership is critical, especially as the number of components created, and the number reused in other projects, grows. If a development team is going to reuse a binary component with no access to the source code, the component must have a clear owner the team can speak with when it encounters bugs or problems or needs enhancements. For this reason, it is difficult to develop components within regular project teams, since the team that developed a component might not exist in the future, or it may be too involved in another project to do any further work on the component. Establishing component teams that clearly own the component and treat it as a small product provides the assistance a development team needs when it runs into problems trying to use it.
Without clear ownership, a component will never be reused in a binary form, since development teams cannot trust that they will get bug-fixes when the component breaks or be able to add features to it in the future.
When a component team does receive bug reports, or when the team adds new features to the component, the question of version control arises. By version control, I'm not referring to source code version control (which is still important); rather, I'm referring to component version control, which can be a much harder issue. Once you have established an interface on your component, you cannot change the contents of the interface or the signature of one of its methods without breaking the applications currently using the older component. Visual Basic offers a feature to help you with this, which you enable by setting the "Compatible OLE Server" property in the Project Options dialog box. Set this property to the filename of a previously built version of your server, and Visual Basic will warn you if you attempt to change the interface of a class in a way that would break existing clients.
Components built with Visual Basic are automatically stamped with a three-part version number. This number can be set by choosing the "Options" button on the Make EXE or Make OLE DLL dialog box. The EXE Options dialog box appears, as shown in Figure 2.
Figure 2. EXE Options dialog box
The first part of the version number is the "Major" version, which should change with each major update release of your component. What constitutes a major release is somewhat subjective; generally, a new major release includes a significant amount of new functionality, is binary-incompatible with old clients, and is given a new binary filename. "Binary-incompatible" means that old client applications would need to recompile in order to work with the new component, which implies that one or more of the methods on one or more of the classes has changed. At Microsoft, we append the major version number to our DLL filenames (for example, VBRUN300.DLL), and when the major version changes, the filename changes as well (for example, to VB40032.DLL). The "32" distinguishes a 32-bit DLL from its 16-bit counterpart and isn't necessary when doing 32-bit-only work.
The second part of the version is the minor version number, which indicates a slight enhancement to a component, but not one that requires existing clients to recompile. If you add a new method to an interface or make a series of bug-fixes and re-release, you would increment the minor version number. New minor versions of a component should be binary-compatible with older clients, implying that you can redistribute a new version of the component and simply overwrite the existing version. However, many developers also include the minor version number in the filename of the component, just as with the major version number, so that users can run applications that use both minor versions simultaneously.
The third part of the version number is the revision, also known as the build number. The revision number should be incremented with each build of your component, and each revision of a component (as long as the major version hasn't changed) should be binary-compatible with the others. This means that a component with version 1.0.1 should be replaceable by a component with version 1.0.2 without recompiling the client application. With the "Compatible OLE Server" property set, Visual Basic will ensure that this binary compatibility is maintained and will warn you when you attempt to break it. If you choose to break binary compatibility, you should update the major version number and change the component's filename. To automatically increment the revision number each time you make the EXE or DLL, select the "Auto Increment" check box on the dialog box.
As soon as you begin to share components, the issue of schedules will quickly arise. When a development group becomes dependent on a component produced by another group, the client project's schedule becomes dependent on the component's schedule, which of course introduces risk. For this reason, it is imperative that component project managers never over-estimate what they can deliver in a given time period. Promising too much could lead to delays in the client application's schedule and to many disruptions of other schedules that are dependent on the component.
As more and more components are created and reused, the dependencies between components and applications can become staggering. Managing the schedules for each becomes a central role, and it should never be underestimated. This is another reason why teams and project managers must be dedicated to a particular group of components, and someone must take on the role of coordinating the schedules of each of the component's clients as well as of any components it depends on in turn.
There will be times when a client project demands work from a component that cannot be completed within the scheduled time, or that goes beyond the defined role or scope of the component. In these cases, the leader of the component team must work with the client to establish a clear division of the work between the two sides; the client development team may need to do some of the extra work itself, with the possibility of folding it back into the component at a later time.
A culture of reuse takes quite a while to build, but once achieved, an organization can move into a stage of total component development, where any new functionality needed by an application is put into a reusable component and then shared with the rest of the projects. One should realize that this kind of shift in focus will not happen overnight, and it may take years to get the concept of building components and reusing them into the mainstream thought of the organization.
If a group is attempting to move in this direction, there are a few issues that will arise that are unique to wide-scale component development. These are discussed briefly in the following sections.
In any organization there is turnover of people, and with it, a loss of knowledge about the components that are available for reuse. If an organization is trying to encourage component reuse across the board, there needs to be a central location for cataloging those components, allowing developers to easily find a reusable component that suits their needs and discouraging them from writing a new component that does the same thing as an existing one.
The Component Manager application (see Figure 3 for main screen), included with Visual Basic Enterprise Edition, offers one solution to the problem of cataloging components. The Component Manager maintains a database of components (either locally or shared on an SQL Server) and allows an administrator to categorize the components across different dimensions. Developers can use the Component Manager's filtering abilities to quickly locate components that they are interested in and choose to install them on their machines.
Figure 3. The Component Manager
The Component Manager allows a developer to see all of the classes inside a particular component and even see each method and property in each class. From the Component Manager, the developer can also see the list of dependent or associated files for the component. When the developer chooses to install the component, these files are copied to the developer's machine along with the component, so that Help files, Word documents, schema diagrams, or any other kind of file can be automatically installed with the component on each client machine.
With the potential for hundreds of components, the ability to search, categorize, and filter becomes extremely important. With the Component Manager, an administrator can define any number of category dimensions for each library and can then associate any component in that library with any number of values in each dimension. For example, the BookSaleServer component above can be associated with the "Basic Concepts" and "Data Access" values in the "Sample Type" category, as well as "Jet/DAO" in the "Technology" category. Once these associations are made, the developer can use the categories to filter on just what they are looking for. The developer can also do a textual search for a component by typing into the "Text Search" field.
Along with cataloging components, it is also necessary to track the dependencies created between components and their clients. When only a few components are being shared, the component team can keep track of its clients and know what dependencies exist. However, as the number of components produced and reused grows, and as the number of projects using a central catalog to find useful components rises, manually tracking the dependencies between components becomes nearly impossible. When a component team is adding features, it needs to know which applications depend on the component so those applications can run regression tests with the new version and choose whether to distribute it to their users.
This is typically what organizations use a repository system for; a repository may also be able to keep track of dependencies among the components themselves. Visual Basic doesn't offer a tool to assist with this today, but many third parties offer repository solutions, and many organizations write their own. In any case, something should be used to track the dependencies between components and their clients.
In this paper, I have discussed the technical aspects of moving existing systems to a logical three-tier architecture, as well as the nontechnical aspects of doing enterprise-wide component development. While the technical issues are detailed and often require study and experimentation, they can be dealt with readily. The nontechnical issues are much harder to overcome and are often the stumbling blocks that prevent an organization from moving to component-based development.
Good people make good software. Technologies like Visual Basic, OLE Automation, and Windows are just power tools. A nail gun in the hands of a good carpenter yields an amazing increase in productivity over a normal hammer, but in the hands of a two-year-old, it becomes a dangerous instrument of destruction. In the same way, these technologies will only assist those who approach component development realizing that the technical aspects are only a small part of the equation.
To become a component developer, you must become a servant and realize that the development groups that consume your component are now your customers. To make component development successful, component teams need to be responsive to their customers, giving them what they need. Developing components does not mean retreating to an ivory tower, producing the most "correct" answer to the problem at hand, and then forcing an overengineered solution on your customers. Just as software vendors listen and respond to their customers, putting in features their customers request even if a feature disrupts the "purity" of the product, component developers need to build components that solve real needs, leaving the obscure and advanced features to the client application, if needed, or to future versions of the component. Without this attitude, component development won't survive, and development groups will recoil by implementing everything themselves again, creating proprietary, nonshared components and defeating the benefits of creating reusable components.