Microsoft Strategy for Universal Data Access

By David Lazar

Overview and Scope

This paper provides an in-depth look at Microsoft® Universal Data Access, Microsoft Corp.’s broadly supported strategy for providing access to information across an organization, from the desktop to the enterprise.

The following people are the intended audience for this paper:

The technologies discussed in this paper are complex, spanning many areas of computing, including architecture, programming, networking, and platform integration. The issues are critically important because data and information are at the heart of almost any computer system, and the efficient and effective use of information is what provides business value and strategic advantage. Today, these issues are magnified as organizations begin to broadly implement applications that leverage the Internet and mobile computing. Access to information is required in new scenarios, and the complexity of that information continues to grow. Whereas organizations previously had data on the mainframe and in various DBMSs, important information is now also found in mail stores, file systems, Web-based text and graphical files, and more. Organizations that can leverage all of this information, extending rather than replacing current UNIX and mainframe systems to embrace client/server computing and the Internet, will thrive.

This paper is intended to explain Microsoft’s strategy for helping organizations achieve maximum business advantage by organizing and accessing their information efficiently. This is a strategy paper; readers looking for in-depth technical analysis are directed to a companion paper titled “Microsoft Data Access Technologies,” which can be found along with other technical and strategic documents on the Microsoft data access Web site, http://www.microsoft.com/data/.

Customer Requirements for Data Access Technologies

Microsoft has solicited extensive feedback from data access customers on the criteria they use in judging and selecting data access technologies and products. Our research has shown that there are four main criteria used in the decision-making process:

  1. High performance

  2. Reliability

  3. A strongly committed vendor

  4. Broad industry support

This paper has been structured to explain how Microsoft Universal Data Access meets these criteria. After the terms and technologies are defined, separate sections address how Microsoft has met or intends to meet each of the criteria.

Definition of the Universal Data Access Strategy

Universal Data Access is a platform, application and tools initiative that defines and delivers both standards and technologies and is a key element in Microsoft’s foundation for application development, the Microsoft® Windows® Distributed interNet Applications (DNA) architecture.

Today, companies building client/server and Web-based database solutions seek maximum business advantage from the data and information distributed throughout their organizations. Universal Data Access provides high-performance access to a variety of data and information sources on multiple platforms and an easy-to-use programming interface that works with practically any tool or language, leveraging the technical skills developers already have. The technologies that support Universal Data Access enable organizations to create easy-to-maintain solutions and use their choice of best-of-breed tools, applications and data sources on the client, middle-tier, or server.

Figure 1

Another benefit of Universal Data Access is that it does not require expensive and time-consuming movement of all corporate data into a single data store, nor does it require commitment to a single vendor’s products. Universal Data Access is based on open industry specifications with broad industry support and works with all major established database products. Universal Data Access is an evolutionary step from today’s standard interfaces — including ODBC, RDO and DAO — and significantly extends the functionality of these well-known and well-tested technologies.

A Unified Data Access Model Based on COM

One strength of the Microsoft Universal Data Access strategy is that it is delivered through a common set of modern, object-oriented interfaces. These interfaces are based on the Microsoft Component Object Model (COM), the most widely implemented object technology in the world. COM has become the choice of developers worldwide because it provides the following:

Because of the consistency and interoperability afforded through COM, the Microsoft Universal Data Access architecture is open and works with virtually any tool or programming language. It also enables Universal Data Access to provide a consistent data access model at all tiers of the modern application architecture.

Microsoft Universal Data Access exposes COM-based interfaces optimized for both low-level and high-level application development: OLE DB and ADO.

Definition of OLE DB

OLE DB is Microsoft’s strategic system-level programming interface to data across the organization. OLE DB is an open specification designed to build on the success of ODBC by providing an open standard for accessing all kinds of data. Whereas ODBC was created to access relational databases, OLE DB is designed for relational and nonrelational information sources, including mainframe ISAM/VSAM and hierarchical databases; e-mail and file system stores; text, graphical and geographical data; custom business objects; and more.

Figure 2

OLE DB defines a collection of COM interfaces that encapsulate various database management system services. These interfaces enable the creation of software components that implement such services. OLE DB components consist of data providers, which contain and expose data; data consumers, which use data; and service components, which process and transport data (e.g., query processors and cursor engines). OLE DB interfaces are designed to help components integrate smoothly so that OLE DB component vendors can bring high-quality OLE DB components to market quickly. In addition, OLE DB includes a bridge to ODBC to enable continued support for the broad range of ODBC relational database drivers available today.

Definition of ActiveX Data Objects

Microsoft® ActiveX® Data Objects (ADO) is the Microsoft strategic application-level programming interface to data and information. ADO provides consistent, high-performance access to data and supports a variety of development needs, including the creation of front-end database clients and middle-tier business objects, using applications, tools, languages or Internet browsers. ADO is designed to be the single data interface needed for developing everything from single-tier desktop solutions to multitier client/server and Web-based, data-driven solutions.

ADO provides an easy-to-use, application-level interface to OLE DB, which provides the underlying access to data. ADO is implemented with a small footprint, minimal network traffic in key scenarios, and a minimal number of layers between the front end and data source — all to provide a lightweight, high-performance interface. ADO is easy to use because it is called through a familiar metaphor — the COM automation interface, available from all leading RAD tools, database tools and languages on the market today. And because ADO was designed to combine the best features of — and eventually replace — RDO and DAO, it uses similar conventions with simplified semantics, making it a natural next step for today’s developers.
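
Because ADO is exposed through COM automation, it can be driven from virtually any automation-capable tool or language. The following is a minimal sketch, for illustration only, written in Python using the pywin32 COM support; it assumes a Windows machine with the MDAC components and pywin32 installed, and the "Pubs" DSN is a hypothetical ODBC data source.

    # Minimal ADO sketch via COM automation (pywin32).
    # The "Pubs" DSN is a placeholder; substitute any ODBC data source
    # configured on the local machine.
    import win32com.client

    conn = win32com.client.Dispatch("ADODB.Connection")
    conn.Open("Provider=MSDASQL;DSN=Pubs")      # OLE DB Provider for ODBC

    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.Open("SELECT au_lname, au_fname FROM authors", conn)
    while not rs.EOF:
        print(rs.Fields("au_lname").Value, rs.Fields("au_fname").Value)
        rs.MoveNext()

    rs.Close()
    conn.Close()

The same calls could be made, with only syntactic differences, from any other automation client, which is the point of exposing ADO through COM.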

Design Goals for Microsoft Data Access Components

Microsoft® Data Access Components (MDAC) is a set of redistributable technologies that implement Universal Data Access. MDAC consists of the latest versions of ActiveX Data Objects, OLE DB components and Open Database Connectivity, which have now been released as an integrated set. Developers creating client/server and Web-based data-driven solutions select the components they need and call them from their choice of tools, applications, languages or Web browser to create complete database solutions.

In designing MDAC, Microsoft set a number of guiding project goals, including the following:

These are the project goals guiding the development of the Microsoft Data Access Components to support and enhance the Universal Data Access strategy.

Next, we will take a closer look at each of the key customer requirements for data access technologies — performance, reliability, strongly committed vendors and broad industry support — and detail specifically how Universal Data Access and MDAC meet these requirements.

Universal Data Access Is a High-Performance Architecture

We have seen that performance is of paramount concern to developers and users of data access technologies. Universal Data Access consequently has been designed with performance as its No. 1 goal. This section will examine how the Universal Data Access technologies support this requirement and the positive implications for users.

Flexible, Component-Based Services Model

OLE DB, designed to be a high-performance architecture, accomplishes this via a flexible, component-based services model. Rather than having a prescribed number of intermediary layers between the application and the data, OLE DB requires only as many components as are needed to accomplish a particular task. For example, suppose a user wants to run a query. Consider four scenarios:

In all four cases, the application can query the data. The user’s needs are met with a minimum number of components. In each case, additional components are used only if needed, and, if needed, only the required components are invoked. This demand-loading of reusable components greatly contributes to high performance when OLE DB is used.

Performance Advantages Over ODBC

While ODBC has been a very important and successful data access standard, OLE DB has an improved architecture that provides a significant performance advantage over ODBC. With OLE DB, data providers no longer have to implement a SQL relational engine to expose their data. With ODBC, services such as cursoring and query processing need to be implemented by every ODBC driver writer. This represents overhead both for the ODBC driver author and for the end user (how many cursor engines and query processors do you need on one machine?). With OLE DB, reusable service components handle these processing chores for a variety of data providers. OLE DB simplifies the process of writing data providers, which means they should come to market faster and be of higher quality; it also reduces the number of components installed on data consumer machines.

ADO Performance Advantages

As with OLE DB, ADO is designed for high performance, which it achieves by reducing the amount of solution code developers must write by “flattening” the coding model. DAO and RDO, the object models that preceded ADO, are highly hierarchical. To return results from a data source, the programmer has to start at the top of the object model and traverse down to the layer that contains the recordset. The ADO object model is not hierarchical: the programmer can create a recordset directly in code, set two properties, and then execute a single method to run the query and populate the recordset with results, as sketched below. This approach dramatically decreases the amount and complexity of code the programmer needs to write. Less code running on the client or in a middle-tier business object translates into higher performance.
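
To make the flattened model concrete, here is a minimal sketch (Python via pywin32, for illustration only; the connection string and table name are hypothetical) in which a recordset is created directly, two properties are set, and a single method call runs the query and populates the results:

    # Flattened ADO coding model: no traversal of a deep object hierarchy.
    import win32com.client

    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.Source = "SELECT * FROM customers"                 # property 1: the query
    rs.ActiveConnection = "Provider=MSDASQL;DSN=Sales"    # property 2: the data source
    rs.Open()                                             # one method call: execute and populate

    print(rs.Fields(0).Value)    # results are immediately available
    rs.Close()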

Minimal Network Traffic in Key Internet Scenarios

Microsoft has designed OLE DB and ADO for the Internet, implementing a “stateless” model in which client and server can be disconnected between data access operations. MDAC contains a Remote Data Service component that provides efficient marshaling of data between the middle tier or server and the client, including support for batch updates, as well as an efficient client-side cursor engine that can process data locally without constant server requests. MDAC thus provides greater local functionality and higher performance for Internet applications than earlier Microsoft data access components and other approaches.
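
The disconnected style of working that this enables can be sketched as follows (Python via pywin32, illustration only; the Remote Data Service adds the marshaling of such recordsets between client and middle tier, which is not shown here, and the DSN, table and column names are hypothetical):

    # Disconnected, batch-update pattern using ADO's client-side cursor engine.
    import win32com.client

    adUseClient, adOpenStatic, adLockBatchOptimistic = 3, 3, 4   # ADO constants

    conn = win32com.client.Dispatch("ADODB.Connection")
    conn.Open("Provider=MSDASQL;DSN=Sales")

    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.CursorLocation = adUseClient                      # use the client-side cursor engine
    rs.Open("SELECT id, qty FROM orders", conn, adOpenStatic, adLockBatchOptimistic)

    rs.ActiveConnection = None    # detach from the server (Set ... = Nothing in Visual Basic)
    rs.MoveFirst()
    rs.Fields("qty").Value = 5    # edits are cached locally while disconnected
    rs.Update()                   # commit the edit to the local cache only

    rs.ActiveConnection = conn    # reattach when ready
    rs.UpdateBatch()              # send all pending changes in a single batch
    conn.Close()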

Ultimately, the performance of Universal Data Access components will be judged in comparison with that of native access methods. Microsoft’s goal is to establish OLE DB as the native interface to major Microsoft and non-Microsoft data stores. To accomplish this, performance tuning for all key scenarios (transactions, decision support, etc.) and with major data sources will be of paramount concern. The No. 1 design goal for ADO and OLE DB is performance, and the architectural foundations to achieve this are in place.

Solutions Built With Universal Data Access Components Are Reliable

As we have seen, reliability is one of the primary requirements for organizations managing and supporting data access applications. Universal Data Access aims to address this need in three areas:  increasing the manageability of client-side components, enabling strong run-time coordination and control capabilities by the server, and delivering well-tested components.

Increasing the Manageability of Client-Side Components

One of the most important tools that organizations can use to increase reliability and decrease support costs is the reduction of the number of components to support on client PCs. Universal Data Access supports this approach in several key ways:

  1. Universal Data Access supports new multitier and Web deployment models, in which data access logic and business logic are centralized on middle-tier servers. Front ends provide presentation services via browser-based interfaces, or via custom and packaged applications. In this model, application functionality is mainly centralized, not distributed to end-user PCs, thus reducing the number of components to manage on those machines.

  2. OLE DB and ADO are essentially system components, shipping with Microsoft® Internet Explorer 4.0, Microsoft® Windows® 98, and the Windows NT operating system. (ADO ships as a component of Windows NT 4.0 Service Pack 4, and is planned as a component of Windows NT 5.0 and Windows 98.) Therefore, organizations can frequently rely on these components being available in a run-time environment and don’t have to manage the distribution and maintenance of them.

  3. ADO is tool- and language-independent and in many cases may be able to replace multiple data access libraries on client PCs. Organizations that may have previously supported DAO, RDO and ODBC on each PC will now be able to get the same functionality by deploying Internet Explorer 4.0 to their clients and moving data access functions previously handled by ODBC clients to their servers. Internet Explorer with ADO can thus replace DAO, RDO and ODBC on client PCs.

  4. For two-tier applications, where data access remains on the client PC, Universal Data Access can reduce the number of components to manage as well. OLE DB eliminates the need for a driver manager, used by ODBC, by directly querying the client’s System Registry, a systemwide repository for configuration data. OLE DB components query the registry to find out what components are installed on the PC, and load them as needed, rather than working through an intermediary such as the Driver Manager.
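
As a rough sketch of that registry-based discovery (illustration only, in Python; OLE DB providers register an "OLE DB Provider" subkey beneath their CLSID, and this snippet simply scans for that marker rather than reproducing the exact lookup the OLE DB components perform):

    # Simplified sketch: enumerate COM classes registered as OLE DB providers.
    import winreg

    def list_oledb_providers():
        providers = []
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID") as clsid_root:
            index = 0
            while True:
                try:
                    clsid = winreg.EnumKey(clsid_root, index)
                except OSError:          # no more subkeys
                    break
                index += 1
                try:
                    # The marker subkey's default value holds the provider description.
                    desc = winreg.QueryValue(clsid_root, clsid + r"\OLE DB Provider")
                    providers.append((clsid, desc))
                except OSError:
                    pass                 # this class is not an OLE DB provider
        return providers

    for clsid, description in list_oledb_providers():
        print(clsid, description)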

Enabling Strong Server-Side Coordination and Control

Universal Data Access enables transactional control of diverse data sources and components via Microsoft® Transaction Server (MTS). To achieve this, OLE DB data sources implement the functionality of a resource manager, which handles transactions local to the data source and enables each data source to participate in distributed transactions. MTS provides a distributed transaction coordinator that guarantees atomic operations spanning multiple data sources using a reliable, two-phase commit protocol, and it enables applications to scale as user load grows, with minimal additional effort.

Delivering Well-Tested Components

MDAC reliability is further ensured via rigorous testing of all components. Because ADO and OLE DB have been shipping in high volume since the release of Internet Information Server 3.0, they have seen significant field usage. With the advent of MDAC, and the commitment to ship ADO and OLE DB in a synchronized fashion, these components are now developed and tested side by side. Finally, after testing the components for interoperability, Microsoft stress-tests the MDAC components with the products with which they ship (e.g., Windows NT, Microsoft® Internet Information Server, Internet Explorer) to guarantee reliable behavior in multithreaded, continuous operation in highly concurrent environments. This testing is designed to help ensure high-performance, highly reliable components that work well in a variety of real-world scenarios.

The net result of the Universal Data Access architecture’s three-pronged approach should be significant reductions in configuration and support expenses and a reduced total cost of ownership — in short, a reliable data access architecture.

Microsoft Commitment to Universal Data Access

The choice of data access technologies is extremely strategic for organizations. While other factors, such as the choice of DBMS, typically drive this decision, customers have told us that data access decisions are made for the long haul and consequently need to be considered carefully.

Here are some of the questions that customers have told us they need answered when evaluating a data access vendor:

Based on market share, developer acceptance and broad industry support for many technologies, we believe that Microsoft has consistently met this set of criteria and has proved to be a market leader for data access technology. We’ll now examine Microsoft’s commitment to Universal Data Access.

A Short History of Microsoft as a Data Access Vendor

A look at how the Universal Data Access strategy has evolved at Microsoft will help illuminate the long-term commitment the company is making in this area.

Microsoft began investing in data access shortly after the initial release of Microsoft SQL Server 1.0 in 1989. Initial interest in Microsoft SQL Server was high, but the tools available to program it were limited. The SQL standard was in its infancy, but was clearly bound to the coming client/server revolution. Microsoft knew that acceptance of client/server architecture would be highly beneficial and could see that the biggest problem the industry faced was the proliferation of data access interfaces, and the complexity of creating, maintaining and programming against them.

Open Database Connectivity was the result of these factors. ODBC combined important features to make it extremely attractive:

As ODBC gained broad support as a standard for data access, it became clear that a standards body should be defining its future. Thus Microsoft turned over the specification for ODBC to the SQL Access Group, made up of a broad range of DBMS, middleware and tools vendors.

Despite its features, ODBC had a number of shortcomings, which by 1993 were being addressed in the next phase of data access market development. ODBC was exposed as a call-level Windows API, which made it difficult for the majority of customers to use directly. A number of Microsoft tools and applications could use ODBC via the Microsoft® Jet database engine, but ODBC functionality was not directly accessible except to a relatively small number of API-level programmers. Thus high-level programming models were created, first Data Access Objects (DAO) and then Remote Data Objects (RDO), which simplified the ODBC programming model and made it accessible to a wider range of programmers.

DAO provided Microsoft® Access and Microsoft® Office programmers, and RDO provided programmers using the Visual Basic® programming system, with higher-level interfaces to ODBC. These interfaces seemed like natural extensions to the Visual Basic language, used in each of these products, and have gained broad usage among database programmers.

By 1995, two major new trends began to shape the next phase of development. These two trends, which are still evolving, are the rise of the Internet as a database applications platform and the rise in importance of nonrelational data, which does not fit the database model encapsulated by ODBC.

One might wonder at this stage why the Internet requires new data access technologies. After all, the fundamental goal of connecting people with data remains the same. The Internet, however, presents new data access challenges on many levels:

As the Internet catalyzes a major paradigm shift in database management and data access, a related shift is occurring:  the emergence of nonrelational data sources. While the Internet highlights the need for management of textual and graphical data, organizations today face a proliferation of data in a variety of DBMS and non-DBMS stores, including desktop applications, mail systems, workgroup and workflow systems, and others. Most established organizations face an even larger challenge, that of leveraging the data in mainframe and minicomputer flat files as they extend access to this information to intranet- and Internet-based customers.

Data access today encompasses all of the issues traditionally addressed by DBMSs, plus a range of new data types, new clients and new access methods. The Universal Data Access strategy was created to meet this new generation of challenges by leveraging the successful strategies of the past and embracing the architectures of the future.

Universal Data Access Is the Successor to ODBC

Universal Data Access is a strategy that includes and builds on the successful foundation of ODBC. ODBC successes include the following:

The most frequent customer issues surrounding ODBC are related to performance and configuration management, defined as matching database drivers on multiple machines with multiple back-end data sources. Microsoft is aware of these problems and will continue to address them through subsequent ODBC releases, including a new and significantly improved ODBC driver for Oracle.

Moving forward, ODBC is a supported technology under the Universal Data Access umbrella. In the short and medium term, ODBC is the best way to access a broad range of relational DBMS data because of the large number of drivers available. During this period, while ODBC remains a mature technology and OLE DB components become available, Microsoft does not want to force customers to choose between the two architectures and make the ensuing trade-offs. Our goal is to enable customers to take advantage of existing ODBC technologies while adopting the Universal Data Access architecture for new applications.

The strategy for migrating from ODBC to OLE DB was therefore designed to be evolutionary. The very first OLE DB provider released by Microsoft was the OLE DB Provider for ODBC. The OLE DB Provider for ODBC replaces the ODBC Driver Manager component on client machines and talks directly to existing ODBC drivers. Applications are written to the ADO or OLE DB interface, and the OLE DB Provider for ODBC connects to the ODBC data source. Should an organization later decide to change data sources, add data sources or move from the ODBC driver to a native OLE DB provider for the existing data source, the application can be adapted with minimal changes. The solution code will require no changes whatsoever; the only required changes will be the addition or replacement of some data access components.
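
To illustrate how small the change can be, the following sketch (Python via pywin32, illustration only; the DSN, server and database names are hypothetical) runs the same ADO code first through the OLE DB Provider for ODBC and then, with nothing but the connection string changed, through a native OLE DB provider, such as the SQL Server provider (SQLOLEDB), named here purely as an example:

    # The application code is identical; only the connection string changes.
    import win32com.client

    def fetch_titles(connection_string):
        rs = win32com.client.Dispatch("ADODB.Recordset")
        rs.Open("SELECT title FROM titles", connection_string)
        titles = []
        while not rs.EOF:
            titles.append(rs.Fields("title").Value)
            rs.MoveNext()
        rs.Close()
        return titles

    # Today: through the OLE DB Provider for ODBC, reusing an existing ODBC driver.
    rows = fetch_titles("Provider=MSDASQL;DSN=Pubs")

    # Later: through a native OLE DB provider -- the solution code is unchanged.
    rows = fetch_titles("Provider=SQLOLEDB;Data Source=myserver;"
                        "Initial Catalog=pubs;Integrated Security=SSPI")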

This evolutionary strategy for migrating from ODBC to OLE DB carries some additional important benefits. Since OLE DB is a component-based architecture with service components providing processing capabilities on an as-needed basis, and since ODBC data sources can expose their data as OLE DB, OLE DB service component features may be invoked against ODBC data. For example, Find, a new feature in the OLE DB 1.5 specification, provides for sorting and filtering within a result set. Thus, the result set can be reused and further refined, without an additional round-trip to the server. This capability is not available in ODBC, except when the ODBC driver is called by the OLE DB Provider for ODBC. Thus new and existing applications can gain additional data access features by using OLE DB to call broadly supported ODBC drivers.
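
In ADO, these service-component capabilities surface on the recordset itself as Filter, Sort and Find. Here is a minimal sketch (Python via pywin32, illustration only; DSN, table and column names are hypothetical) of refining a cached result set without returning to the server:

    # Refining a result set locally with the client-side cursor engine.
    import win32com.client

    adUseClient, adOpenStatic = 3, 3                 # ADO constants

    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.CursorLocation = adUseClient                  # process rows locally
    rs.Open("SELECT state, au_lname FROM authors",
            "Provider=MSDASQL;DSN=Pubs", adOpenStatic)

    rs.Filter = "state = 'CA'"        # restrict the cached rows, no server round-trip
    rs.Sort = "au_lname ASC"          # re-order the cached rows locally
    rs.Find("au_lname = 'Ringer'")    # position on a matching row locally

    if not rs.EOF:
        print(rs.Fields("au_lname").Value)
    rs.Close()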

This is the evolutionary path from ODBC to OLE DB that Microsoft is providing based on consistent customer feedback. Organizations should continue to plan on broad availability and support for ODBC drivers. As they build new applications, they should look to the Universal Data Access architecture, using ADO and OLE DB interfaces. Nonrelational data will be exposed by OLE DB providers. For relational data, organizations may choose between ODBC drivers, and, as they become available, OLE DB providers and components. In the long run, Microsoft believes customer demand will drive the market for OLE DB components and they, too, will become broadly available. Able to freely choose among and mix ODBC and OLE DB components, organizations will benefit from the highest possible application performance and reliability, while gaining new capabilities at a pace that suits their unique requirements.

Universal Data Access Is Strategic for Microsoft

Key to understanding the Universal Data Access strategy is the realization that it is intertwined with most of the major lines of business where Microsoft is serving organization customers, including operating systems, tools, applications and Internet products. Universal Data Access is designed to work consistently across each of these major product lines, to enable organizations to leverage their data access expertise across teams and projects to build high-performance database solutions accessible to employees, customers and business partners.

One of the strongest examples of this can be seen in ADO, which provides a single interface to data whether it is being called from a developer tool, a Web page, Office or a custom business object. Applications using a variety of front ends may now use the same high-performance interface to data, featuring a small memory footprint, demand-loaded reusable OLE DB components, and familiar semantics derived from Microsoft’s most widely used interfaces. No matter where in the multitier architecture one is writing data access code, the interface is the same — ADO.

Making integrated access to all forms of relational and nonrelational data ubiquitous is strategic for Microsoft products, because it enables those products to add value through tools that use the Universal Data Access architecture. Customers are the ultimate beneficiaries as their tools and applications become more adept at processing the information they work with every day.

Relationship of Universal Data Access and Windows DNA

The Windows Distributed interNet Applications architecture is Microsoft’s architectural framework for building modern, scalable, multitier distributed computing solutions that can be delivered over any network. Windows DNA provides a unified architecture that integrates the worlds of client/server and Web-based application development. Microsoft Universal Data Access, a central part of the Microsoft Windows DNA strategy, provides data access services for Windows DNA applications.

Figure 3. Windows Distributed interNet Applications architecture

Windows DNA addresses requirements at all tiers of modern distributed applications:  user interface and navigation, business process and storage. The core elements of the Windows DNA architecture are these:

Because Microsoft Universal Data Access is based on COM, it provides a unified, consistent and common data access model for all applications built to the Windows DNA model.

Broad Industry Support for Universal Data Access

Organizations using data access components have indicated that in order to invest in the Universal Data Access architecture, they need to see the support of vendors of related products and technologies. For customers, broad industry support carries many benefits — safety in numbers, availability of skilled people to work with the products, and products that work together without expensive integration and customization. This section details the activities in which Microsoft is engaged to solidify and publicize the broad range of companies supporting Universal Data Access.

Supporters of Universal Data Access

The industry reception for Universal Data Access has been very positive. Companies building components in each architectural segment recognize key benefits for their customers — improved performance and functionality, flexibility, and reduced cost. This section details the industries and key vendors supporting Universal Data Access, discusses Microsoft’s strategy for continued growth in industry support and explains how additional vendors can join in supporting the Universal Data Access strategy.

The key industry segments supporting Universal Data Access are as follows:

Because the list of vendors in each of the above categories is growing rapidly, the reader is asked to visit http://www.microsoft.com/data/ for a complete, updated list. Leading vendors in each industry segment are represented among the list of Universal Data Access supporters.

OLE DB Provider Strategy

To be successful, OLE DB must gain a broad array of native providers and components so that users can connect to virtually any data source, reuse OLE DB service components, and realize performance and reliability benefits.

The tool that OLE DB provider and component vendors use to simplify their work is the OLE DB SDK. Besides all of the data access consumer components discussed in this paper (ADO, OLE DB and ODBC), users of the SDK receive additional tools, documentation and specifications to help them create high-performance OLE DB components. The latest release of the SDK (in beta as of this writing; due for release fall 1997) contains the following:

These tools simplify the process of writing OLE DB components, provide a framework for creating components that interoperate in well-defined ways and provide criteria by which OLE DB consumers can easily compare component features. Anyone interested in creating OLE DB components should obtain the OLE DB SDK, available for free download from http://www.microsoft.com/data/oledb/ (connect-time charges may apply).

Going forward, the OLE DB SDK will become part of the Data Access SDK, which will include the following:

See the section “Universal Data Access: A Road Map for the Future” for more information about the Data Access SDK.

How Vendors Participate in Universal Data Access

Microsoft is interested in working with vendors of products that support Universal Data Access to help ensure that components address the performance and quality demands of our joint customers. Vendors of DBMS products, development tools and OLE DB service components should visit http://www.microsoft.com/data/ for updated information on programs, products and services. Vendors may also submit e-mail to oledbinf@microsoft.com to register their support.

How Universal Data Access Supports Data on Multiple Platforms

While the Windows NT operating system is emerging as an important platform for database management, many organizations rely on a mixture of operating systems and database platforms. To be successful, any strategy for providing data access must be equally efficient at accessing data on all major platforms. Universal Data Access provides the foundation for supporting efficient and reliable access to data on today’s major computing platforms. Microsoft is actively engaged in supporting third-party development projects involving OLE DB providers for non-Windows-based data. In fact, products using the Universal Data Access architecture to access leading DBMSs on non-Windows platforms are currently available.

Figure 4. Universal Data Access Supports Data Across the Enterprise

Because the OLE DB specification defines interfaces that components support, rather than providing a set of DLLs or actual system components, it is highly portable to other operating environments. OLE DB is based on the COM architecture, which is the Windows object model. This would seem to imply that OLE DB components must run on a Windows- or Windows NT-based PC; however, this is not the case. OLE DB in fact has two separate approaches that provide portability to non-Windows-based DBMS platforms:  a full port of COM, available today from several vendors, and implementations of COM interfaces on non-Windows platforms.

Today ISG International Software Group Ltd. (http://www.isgsoft.com/), with its ISG Navigator product, provides an example, using the second approach described above, of Universal Data Access components that integrate data from Windows NT and several non-Windows NT platforms, including the following:

Operating Environments:

  Windows 95
  Windows NT (Intel and Alpha)
  HP-UX
  IBM RS/6000 AIX
  DEC UNIX
  Sun Solaris
  DEC OpenVMS (Alpha and VAX)
  IBM MVS (planned)

Database Platforms:

  Microsoft SQL Server
  Oracle
  Sybase
  Informix
  RMS
  C-ISAM
  CA-Ingres
  DB2
  Adabas (planned)
  IMS/DB (planned)
  Rdb/VSAM (planned)
  MUMPS (planned)

The important thing to recognize about the ISG Navigator product is that its availability and performance prove the ability of the Universal Data Access architecture to integrate data between Windows and non-Windows platforms. It is not the only approach to satisfying the need for multiplatform data access, but it is a solid implementation available and demonstrable today. The Navigator demo, available to anyone visiting the ISG Web site, demonstrates several important features:

Broad availability of Universal Data Access components that integrate data on multiple platforms will benefit organizations that support multiple DBMS platforms. An additional benefit is that users can take advantage of new OLE DB capabilities when accessing non-Windows-based data. Powerful new service components, running on front ends or middle-tier servers, can be integrated with any OLE DB provider, including those running on non-Windows platforms. For example, general-purpose query processors, cursor engines or custom business objects can all add value to non-Windows-based data exposed through OLE DB. Mainframe and UNIX-based databases that previously did not support remoting of data — an essential feature for the Internet and loosely connected scenarios — may now gain it, allowing greater use of existing systems and applications. This powerful extensibility and reusability model is a benefit of component-based software written to a broadly supported specification such as OLE DB.

How Universal Data Access Differs From Other Strategies

A number of leading DBMS vendors have begun shipping new databases and updated versions that follow “universal database” strategies. Many Microsoft customers may be curious about how those strategies differ from Universal Data Access.

In the other approaches, data from across the organization is consolidated in the DBMS, and the DBMS is extended with additional processing capabilities to handle new data types. This strategy is attractive for several reasons:

Microsoft, while recognizing these benefits, believes they may be difficult for some organizations to attain. A universal database approach may require expensive and time-consuming movement of corporate data into the DBMS and ongoing maintenance of that data there. It may require tools and applications that explicitly support it, and it may force compromises in the selection of supporting products. Customers’ applications will need either to support this architecture out of the box, which is unlikely, or to be customized to integrate with it, which could be expensive.

It is very important to note that Universal Data Access does not exclude any data stores, so the two strategies, which may appear to compete, actually cooperate. In fact, OLE DB providers for a number of new “universal database” products are currently under development. Using the Universal Data Access strategy, customers will be able to use data in their existing databases, universal database servers, desktop applications, mainframes, etc. Organizations that combine Universal Data Access and universal database products will ultimately benefit from a broad choice of best-of-breed tools, applications and DBMS products available from leading data access vendors.

Universal Data Access: A Road Map for the Future

Going forward, Microsoft has two important vehicles for shipping the supporting components of its Universal Data Access strategy: the Microsoft Data Access Components and the Data Access SDK. Long-term planning and development for both of these products are under way, and customers may be curious as to the future directions for the Universal Data Access strategy expressed in these plans.

MDAC

The mission for MDAC is to provide, in a single synchronized release, all of the key data access technologies used across Microsoft tools, applications and platform products. The MDAC release schedule is largely driven by internal “customers” at Microsoft, whose products the major MDAC releases are delivered to support. MDAC 2.0 will be shipped concurrently with the next release of Microsoft® Visual Studio™. MDAC 2.5 will be shipped concurrently with the next release of Office. Each subsequent release will be tested and supported for use with the latest versions of the following Microsoft products:

There are three project-level design goals for the next release of MDAC:

Data Access SDK

The Microsoft Data Access SDK is a set of tools and samples scheduled for release in 1998. It is designed to help developers create solutions using MDAC. The SDK provides a convenient single source for everything needed to learn about and create data access solutions.

The SDK will contain MDAC version 2.0, tools for getting started with data access component development, tools for testing and distributing components, and documentation. The SDK will be activity-based, with content designed specifically for both consumer and provider writers and for developers working with various languages and deployment environments.

Conclusion

Organizations of all sizes today are creating business solutions that leverage data from the desktop to the enterprise. As the types of data and types of access have proliferated, the challenge to create business advantage has remained paramount. Microsoft has designed the Universal Data Access strategy to meet the needs of today’s distributed, multiplatform organization building client/server and Web-based data-driven solutions. By building in performance and reliability features, by making Universal Data Access a key part of the Windows DNA architecture, and by enlisting the support of a broad range of industry players, Microsoft is aggressively meeting customer needs.

Universal Data Access helps organizations build on existing systems and data stores as they create new client/server and Web-based solutions. Universal Data Access bridges the gap between existing systems and new technologies to create an evolutionary path for cost-conscious customers. As customers forge new business opportunities, Microsoft will be there to provide tools and technologies to enable success.