[This is preliminary documentation and subject to change.]
Currently, there is a third-party market of accessibility aids for the Windows operating system, each aid aimed at supporting individuals with a particular set of disabilities. These tools fall into two distinct camps: those that replace or modify existing Win32® devices (such as the keyboard, mouse, or display driver) and those that integrate with applications to provide greater accessibility (such as screen readers or voice-recognition utilities). Active Accessibility targets the problems that these aids face.
Screen access aids for blind or low-vision users provide a prime example: the aid must understand and verbalize anything displayed on the screen. Similarly, voice-recognition utilities must identify the controllable objects on the screen, recognize the name of each when it is spoken, and then programmatically select the object or manipulate its state.
Before the introduction of Active Accessibility technology, accessibility aid developers had to use cryptic hook mechanisms or hack the operating system to gain the information they needed. Generally, they succeeded, but their solutions depended heavily on implementation details that could change between versions of the operating system.
Built on OLE technology, Active Accessibility provides high-performance, reliable tools that enable applications and accessibility aids to work together to help users with special needs. It supplies a comprehensive object model, including the interfaces, libraries, and other API elements that eliminate the need for unreliable hacks.
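To illustrate, the following sketch (an illustrative example only, assuming the standard oleacc.h header and OLEACC import library) shows how an aid might call the AccessibleObjectFromPoint function and the IAccessible interface to obtain the name of the user interface element under the cursor, rather than hooking the display driver:

#include <windows.h>
#include <oleacc.h>
#include <stdio.h>

#pragma comment(lib, "oleacc.lib")

int main(void)
{
    CoInitialize(NULL);

    // Find the screen location the user is pointing at.
    POINT pt;
    GetCursorPos(&pt);

    // Ask Active Accessibility for the accessible object at that point.
    IAccessible *pAcc = NULL;
    VARIANT varChild;
    if (SUCCEEDED(AccessibleObjectFromPoint(pt, &pAcc, &varChild)))
    {
        // Retrieve the object's name (a button's label, for example) so
        // the aid can verbalize it or match it against spoken input.
        BSTR bstrName = NULL;
        if (SUCCEEDED(pAcc->get_accName(varChild, &bstrName)) && bstrName != NULL)
        {
            wprintf(L"Accessible name: %ls\n", bstrName);
            SysFreeString(bstrName);
        }
        VariantClear(&varChild);
        pAcc->Release();
    }

    CoUninitialize();
    return 0;
}

The same IAccessible interface also exposes methods such as get_accState and accSelect, which a voice-recognition utility could use to query an object's state or select it programmatically.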