Microsoft(R) Windows[TM] for Pen Computing System Architecture

Stephen Liffick

Created: April 7, 1992

ABSTRACT

This article discusses the architecture of the Microsoft® Windows™ for Pen Computing system. It describes elements internal to Windows for Pen Computing and explains how these elements interact with the Windows version 3.1 graphical environment.

THE GOALS

The purpose of Windows for Pen Computing is to enable great pen applications for the Microsoft® Windows™ graphical environment. Great pen applications require more than simple mouse and keyboard emulation; they need a powerful and flexible application programming interface (API) that can emulate “pen and paper” interaction with the user in a natural and intuitive manner. This requirement implies several behaviors:

The pen must behave in a manner we expect. That is, it must leave ink as it moves across the surface of the screen in the same way that a pen leaves ink on paper.

We must be able to indicate command and position in a single action—otherwise known as gesturing—with the pen. This is a unique advantage of a stylus input device and computer combination not available with normal pen and paper interaction.

We need to ensure that applications have access to all recognition results and their alternatives as well as to the ink entered by the user. Applications can thus do the “right thing” with user input, whether it is leaving the information as ink on the page, putting up the recognized text or shape, or providing a list of alternatives so that a user can indicate the correct result. The API should be flexible enough to support the behavior that an application deems reasonable and appropriate for its interface.

Given that handwriting recognition is difficult and that specialized recognizers can be envisioned (for example, for mathematical symbols or Gregg shorthand), we should enable the creation of custom recognizers and provide a mechanism for passing a single piece of user input to several recognizers, giving each a shot at coming up with the correct recognition results.

The installed base of Windows applications should be able to interact with the pen. Whereas this should certainly take the form of simple mouse emulation and a writing “window” from which recognized results could be sent to applications, it would be preferable to have this behavior work in all writing areas in the system, regardless of whether the application has been modified to specifically support the pen.

OVERVIEW OF COMPONENTS

Figure 1 shows the components of Windows for Pen Computing and their relationships to Windows version 3.1 and to Windows-based applications.

Figure 1.

Environment

The Windows for Pen Computing environment includes Windows version 3.1, old (pen-compatible) Windows applications, and new Windows applications designed or modified for the pen.

Windows version 3.1

Windows for Pen Computing supports all APIs and components of Windows version 3.1.

Old Windows applications

In Figure 1, the box labeled Old Windows Applications refers to unmodified Windows applications that execute in Standard or Enhanced mode. These applications were not designed for the pen interface but can receive pen input in translated form.

New Windows applications

In Figure 1, the box labeled New Windows Applications refers to new applications that call the pen API directly. They bypass the Pen Message Interpreter (see the following sections for a description of this component) to provide direct, intuitive, and leading-edge pen interface functionalities. Often these functionalities take the form of position-dependent behaviors that are impossible to detect with Pen Message Interpreter “guesses.” For example, if the user enters a circle in a writing area, it is the letter ‘O’; if the user enters it in a drawing area, it should be “snapped” to an exact circle; if in a scratch area, it remains as ink; and if over an object, it selects the object. The application can handle user input correctly because it calls the pen API directly and can determine the correct action, given both the location and the context of the user’s input.

Components

The Windows for Pen Computing system consists of six components:

Recognition Context (RC) Manager

Pen driver

Display driver

Recognizer

Dictionary

Pen Message Interpreter

The following sections describe these components in detail.

RC Manager

The RC Manager is the heart of Windows for Pen Computing. Like a device context (DC) that embodies all information necessary to output graphical data to a device, an RC embodies all information necessary to carry out pen interaction and handwriting recognition.

The RC Manager handles recognition events and interacts with other pen components to complete the events. It:

Records points from the pen driver and passes them to Windows.

Serves as the implementation point for the vast majority of new pen APIs.

Integrates the work of the recognizer and dictionaries.

Packages results for applications.

The RC Manager is implemented in PENWIN.DLL; it is the USER.EXE of Windows for Pen Computing. Throughout this document, PENWIN and RC Manager are used interchangeably to refer to Windows for Pen Computing functionalities.

Pen driver

The pen driver interacts with the stylus hardware and passes information to the rest of the Windows system by way of PENWIN.DLL.

Two files are associated with the pen driver: (1) an installable Windows device driver that uses the installable driver interface of Windows version 3.1, and (2) a virtual driver (VxD) that handles interactions with the hardware when Windows is running in Enhanced mode.

Several constraints imposed on the pen driver arise from the need to use its input for handwriting recognition. These constraints are:

1. It must be able to report its (x,y) location 100 times per second. This report rate is necessary to provide a sufficient number of samples for handwriting recognition, given the speed at which the average person writes. In other words, this report rate ensures that the true path of the pen is reported with enough accuracy to support the efforts of vector-based recognizers.

2. It must be able to report pen positions with a resolution of 200 dots per inch (dpi). This requirement is based on the need for sufficient granularity in “ink” coordinates to judge the path of the pen over the digitizing surface accurately. Requirement (1) ensures that the digitizer reports positions fast enough for us to notice changes, and requirement (2) ensures that the positions reported are fine enough for a recognizer to derive useful information from them.

3. Finally, regardless of the actual resolution of the device, it must report the pen position in coordinates of .001 inch. This allows the RC Manager, recognizers, and applications to manage the ink in a known and standard scale. It also corresponds to the HIENGLISH Windows mapping mode that some applications may use.

Display driver

The display driver interacts with the display hardware and the graphics device interface (GDI), as it does in Windows version 3.1. For Windows for Pen Computing, a normal Windows display driver must also have support for inking added to its function set. Inking support takes the form of two functions—InkReady and GetLPDevice—and the ability to be called at interrupt time by the RC Manager to perform inking. The RC Manager calls the InkReady function to tell the display driver that when it is ready it should call back the RC Manager to draw some ink. The GetLPDevice function returns a widget necessary for the RC Manager’s ink-drawing function.

The display driver provides one new cursor: a pen in the NW orientation.

In all other ways, the display driver requirements and responsibilities are the same as those for a standard Windows version 3.1 driver.

Recognizer

The recognizer is a dynamic link library (DLL) that communicates with the RC Manager by means of a defined protocol. This component translates pen input into recognized symbols. The symbols can be members of the ANSI character set, mathematical symbols, symbols associated with electrical diagrams, or any other set of printable symbols that might be of interest to applications.

The recognizer component of Windows for Pen Computing is completely replaceable and modular. An application can send one sample of user input (ink) to a number of recognizers and then examine the results. Applications can use the Microsoft Recognizer or plug in other recognizers designed for pen computing.

The Microsoft Recognizer is vector based. It analyzes the points entered by the user not as an image but as a succession of positions that when broken down correctly yield a set of features against which comparisons can be made. The order in which a user enters points is very important. Microsoft Recognizer supports the ANSI character set, delayed strokes (for example, the crosses on t’s and the dots on i’s and j’s), the 13 standard Windows for Pen Computing gestures, and the circled letters of the alphabet.1 The Microsoft Recognizer is trainable and can learn the peculiarities of a person’s handwriting to achieve higher recognition rates.

Dictionary

Windows for Pen Computing uses a dictionary to check a recognition result against a set of words or, more generally, against a set of acceptable results.

A dictionary is a DLL that communicates with the RC Manager. After the RC Manager receives a result from a recognizer, it passes the result to the dictionary, which can then correct that result. A dictionary can be a common English language dictionary, a medical dictionary, a set of proper names, and so on. The dictionary path can contain multiple dictionary DLLs. The RC Manager calls each dictionary in turn until one corrects the recognition result.

A good way to think of the dictionary is as a general means to perform postprocessing on a recognition result.

Pen Message Interpreter

The Pen Message Interpreter interacts with old Windows applications. It translates gestures, handwritten input, and pen events into the corresponding mouse and keyboard events for use by applications that do not interpret these gestures themselves. This mechanism provides some measure of compatibility between the pen and old Windows applications.

The most significant feature of the Pen Message Interpreter is its ability to act as an intermediary between an old application and Windows for Pen Computing, regardless of whether the application uses the pen API. Windows for Pen Computing can “notice” when an application uses the system I-beam cursor in writing areas and can allow handwritten input of gestures and characters. When a user begins writing in such an area, the Pen Message Interpreter intercepts the normal pen-to-application data flow.

As the system receives pen input, the Pen Message Interpreter translates the information into the appropriate mouse and keyboard equivalent messages and passes it to the application. The application has no knowledge of the pen or of pen input; it simply receives keyboard and occasional mouse messages. Because most Windows applications rely on standard keyboard and mouse interfaces (for example, Cut is SHIFT+DELETE, Paste is SHIFT+INSERT, double-clicking selects a word), this methodology functions well.

The compatibility provided by the Pen Message Interpreter is weak in some areas:

The Pen Message Interpreter cannot handle nonstandard Windows applications.

Applications that run on Windows today were not designed with the pen in mind and thus are not likely to be as “pen and paper” usable as newly conceived and designed pen applications.

DATA FLOW

We now follow pen input through the system to see how the stylus interacts with Windows, with old Windows applications, and with applications designed specifically for the pen (see Figure 2).

Figure 2.

The Beginning

A good place to start is with the stylus device, or the pen driver. Just as Windows is keyboard-input and mouse-input driven, Windows for Pen Computing is pen-input driven. For the sake of this discussion, the application has requested no handwriting recognition, and the stylus is behaving as a mouse. It reports the pen status at least 100 times per second to the RC Manager by means of the PenPacket function in the RC Manager. The pen driver can report information other than simple x and y positions. For example, the application can request the pressure with which the user enters the point, the angle of the pen, the rotation of the pen, and the distance of the pen from the tablet surface.

With the PenPacket function, all stylus input is passed to PENWIN.DLL (the RC Manager), which handles the input. How it handles that input depends on the recognition mode.

The RC Manager--Two Modes

The pen can be in mouse mode or in pen mode. In mouse mode, it acts as a mouse and points, clicks, drags, selects, and so on. The components of Windows for Pen Computing act as a message pump from the stylus to the Windows system, converting pen events into their mouse message counterparts. The pen is a virtual mouse in this mode.

In pen mode, the stylus acts as a pen, dropping ink, passing data to a recognizer, and so on. Windows version 3.1 is not involved in the process. The RC Manager, pen driver, recognizer, and display driver enter a closed universe in which they and they alone party on stylus input. The pen driver reports it, the recognizer recognizes it, the display driver draws it, and the RC Manager handles the interactions necessary to do all the above.

By default, the pen is in mouse mode. When an application, or actually a window, requests that pen input begin, mouse mode changes to pen mode. This transition is invisible to the user, who simply notices that the pen inks where it should ink and points and clicks where it should point and click.

The RC Manager--The Message Pump

While the pen is in mouse mode, the RC Manager translates the stylus events into the appropriate mouse events and passes this information to the Windows kernel. The RC Manager and pen driver create a virtual mouse driver.

Mappings are straightforward. If the pen touches the surface of the digitizer, the RC Manager reports a WM_LBUTTONDOWN message to the kernel. If the pen moves across the surface of the digitizer, WM_MOUSEMOVE messages are generated, regardless of whether the pen touches the surface. (Some devices detect the pen even when it is not in contact with the digitizer.) If the pen leaves the surface of the digitizer, the RC Manager reports a WM_LBUTTONUP message to the kernel. This is all there is to it. The kernel generates double-taps, coalesces mouse messages, handles input synchronously, and so on in the same way as it processes mouse input from a standard mouse driver. All pen behaviors are generated for “free.”

The RC Manager does coalesce messages on its own. Because the data from the stylus is high resolution, a “mousemove” on the digitizer may not mean that the mouse should move on the screen. Because a single screen pixel maps to many digitizer pixels, a move from one digitizer pixel to another does not necessarily mean that the cursor position on the screen associated with the new digitizer pixel has changed. Sending every new stylus location to Windows would result in large numbers of WM_MOUSEMOVE messages for the same screen location. The RC Manager coalesces these messages, reporting a new location only when it will result in a new screen location. (Because of rounding errors, the RC Manager cannot do this job perfectly. Applications can thus receive two WM_MOUSEMOVE messages for the same location, one right after the other.)

The RC Manager--The Data Store

The RC Manager also buffers all data it passes to the kernel. This is necessary for two reasons:

The digitizing device is high resolution with respect to the screen. Because a recognizer requires this resolution to recognize handwritten input, we cannot rely on the WM_MOUSEMOVE resolution information (inherently limited to screen resolution) for recognition. Hence, we must store the high resolution (x,y) information we are not passing to Windows so that it is available later when we need it for recognition.

Pen events are interrupt-level events with very high report rates. Because all input in Windows is handled synchronously, an application can take a very long time to handle a message while input continues to stream in. The kernel buffer is not large enough to store all this information, most of which would be coalesced anyway. Therefore, the RC Manager must store it locally to ensure that no data is lost.

The Application

A Windows application receives notification that an inking-recognition process should begin from the WM_LBUTTONDOWN message. Because, by definition, the pen touching the tablet surface results in a WM_LBUTTONDOWN message, the application must understand that this message has occurred over an area that should be an “inking” area, for example, a text box. In a text area, an application will want to begin inking because the user expects this to happen. When an application receives a WM_LBUTTONDOWN message, it must call back into the RC Manager and tell Windows for Pen Computing to begin inking, to recognize the ink, and to manage all stylus interaction until the recognition event is over.

An application uses the Recognize function for this purpose, calling it when it receives the WM_LBUTTONDOWN message. The function does not return until the results of the recognition event have been obtained from the recognizer, packaged, modified by any dictionary processing, and returned to the application.

The GetMessageExtraInfo Function--Digitizing Events, Mouse Messages, and Time Travel

It is not possible to determine which (x,y) event from the digitizing device (stored in the private RC Manager buffer) maps to the mouse event being handled by the application. While Windows input is being handled synchronously and potentially with large delays, stylus events still stream in at 100 points per second. Regardless of what Windows applications are doing, the interrupts from the digitizing device continue to stream in, and new (x,y) positions for the pen continue to be recorded in the private RC Manager buffer.

By the time the application gets around to calling the Recognize function, determining which buffered events “belong” to the mouse message being processed by the application is not possible. The solution is to associate the WM_LBUTTONDOWN message being handled by the application with an RC Manager buffered event. The facility for doing this is provided by the GetMessageExtraInfo function, a new Windows version 3.1 API.

The components of Windows for Pen Computing actually report more than just (x,y) information when they call the kernel mouse_event entry point. They also report a 32-bit magic number that the kernel stores with the message. This number is a pointer into the RC Manager buffer that links the Windows message to a specific digitizer event. Because the kernel retains this pointer for the life of the message, the information is available to applications regardless of how long it takes for the message to arrive. An application can access this pointer by means of the GetMessageExtraInfo function while processing the WM_LBUTTONDOWN message and pass it to the RC Manager as one of the parameters of the Recognize function. With this pointer, the RC Manager knows where the buffered events from the digitizer become significant and can safely understand what data to begin sending to the recognizer.

The RC Manager--Pen Mode

After the Recognize function call, the stylus device is put into pen mode, and the recognizer begins to get data and perform recognition.

In Figure 2, notice that Windows is out of the loop when the RC Manager is in pen mode. From the beginning to the end of recognition, the components of Windows for Pen Computing interact without benefiting from, or requiring, the rest of Windows.

After an application calls the Recognize function, three major procedures—inking, recognition, and dictionary processing—must be carried out before the results can be packaged and returned to applications.

Inking

Ensuring that the ink the pen leaves on the display models the way a pen leaves ink on paper requires a timely display of “bits” as well as accurate position information to associate a point on a digitizing device with a point on the display. (Note: They are disjoint devices. The drivers in question must calibrate their relationship in such a way that the pen and the ink “line up.” Windows for Pen Computing includes an interface for calibrating simple (x,y) drift. The calibration interface does not, however, deal with rotating and compressing the digitizing matrix.)

The inking procedure is simple. After the pen driver calls the PenPacket function to report and record new data, the RC Manager calls the display driver, informing it that ink is to be drawn. The RC Manager provides a callback function pointer that the display driver must use when it is “safe” to ink. If the display driver is busy, for example, in the middle of a large bit block transfer, it must complete the current operation before calling back into the RC Manager. If the display driver is not busy, it calls back into the RC Manager immediately, which in turn draws the ink. The RC Manager calls a display driver entry point directly to draw the ink, behaving a little like GDI. (The RC Manager uses a display driver function required of all Windows display drivers to draw the ink directly, so this cannot fail.)

The RC Manager calls the display driver directly because ink is drawn at interrupt time. GDI is not reentrant, so it cannot be relied upon to display ink on the screen in a timely manner.

Note:

It was necessary to design an interface between the RC Manager and display driver to avoid calling other components of Windows. The availability of a known set of display driver functions makes this a safe endeavor as long as the display driver controls the timing of the whole procedure. The display driver determines the right time for the RC Manager to draw the ink by initiating the entire process with a callback into the RC Manager.

Recognition

While the display driver and the RC Manager are inking, the recognizer works on recognizing the points associated with the ink being drawn on the screen.

The RC Manager calls the RecognizeInternal function that all Windows for Pen Computing compliant recognizers export. The recognizer then enters a loop in which it queries the RC Manager buffer for data, after which it performs whatever black magic it needs to recognize the data into character (or other) symbols. Logically, the loop looks something like this:

    while (GetMoreXYData() != TERMINATION_EVENT) {
        if (ThereWerePoints)
            DoBlackMagic();
    }
    SendResultsToDictionariesAndApplications();
    return;

Note:

A number of different events can terminate recognition, including the following: the pen leaving a bounding rectangle provided by the calling application, the pen entering an exclusion rectangle provided by the calling application, the pen leaving the proximity of the tablet, and a timeout on new pen data.

At interrupt time, while the pen driver is reporting points to the RC Manager, the remainder of CPU attention is devoted to the recognizer. Recognition is occurring concurrently with data entry. In essence, Windows for Pen Computing recognizes input as it is written. This design ensures that the time between the user finishing writing and the results displaying on the screen is short.

When recognition is complete, the recognizer calls back into the RC Manager to process the results. At this point, dictionary processing ensues. Note that this callback into the RC Manager occurs before the recognizer returns from the RecognizeInternal function.

Dictionary processing

The dictionary path provides a means to check a recognition result against an expected or a preferred set of results. Most recognizers, including the Microsoft Recognizer, return alternatives with their “best-guess” result. Semantic or language knowledge can be applied to modify results (that is, to decide whether an alternative is preferable to the best guess) by passing a data structure that embodies the notion “best guess plus anything else remotely possible” to a widget that can make such a determination. In the Windows for Pen Computing system, that widget is a dictionary: it takes the “best guess plus anything else remotely possible” structure and decides whether any of the remote possibilities should replace the best guess.

A Windows for Pen Computing dictionary is a DLL with a predefined set of exported functions. The RC Manager can use the LoadModule and GetProcAddress functions to access the required functions on the fly. The concept of a dictionary and its capabilities are defined fairly loosely in the Windows for Pen Computing architecture. A dictionary may be relatively “smart,” with lots of contextual intelligence, or relatively “dumb,” doing brute-force word lookup and replacing a best guess with an alternative. The API is flexible enough to support either. Dictionary processing is defined and modularized to encourage competition and to accommodate dictionaries by third parties.

The RC Manager passes a single result to a chain of dictionary DLLs, one after the other, until one of the dictionaries decides to correct the results. Once a single dictionary has determined that it knows the results should be corrected, the RC Manager stops calling dictionaries in the chain, packages the results, and sends them to the application that called the Recognize function by means of a WM_RCRESULT message.

The Results

A new message, WM_RCRESULT, carries the recognition results back to an application. The message includes a pointer to a data structure that contains the ink entered by the user, the best guess of the recognizer, the bounding rectangle of the input, and the list of alternatives. The application can process this information as it sees fit.

All of this occurs before the Recognize function terminates. In fact, we are several functions deep at this point (see Figure 3).

Figure 3.

After an application returns from the WM_RCRESULT message, the call tree is “unwound,” with some cleanup occurring at each step. Finally, the Recognize function returns and the application completes the remainder of WM_LBUTTONDOWN processing.

An application that has called the Recognize function receives no WM_LBUTTONUP message. The RC Manager removes this message from the queue and does not allow it to be passed on to the application.

The Pen Message Interpreter and the Rest of the System

The cursor is an I-beam over an area in which text is input and managed. As mentioned earlier, the RC Manager leverages this fact to enable writing in applications that otherwise wouldn’t allow writing. The PenPacket function in the RC Manager detects that the cursor is an I-beam and changes it into a pen cursor by starting pen mode. When the pen goes down and pen mode is entered, the Pen Message Interpreter creates an invisible window that is placed over the entire screen.

This invisible window serves as the “agent” for a pen-unaware application. Inking actually occurs on this window; the window interacts with the pen API to perform recognition. The WM_RCRESULT message is sent to the invisible window, which then maps the results to keystrokes and to potential mouse messages. The WM_CHARs, WM_MOUSEMOVEs, and WM_LBUTTONDOWNs that correspond to the gesture or text are entered into the system at the lowest level—through keyboard_event and mouse_event as appropriate. The Pen Message Interpreter serves both as a logical keyboard and as a logical mouse. After the events are posted, the invisible window is destroyed, and the user is none the wiser. When the window is destroyed, the ink disappears and any pen input is treated correctly. As a result, in the Notepad applet for example, interaction with the pen is possible, all gestures function as expected, and text can be entered at the insertion point.

The Pen Message Interpreter combines a fairly elaborate architecture with an understanding of a Windows-based application’s reliance on standard keyboard and mouse shortcuts to allow pen interaction in applications designed only for the keyboard and the mouse. However, developers and users will find applications designed with the pen in mind significantly more usable than their merely compatible brethren because these applications can improve recognition through context awareness and make judicious and appropriate use of leaving ink on the screen as ink.

The Gesture Macro Layer

The gesture macro layer provides an additional layer of functionality. Because handwriting recognition is difficult and because people frequently use common blocks of text or groups of words, a standard system interface for mapping circle letter gestures to sequences of keystrokes is desirable. The gesture macro layer, or Gesture Manager, is the interface associated with this capability in the Windows for Pen Computing system. This component provides a means to bind circle letter gestures to blocks of text and keystrokes.

The gesture macro layer is a system service, and it is very much a macro layer. In a keyboard macro layer, binding a keystroke to something implies that the keystroke is no longer available to applications. Likewise, in the gesture macro layer, binding a circle letter gesture to something implies that the gesture is no longer available to applications. Users expect this model, although it means that circle letter gestures provided by an application can be superseded by a macro binding.

Figure 4 illustrates the functionality of the gesture macro layer.

Figure 4.

When a recognition result indicates a circle letter gesture, the result is passed to the gesture macro layer, which determines what should be done with the result. The gesture macro layer has three choices: (1) not to bind the gesture; (2) to bind the gesture only to printable characters; (3) to bind the gesture to nonprintable characters.

No gesture binding

If the gesture macro layer does not bind the gesture, the WM_RCRESULT message is passed to the application as if the gesture macro layer did not exist.

Gesture binding to printable characters only

If the gesture macro layer binds a circle letter gesture to printable characters only (a common occurrence), WM_RCRESULT returns the printable characters to the application as the best guess of the recognizer. The alternatives are still available, but the best guess is the user-provided character mapping for the circle letter gesture in question. This action has two consequences:

The application gets a result that bears no resemblance to the ink the user entered or to the alternatives suggested by the recognizer. Specifically, the ink is that for a circled letter, and the result may be a long string. To help the application understand this, a flag is set in the RCRESULT structure, indicating that the Gesture Manager made a replacement.

The positional information associated with the gesture is retained. Because no translation to WM_CHARs has occurred and the entire set of results is available to the application, any position dependence associated with the gesture is maintained. For applications that attach importance to the position of input, this ensures that the maximum level of information is retained.

Gesture binding to nonprintable characters

A gesture can be bound to invisible or nonprintable characters, such as ALT, CTRL, or F1. For example, the circle-s gesture can be bound to ALT,F,S (or Save File) in most Windows applications. If a gesture binding has characters of this nature, it cannot simply be “stuffed” into the best-guess result from the recognizer.2 Because returning nonprintable characters with the WM_RCRESULT message is not possible, the characters must be sent to the application as WM_CHARs. Two subcases exist for nonprintable character binding:

Case 1: The nonprintable characters are keyboard shortcuts for standard editing gestures. Examples are SHIFT+DEL as Cut and SHIFT+INSERT as Paste. In this case, the gesture macro layer replaces the circle letter gesture result with the corresponding standard editing gesture result. (For example, when circle-x is mapped to SHIFT+DEL for Cut, the gesture macro layer replaces “circle-x gesture” with “cut gesture.”) Thus, to the pen-aware caller (remember that this may be the Pen Message Interpreter, which turns around and immediately unmaps our carefully remapped gestures) the circle letter gesture is simply transformed to the appropriate standard editing gesture.

The standard keyboard shortcuts are logically “reserved” by the gesture macro layer and override the application’s own conventions. For example, an application may use SHIFT+DEL for something other than Cut. If the user maps a circle letter gesture to SHIFT+DEL, the gesture macro layer translates it to the standard editing gesture, which is Cut. Hence, the pen-aware application will never see SHIFT+DEL; it will see Cut.

Case 2: The nonprintable characters cannot be mapped cleverly by the Gesture Manager; they must be translated to keystrokes. In this case, the gesture macro layer becomes a logical keyboard driver and sends the characters to the application as WM_CHAR messages. The keystrokes are entered into Windows by means of the keyboard_event API in the kernel.

CONCLUSION

Windows for Pen Computing takes advantage of the complex relationships between Windows version 3.1, device drivers, and applications to generate a powerful and flexible pen development environment. Windows computing has already influenced the growth and direction of the personal computer industry dramatically. Windows for Pen Computing will achieve the same level of success by revolutionizing the design of application software and the hardware on which it runs. Windows for Pen Computing will be the driving force behind a dramatic change in personal portable computing in the 1990s. We look forward to sharing this revolution with you.

1Circled letters are special gestures that can be bound to text and keyboard equivalents.

2The reason is that the best guess from a recognizer is always printable. A recognizer returns values that map only to printable items. Recognizers actually return special 32-bit values for each recognized symbol. Nonprintable characters are not represented in the space of acceptable 32-bit recognizer symbol values.