Binary Compatibility Basics

Last reviewed: November 2, 1995
Article ID: Q95162
The information in this article applies to:

 - Microsoft Windows NT versions 3.1 and 3.5
SUMMARY

Because of differing instruction sets between processors, object code is not binary-compatible across platforms. To enable object code compiled for processor X to run on processor Y, a virtual machine must be created on processor Y to emulate processor X's instructions, which results in a considerable performance hit. For this reason, Windows NT offers strong source compatibility, which makes it a relatively simple matter to compile the same code into native object code for each platform. The Hardware Abstraction Layer (HAL) smooths over differences between hardware implementations. All access to the hardware from the operating system (OS) goes through the HAL, which makes changing both the hardware and the OS itself much simpler. The HAL, however, does not emulate the instruction set of the platform; that it does is a common misconception.
MORE INFORMATION

Source code is compiled into executable or object code for the instruction set of a specific processor or processor family (x86, 680x0, R3000/4000, and so forth). This code runs natively only on that type of processor. Remember, executable code reduces to 1's and 0's, hence "binary." Even if two processors have a large overlap in their available instructions (and they do: MOVs, XORs, and so forth at the assembly language level), what is actually executed is not the relatively "high level" human-readable assembly code but merely a series of bytes that only by convention are assigned the meanings readable at the assembly language level. For example, probably every microprocessor has an OR instruction. On the Intel x86, the OR instruction may be represented in the instruction stream by the bytes "09 [effective address]" (see page 456 of Microsoft's 80386/80486 Programming Guide, 2nd Ed.). On the R4000, however, it is nearly guaranteed to be something different.

Thus, if you want to run an executable compiled for Intel on a MIPS chip, you must run a program that behaves as an Intel instruction interpreter (similar to a Basic interpreter, but much more complex). Such a program is called an emulator; it scans through the Intel object code and in turn executes equivalent instructions in MIPS form on the processor. But the emulator must do much more than that: it must also create a "virtual machine," a complete software execution environment that behaves similarly to the original hardware/software environment that the executable was compiled for. Note that even in an ideal case, every processor X instruction requires about 5-20 instructions on processor Y. The object code interpreter must examine the next instruction/byte, compare its value against known instruction values via some kind of table (depending on the implementation), and then execute the equivalent native instructions.
There is room for optimization, but emulation will never be very fast relative to native code. Therefore, run non-native object code only if you absolutely must. Below is a binary compatibility table for Windows NT:
System          Binary-Compatible on NT?
------          ------------------------
Win16 apps      Yes (via Insignia's x86 emulation code)
Win32 apps      No (must re-compile)
POSIX apps      No (POSIX is a source-code standard)(1)
OS/2 1.x apps   No (don't run on non-x86 machines at all)(2)

Notes:

(1) POSIX 1003.1 is a C-language source-level standard for basic
    operating system services. POSIX applications don't need to be
    binary-compatible even on the same instruction set! That is, a
    POSIX application compiled and linked for SCO on x86 will NOT run
    on x86 Windows NT POSIX or x86 Sun/Interactive POSIX.

(2) OS/2 1.x support is technically feasible but would have required
    more work on the non-Intel platforms, and was not considered a
    high priority.
The Hardware Abstraction Layer (HAL)

A common misconception is that the HAL should allow binary compatibility. The Windows NT HAL has absolutely nothing to do with the issue discussed above; that is, running code compiled for processor X on processor Y. Nor is the HAL akin to a "virtual machine" implementation; it does not simulate anything. Rather, the HAL is simply an example of good modular software design that deals with differences in hardware design between machines with the same instruction set (or between instruction sets). The HAL provides a set of services to the rest of the Windows NT executive that abstracts and "hides" the differences between low-level hardware/software interfaces, such as those to DMA controllers, programmable interrupt controllers, clocks and timers, and so forth.

In a typical pre-Windows NT operating system, there is a great deal of code embedded throughout the operating system (particularly in device drivers) that is specifically tied to particular implementations of hardware services (a particular PIC, a particular DMA chip, and so forth). If one of these hardware pieces is changed, code scattered throughout the system breaks. As a result, the hardware rarely changed, and a typical 486/66 machine today uses the same low-function hardware devices that appear in the IBM AT 286 (if not the IBM PC itself). In Windows NT, any part of the OS (including the kernel and all device drivers) that needs to deal with those low-level devices uses HAL services, and is therefore isolated from changes in the hardware. If you change those hardware pieces, you need only change the HAL. This results in at least the following two advantages:

 - Hardware designs can change without breaking the OS or its device drivers; only the HAL needs to be updated.
 - The OS itself is easier to change and to port to new hardware platforms.
Additional reference words: 3.10 3.50
© 1998 Microsoft Corporation. All rights reserved.