This White Paper is written for the IT professional with a strong background in UNIX. It begins by describing the architecture of the Microsoft® Windows NT® operating system in a technical framework that is common to UNIX. The paper then moves on to describe many of the interoperability tools that enable the two operating systems to function in a heterogeneous environment.
Companies today are actively re-engineering their information technology (IT) organizations to become closely integrated with mainline business processes. To keep pace with the rapidly changing business environment, organizations must move forward while at the same time leveraging their investment in existing systems. For many, integration between UNIX and the Microsoft® Windows® family of operating systems, especially the Microsoft® Windows NT® operating system, has become a critical success factor.
This White Paper is written for IT professionals with a strong background in UNIX. It begins by describing the architecture of Windows NT in a technical framework that is common to UNIX. This section also serves to bridge the gaps in terminology between Windows NT and UNIX. The paper then moves on to describe many of the integration tools that enable the two operating systems to function in a heterogeneous environment. The final section covers tools and strategies to help with software development targeted for both Windows NT and UNIX. The appendices are designed to point the reader in the direction of other pertinent material.
This is a technical paper, and it is assumed that you have a good working knowledge of UNIX and have at least seen a computer running Windows. It begins with a brief look at both operating systems, including a technical comparison of the two. It moves on to show how Windows NT and UNIX can integrate peacefully in a heterogeneous world, and ends with a discussion about tools available to help developers working in both environments.
In today’s corporate IT environment, a mix of legacy mainframes, minicomputers, technical workstations, database servers, workgroup or departmental servers, and desktop PCs is the usual case. It is in this heterogeneous environment that the IT professional struggles to provide a robust, reliable, and cost-effective infrastructure for the organization to achieve its goals. With this in mind, the more effective tools these professionals have at their disposal, the better they will be able to deliver the required service. Windows NT is an essential one of these tools; UNIX is another.
There are many variations of the UNIX operating system, built and marketed for different purposes. Solaris is found in technical workstation computing; AIX and Digital UNIX are oriented toward business applications; and HP-UX systems fall somewhere in between. SGI IRIX systems are designed for graphics-intensive operations, and SCO UNIX is designed for Intel PCs. Each market has its own challenges, and UNIX variations were tailored to meet those challenges. Because there is no single UNIX, a direct comparison with Windows NT is difficult. Windows NT started with a different approach. It began as a Network Operating System (NOS) with file and print services for PCs, an area once led by Novell NetWare. But Windows NT grew from its file and print origins into a first-class application server, communications server, and Internet/intranet server.
Windows NT is also the foundation upon which the Microsoft BackOffice® family is built. BackOffice is a set of integrated server products, which provide file and print, communications, messaging and groupware, database, host connectivity, systems management, Internet, secure proxy, content creation and Web site management, and information retrieval and search services. Included with Windows NT Server are Internet Information Server, Index Server, Microsoft NetShow™ server, and Microsoft FrontPage® Web site creation and management tool.
Details on other BackOffice products, including Microsoft Exchange Server, Microsoft SQL Server™, Microsoft Proxy Server, and Microsoft Systems Management Server, can be found at http://backoffice.microsoft.com.
The histories of UNIX and C are intertwined. In 1968, many years before the PC, Ken Thompson and Dennis Ritchie worked for the Computer Research Group, which was part of a joint Bell Labs/MIT team building a timesharing operating system for the General Electric GE645 mainframe. Although visionary for its time, the system they were working on, called MULTICS for MULTiplexed Information and Computing Service, had serious drawbacks and never found favor with AT&T management. General Electric subsequently sold its computer business to Honeywell, which continued to market MULTICS for some years.
There were no color graphics displays in those days, but during the MULTICS project, Ken Thompson became interested in a program called Space Travel. The program featured a spaceship that could be piloted through a simulated galaxy. Even though the GE645 was one of the fastest mainframes of the time, Space Travel did not run well under MULTICS. Not letting that get in the way of his enthusiasm, Thompson took a logical approach to the problem. He borrowed an early DEC PDP-7 minicomputer that other groups at Bell weren’t using, wrote an operating system for it, and ported Space Travel to it by 1970. Brian Kernighan jokingly called this new operating system “UNICS” (UNiplexed Information and Computing Service) in reference to the much larger MULTICS. Eventually, the name was changed to UNIX.
Right about that time, Ken Thompson created a new programming language called “B,” derived from its parent, “BCPL.” Early UNIX utilities were written in B, and a few years later, when UNIX was rewritten, Dennis Ritchie had evolved B into the “C” programming language, in which UNIX itself came to be written. C and its successor, C++, are among the most widely used programming languages today.
Longstanding antitrust provisions prevented AT&T from marketing UNIX or UNIX-based products. Because UNIX had no commercial prospects, AT&T never treated it as a real product. Source code was made available to other groups within AT&T and, for educational purposes, to universities. New versions of UNIX proliferated, resulting in a complex history. Standardization efforts have been under way for many years, and most extant versions trace their lineage back to one of two sources: AT&T System V or 4.xBSD (Berkeley UNIX) from the University of California, Berkeley, Computer Science Department (“UCB”).
UNIX had a positive effect on the computer industry. It allowed new hardware companies such as Sun, Apollo, and Silicon Graphics to focus on hardware design without having to invest huge sums of money designing operating systems. (This is analogous to the effect the MS-DOS® and Windows operating systems had on the personal computer.) But as UNIX matured, manufacturers added enhancements and features of their own to differentiate their versions from other UNIX variants.
Many UNIX features familiar today have their origins in academia. UCB was the source of the TCP/IP implementation, sockets, and many common UNIX utilities. Because early versions of UNIX were not designed to be secure commercial operating systems, they did not offer strong security features. By the late 1980s and early 1990s, some versions of UNIX began to take on commercial characteristics, such as improved security, resilience, and fault tolerance. Although security remained an issue, UNIX continued to flourish because it was the only relatively open, standards-based operating system not controlled by a single vendor that could be used as an alternative to mainframes.
Early this decade the industry’s price/performance ratio shifted dramatically, and symmetric multiprocessing (SMP) architecture-based hardware emerged. UNIX on SMP architectures worked well, and some early versions delivered significantly higher performance as the number of processors increased. Furthermore, UNIX systems proved more flexible than legacy systems and mainframes. Academia’s attachment to UNIX was also a plus. But UNIX vendors added non-standards-based utilities and tools in order to enhance features, which led to larger differences between flavors of UNIX, especially in system administration. Security remained sidelined for two major reasons: first, the Internet had not emerged as the force it is today; second, most corporate networks were not connected to the outside world.
Digital Equipment Corporation (DEC) had one of the most successful general-purpose minicomputer operating systems. It was called VMS, and it supported Digital’s VAX architecture. Dave Cutler led DEC’s VMS development effort.
In 1988, Cutler joined Microsoft to lead the development effort for the new high-end operating system in the Microsoft Windows family, Windows NT. Two primary forces shaped the Windows NT project: market requirements and sound design. The market requirements came from input from its customers around the world. The design goals came from advanced operating system theory and design.
Market requirements dictated that Windows NT provide:
Design goals for Windows NT complemented market requirements:
Windows NT has its roots in the desktop, but it was designed for client/server computing; UNIX has its roots in host-based terminal computing, but was redesigned to meet other specific requirements. They are two different but complementary computing paradigms.
Windows NT is not a multiuser operating system in the usual sense of the word. Users do not have limited-function character terminals, terminal emulators, or X-terminals connecting to a Windows NT-based host. What users have are single-user, general-purpose workstations, or “clients,” usually running Windows 95 or Windows NT, connecting to multiuser, general-purpose servers with the processing load shared between both. The distinction between the two environments is subtle, but understanding it is key to understanding Windows NT.
Windows NT follows what has become the standard terminology for the client/server relationship, with the client being on the desktop and the server being in the back office.
The most common Graphical User Interface (GUI) on UNIX is the X Window System, developed at the Massachusetts Institute of Technology (MIT) as part of Project Athena. The most commonly implemented version of X is X11. Built on top of X are the Motif API, library, and Style Guide from the Open Software Foundation. The Common Desktop Environment, which standardizes the desktop and tools, is built on top of Motif. OpenLook, developed by Sun and AT&T, is another popular X-based graphical user interface. Although Motif is recognized as an industry standard, Sun still delivers OpenLook on its workstations, and many users keep it as their default graphical user interface.
X allows UNIX systems to deliver a GUI to the user, but it can present management issues. Some issues include heavy resource requirements, the complexity of managing resources such as fonts or boot servers, and the various daemons required on the host.
Microsoft Windows NT uses the same GUI as Windows 95, which is easy to manage, intuitive, and familiar to most users. It also has low resource requirements.
Editor’s note: Throughout the paper there are references to Windows, which is the Microsoft operating system. Do not confuse this with the UNIX X Window System, also known as X Windows or just X. Windows will be used to refer to Microsoft Windows, and X will be used to refer to the X Window System.
In modern operating systems, applications are kept separate from the operating system itself. The operating system code runs in a privileged processor mode known as kernel-mode and has access to system data and hardware. Applications run in a nonprivileged processor mode known as user mode and have limited access to system data and hardware through a set of tightly controlled application programming interfaces (APIs).
Windows NT is a multithreaded microkernel-based operating system. This is akin to Mach, a multithreaded, microkernel-based UNIX operating system developed at Carnegie Mellon University. Keeping the base operating system as small and as tight as possible was one of the primary design goals of Windows NT. To do this, Microsoft kept in the base operating system only those functions that could not reasonably be performed elsewhere. Functionality pushed out of the kernel was put in six nonprivileged servers known as protected subsystems. The protected subsystems provide the traditional operating system support to applications through a feature-rich set of APIs. (Editor’s note: With Windows NT version 4.0, the GUI system was put “back” into the kernel for display performance considerations.)
This design results in a very stable base operating system. Enhancements occur at the protected subsystem level. New protected subsystems can be added without modifying either the base operating system or the other existing protected subsystems.
In UNIX systems, functionality is added to the kernel itself. Although this delivers good performance, it makes the system vulnerable to the harmful side effects of poorly written kernel extensions. Furthermore, few UNIX kernels today are dynamically linked, so relinking and reloading the kernel is often required, which is another systems management task.
Addressing Open Systems and Industry Standards is important before exploring the inner workings of Windows NT. Open Systems means different things to different people. But the goal of Open Systems remains the same: allow the customer to “level the playing field” when choosing between hardware vendors and provide well-defined programming interfaces that won’t risk obsolescence.
The subject of Industry Standards is every bit as perilous as that of Open Systems. There are two categories of standards: de jure and de facto. De jure standards are those that have been created by standards bodies such as the American National Standards Institute (ANSI), the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO). Examples of de jure standards are the ANSI American Standard Code for Information Interchange (ASCII) character encoding standard, the IEEE Portable Operating System Interface for UNIX (POSIX) standard, and the ISO X.400 and X.500 standards for mail and directory services.
De facto standards are those that have been widely adopted by industry but not originally endorsed by any of the standards bodies. An example is the body of IETF RFCs, which collectively define the TCP/IP protocol suite and the Internet. De facto standards have arisen either to fill gaps left by the implementation specifications of the de jure standards or because no standard had yet been defined for the particular area.
Open Systems based on de jure Industry Standards will likely never occur because the computer industry and the academic standards process are so different. In short, the formal standards process cannot keep up with the rapid pace of technological change. But in today’s Open Systems, de facto and de jure standards merge to create interoperable systems. This enables Open Systems to keep pace with technology.
Central to this approach is using strategically placed layers of software that allow the upper and lower adjoining software layers within the operating system to be loosely coupled. These tightly controlled software layers provide a standardized and well-publicized set of APIs to the software above and below. The Network Device Interface Specification (NDIS)—developed jointly by Microsoft and 3Com in 1989—is an example. So is the Open Database Connectivity (ODBC) specification, which provides developers access to different relational and nonrelational databases using a single API.
The architecture allows for simple plug-and-play of modules above and below the isolation layer. In practice, this means you can start out with a module that implements a de facto standard and later supplement or replace it with one that implements a de jure standard. So you end up with the best of both worlds: Open Systems and Industry Standards.
Windows NT conforms to most de facto and de jure standards. Although there are too many to list in this document, standards defined by ANSI, ISO, IEEE, IETF, SQL Access Group, OSF, and many more are implemented—even embraced—in Windows NT.
Several other theoretical models influenced the design of Windows NT. Three of the most important ones were: the client/server model, the object-oriented model, and the symmetric multiprocessing model. These models provide a framework for understanding the inner workings of Windows NT.
In the X environment, the display software moves graphical display processing from the host computer to the desktop: the software on the desktop acts as a server of display windows to client programs (applications) running on the host. Although this implies a form of distributed computing, it does not by itself make the X environment a true client/server one. Client/server applications can be implemented in an X environment, and many are, but the presence of X alone does not imply a true client/server application.
Windows NT is based on client/server principles and presents a true client/server environment. For example, when a user runs an application, the application is a client requesting services from the protected subsystems. The idea is to divide the operating system into several discrete processes, and each implements a set of cohesive services such as process creation or memory allocation. These processes communicate with their clients, each other, and the kernel by passing well-defined messages back and forth. The Microsoft DCE-compatible remote procedure call (RPC) is the preferred technique for passing these messages across a network.
The client/server relationship creates a modular operating system. The server processes are small and self-contained. Because each runs in its own protected, user-mode address space, a server can fail without taking down the rest of the operating system. Furthermore, the self-contained nature of the operating system components facilitates distributing services across multiple processors on a single computer or multiple computers on a network.
Software objects are a combination of computer instructions and data that model the behavior of things, real or imagined, in the world. One of the central concepts of objects is encapsulation—the attributes that collectively define the object (or its state) are accessible only through methods defined for that object.
Objects interact with each other by passing messages back and forth to invoke other objects’ methods. The sending object is known as the client and the receiving object is known as the server. The client requests and the server responds, and often in the course of conversation the client and server roles alternate between objects.
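The encapsulation idea can be sketched in a few lines of Python. The `Account` class, its method names, and the amounts are all hypothetical, chosen only to illustrate state that is reachable solely through an object's methods:

```python
class Account:
    """Toy object: its state (_balance) is reachable only through methods."""

    def __init__(self, opening=0):
        self._balance = opening          # encapsulated state

    def deposit(self, amount):           # a "message" the object accepts
        if amount < 0:
            raise ValueError("amount must be non-negative")
        self._balance += amount

    def balance(self):                   # even reads go through a method
        return self._balance


acct = Account(100)
acct.deposit(50)                         # client sends a message; object responds
print(acct.balance())                    # prints 150
```

The caller plays the client role and the object the server role; nothing outside the class touches `_balance` directly.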
One of the hidden powers behind UNIX is the file metaphor. In UNIX, devices such as printers, tape drives, keyboards, and terminal screens all appear as ordinary files to both programmers and regular users. This simplifies many routine tasks and is a key component in the extensibility of the system.
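The file metaphor can be seen directly from a program: the same system calls operate on an ordinary file and on a device node. A minimal Python sketch, assuming a POSIX system where `/dev/null` exists; the file name `demo.txt` is hypothetical:

```python
import os

def write_bytes(path, data):
    """Open, write, close -- identical calls whether path is a file or a device."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        return os.write(fd, data)        # number of bytes written
    finally:
        os.close(fd)

print(write_bytes("demo.txt", b"hello"))    # ordinary file: prints 5
print(write_bytes("/dev/null", b"hello"))   # device node: prints 5
os.remove("demo.txt")
```

To the program, both targets are simply files, which is exactly the uniformity the metaphor provides.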
Windows NT advances this metaphor using objects. The object metaphor is pervasive throughout the architecture of the system. Not only are all of the things in the UNIX file metaphor viewed as objects by Windows NT, but so are things such as processes and threads, shared memory segments, and access rights. Windows NT is not an object-oriented system in the strictest sense of the term, but it does use objects to represent internal system resources.
Multitasking, multiprocessing, and multithreading are three terms that are closely related and easily confused. Multitasking is an operating system technique for sharing a single processor among multiple paths of execution. Under multitasking, the fundamental unit for a path of execution is the process, and various processes take turns using the single processor. Each process has its own variables and data in memory, and the instructions are executed serially.
Multithreading is a technique that allows a single process to be broken into multiple paths of execution, or threads. Under multithreading, the fundamental unit for a path of execution is the thread. Threads belonging to the same process share the same variables and data in memory, and although each thread can be executed before, after, or simultaneously with other threads, techniques exist to allow the threads to be synchronized when required. This allows for a savings of system resources: Instead of starting a separate process for each task, in many cases only a new thread is required, forgoing the requirement for additional memory.
Multiprocessing, on the other hand, refers to computers with more than one processor. For example, systems with only a single processor are sometimes referred to as uniprocessor systems. A multiprocessing computer is one that is able to execute multiple threads simultaneously, one for each processor in the computer. Multithreading is required to take full advantage of a multiprocessing computer, but multithreading does not require multiprocessing to be implemented in the hardware.
A multitasking operating system appears to execute multiple threads at the same time; a multiprocessing operating system actually does it, executing one thread on each of the computer’s processors.
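The distinction above can be made concrete with a short sketch using Python's `threading` module: the threads share the process's variables, so a lock is needed to keep the shared counter consistent. The counter, worker function, and iteration counts are illustrative:

```python
import threading

counter = 0
lock = threading.Lock()                  # synchronization primitive

def worker(n):
    global counter
    for _ in range(n):
        with lock:                       # threads share the process's data,
            counter += 1                 # so updates must be synchronized

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                             # wait for all four paths of execution
print(counter)                           # prints 40000
```

On a multiprocessing machine these threads may genuinely run at once; on a uniprocessor the operating system interleaves them, but the program is written the same way either way.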
Multiprocessing systems can be either tightly or loosely coupled, depending on the degree to which resources such as memory and I/O are shared between processors. Loosely coupled MP systems have few shared resources but can scale to hundreds of processors. Tightly coupled systems usually share memory, I/O, and even power supplies. These are most commonly seen in general-purpose workstations and servers today.
Another distinction in MP systems architecture is asymmetric or symmetric. The main difference is in how the processors are used. In asymmetric multiprocessing (ASMP), one or more processors is set aside for exclusive use by the operating system or for specific functions such as I/O processing, and the remainder of the processors run user applications. In symmetric multiprocessing (SMP), any processor can run any type of thread. The processors communicate with each other through shared memory. SMP systems are more commonly seen in general-purpose workstations and servers today.
SMP systems provide better load balancing and fault tolerance. Because operating system threads can run on any and all processors, the chance of hitting a CPU bottleneck is greatly reduced compared with the ASMP model. A processor failure in the SMP model only reduces the computing capacity of the system; in the ASMP model, it can easily take down the whole computer if the failed processor is one of those reserved for the operating system. SMP systems do have limitations, however. Adding processors is subject to the law of diminishing returns: as processors are added, the performance improvement per processor decreases, because overall performance is limited by factors other than raw processing power. As the number of processors increases, the speed with which they can all access the shared memory becomes critical. Input/output is another limiting factor.
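A program can ask the operating system how many processors an SMP machine exposes and size its work accordingly. A hedged sketch using Python's standard library; the one-thread-per-processor sizing is a common pattern, not a rule:

```python
import os
import threading

ncpu = os.cpu_count() or 1               # logical processors the OS exposes
print(f"{ncpu} logical processor(s)")

# Common sizing pattern: one worker thread per processor, so that on an
# SMP machine every processor has a runnable thread available.
seen = []
workers = [threading.Thread(target=seen.append, args=(i,)) for i in range(ncpu)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(seen))
```

As the text notes, doubling `ncpu` rarely doubles throughput: shared memory and I/O become the bottleneck well before processing power does.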
SMP operating systems, such as Windows NT and many flavors of UNIX, are therefore inherently more complex than uniprocessor ones. There is a tremendous amount of coordination that must take place within the operating system to keep everything synchronized. For this reason, SMP operating systems are best if designed and written as such from the ground up. As noted earlier, Windows NT was designed with SMP capability as a primary requirement.
The Windows NT Executive is the kernel-mode portion of Windows NT and, except for a user interface, is a complete operating system unto itself. Its microkernel design differs from that of older monolithic UNIX kernels, and, in another departure, the Windows NT Executive is never modified or recompiled by the system administrator.
The Windows NT Executive is actually a family of software components that provide basic operating system services to the protected subsystems and to each other. The Executive components are completely independent of one another and communicate through carefully controlled interfaces. This modular design allows existing Executive components to be removed and replaced with ones that implement new technologies or features. As long as the integrity of the existing interface is maintained, the operating system runs as before.
Windows NT Executive and its components
Detailed below is a description of the components in the Windows NT Executive.
The Object Manager creates, manages, and deletes Windows NT Executive objects. Executive objects are abstract data types used to represent operating system resources such as shared memory segments, files and directories, and processes and threads. The Object Manager manages the global namespace for Windows NT, which is modeled after the hierarchical file system, in which directory names in a path are separated by a backslash (\). As with other Windows NT components, the Object Manager is extendible and modular so that new object types can be defined as the technology advances.
A program is a static sequence of computer instructions. A process is the dynamic invocation of a program along with the system resources needed for the program to run. A Windows NT-based process is not an executable entity, whereas a UNIX process is. A Windows NT-based process contains one or more executable entities known as threads, and it is these threads, not the process, that the Windows NT kernel schedules for execution. Remember that Windows NT supports symmetric multiprocessing, which is most effective when used with a multithreaded operating system.
The process model for Windows NT works in conjunction with the security model and the Virtual Memory Manager to provide interprocess protection. The Process Manager is the Windows NT-based component that manages the creation and deletion of processes. It provides a standard set of services for creating and using threads and processes in the context of a particular protected-subsystem environment. Beyond that, the Process Manager does little to dictate rules about threads and processes. Unlike UNIX, it does not impose any hierarchy or grouping rules for processes, nor does it enforce any parent/child relationships. Instead, these design decisions are left for the protected subsystems to implement. For example, the parent/child relationship that exists between UNIX processes is implemented in the POSIX protected-subsystem of Windows NT.
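The UNIX parent/child relationship mentioned above can be sketched with `fork` and `wait`, a minimal example assuming a POSIX system (Python's `os.fork` is not available on Windows); the function name and exit code are hypothetical:

```python
import os

def run_child(code):
    """Fork a child, let it exit with `code`, and collect its status."""
    pid = os.fork()
    if pid == 0:                     # child branch of the fork
        os._exit(code)               # exit status the parent can collect
    _, status = os.waitpid(pid, 0)   # parent waits on its own child
    return os.WEXITSTATUS(status)

print(run_child(7))                  # prints 7
```

On Windows NT this hierarchy is not a property of the Process Manager; as the text notes, it is the POSIX protected subsystem that supplies fork-style parent/child semantics.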
Both Windows NT and most UNIX versions implement 32-bit linear memory addressing and demand-paged virtual memory management. Microsoft is planning to incorporate 64-bit memory addressing in future versions of Windows NT. Except in the most unusual and specialized applications, however, the 4-gigabyte (GB) limit is sufficient.
Microsoft plans to support 64-bit data in the Windows NT operating system in order to meet the needs of customers who require efficient access to extremely large databases. This functionality is targeted for availability in the Windows NT 5.0 time frame and will be supported initially on Digital Equipment Corporation's Alpha platforms.
Windows NT already supports applications up to 2 GB in size, more than sufficient for most high-end business solutions. Some applications, however, especially extremely large databases, can benefit from 64-bit very large memory (VLM) support, with which data of virtually any size can be mapped directly into addressable memory. Examples are a back-end system for processing millions of credit card transactions per day, and a worldwide airline reservation system.
Under Windows NT, each process is allocated a 4-GB (2^32 bytes) virtual address space: 2 GB for the application, with the remaining 2 GB reserved for the system. The Virtual Memory Manager maps virtual addresses in the process’s address space to physical pages in the computer’s memory. In doing so, it hides the physical organization of memory from the process’s threads. This ensures that a thread can access its own process’s memory as needed, but not the memory of other processes.
Most machines today have less than 4 GB of physical memory. When physical memory becomes full, the Virtual Memory Manager transfers, or pages, some of the memory contents to disk. Windows NT and many UNIX systems share a common page size of 4 KB. The process that the Virtual Memory Manager uses to determine which pages to move to disk is referred to as the paging policy.
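A program can query the page size the kernel actually uses and observe that virtual memory is handed out in page-sized units. A sketch assuming a POSIX system exposing `sysconf`; 4096 bytes (4 KB) is the common value, but the page size is architecture-dependent:

```python
import mmap
import os

page = os.sysconf("SC_PAGE_SIZE")        # page size the kernel uses
print(page)                              # commonly 4096 (4 KB)

# Virtual memory is allocated in whole pages: an anonymous mmap of one
# page gives the process a fresh page of zeroed memory.
buf = mmap.mmap(-1, page)
buf[:5] = b"hello"
print(bytes(buf[:5]))                    # prints b'hello'
buf.close()
```

The mapping from this page's virtual address to a physical page is exactly what the Virtual Memory Manager (or the UNIX VM subsystem) maintains on the program's behalf.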
The primary goal of any paging policy is to allow as many processes or threads as possible to use the machine’s memory without adversely impacting the overall performance of the system. It is a very fine balancing act. On one side you have wasted machine resources; on the other you have a situation in which the memory manager uses up most of the CPU cycles by swapping things back and forth from memory to disk, a condition known as thrashing.
The Windows NT Virtual Memory Manager uses a paging policy known as local first in, first out (FIFO) replacement. With local FIFO replacement, the Virtual Memory Manager must keep track of the pages currently in memory for each process. This set of pages is referred to as the process’s working set.
One of the most important features of this paging policy is that it enables Windows NT to do some unattended performance tuning. When physical memory runs low, the Virtual Memory Manager uses a technique called automatic working-set trimming to increase the amount of free memory in the system. Roughly speaking, this is a process whereby the Virtual Memory Manager attempts to equitably allocate memory to all of the processes currently on the machine. As it cuts the amount of memory to each process, it also monitors their page-fault rates and works to strike a balance between the two.
Applications and the protected subsystems have a client/server relationship. The application (client) makes calls to the protected subsystem (server) to satisfy a request for some type of system service. In general, clients and servers communicate with each other through a series of well-defined messages. This is known as Inter-Process Communications (IPC), and can take the form of either Local Procedure Call (LPC) or Remote Procedure Call (RPC).
When the client and server are both on the same machine, the Windows NT Executive uses a message-passing mechanism known as the Local Procedure Call (LPC) facility. LPC is an optimized version of the industry-standard Remote Procedure Call (RPC) facility that is used by clients and servers communicating across networks.
For calls to remote servers, Microsoft has implemented a DCE-compatible Remote Procedure Call (RPC) interface. Clients running Microsoft’s RPC can communicate with DCE-compliant servers, including traditional UNIX environments such as AIX, Digital UNIX, HP-UX, Solaris, and many others.
This RPC support allows a user at a Microsoft workstation to access a variety of disparate information sources. Conversely, Microsoft servers talk to DCE-compliant clients, so that business systems on Microsoft servers can provide DCE services to a variety of clients in UNIX or other environments.
The I/O Manager is the part of the Windows NT Executive that manages all input and output for the operating system. It is made up of a series of subcomponents such as the file systems, the network redirector and server, the system device drivers, and the cache manager. A large part of the I/O Manager’s role is to manage the communications between drivers. To simplify the task, it implements a well-defined, formal interface that allows it to communicate with all drivers in the same way, without any knowledge of how the underlying devices actually work. In fact, the I/O model for Windows NT is built on a layered architecture that allows separate drivers to implement each logically distinct layer of I/O processing. The I/O Manager is the Windows NT Executive component that makes the most use of the software isolation layers mentioned above in the Open Systems discussion.
In addition to the uniform driver model, the I/O Manager works with other Windows NT Executive components, most notably the Virtual Memory Manager, to provide asynchronous I/O, mapped file I/O, and file caching. The latter bears special mention. File caching in Windows NT is controlled by a subcomponent of the I/O Manager called the Cache Manager. While most caching systems allocate a fixed number of bytes for caching files in memory, the Windows NT cache dynamically changes size depending on how much memory is available. This load-balancing feature is provided by the Cache Manager and is another example of automatic self-tuning within Windows NT.
In a multitasking operating system, applications share a variety of system resources, including physical memory, I/O devices, files and directories, and the system processor(s). Applications must have proper authorization before being allowed to access any of these resources, and it is the Security Reference Monitor, in conjunction with the Logon Process and Security protected subsystems, that enforces this policy. Together, these components form the security model for Windows NT.
The Security Reference Monitor acts as the watchdog, enforcing the access-validation and audit-generation policy defined by the local Security protected subsystem. It provides run-time services to both kernel-mode and user-mode components for validating access to objects, checking for user privileges, and generating audit messages. As with other components in the Windows NT Executive, the Security Reference Monitor runs exclusively in kernel mode.
Under Windows NT, a finer-grained set of access permissions is incorporated into the Windows NT File System (NTFS). Permissions can also be enforced at the “share” level. (A share is analogous to an NFS export.) These permissions can be specified for any number of users and groups, rather than the basic owner/group/all sets seen in UNIX. The types of permissions that can be enforced are discussed later in this document.
While further levels of security can be added to UNIX with ACLs and DCE, Windows NT incorporates these features in the basic system. By using domains and trust relationships, complex and effective enterprise-wide security policies can be implemented.
The Kernel is at the core of the layered architecture for Windows NT and manages only the most basic of the operating system functions. The microkernel design enables this component to be small and efficient. The Kernel is responsible for thread dispatching, multiprocessor synchronization, and hardware exception handling.
The Hardware Abstraction Layer (HAL) is an isolation layer of software provided by the hardware manufacturer that hides, or abstracts, hardware differences from higher layers of the operating system. Because of the HAL, the different types of hardware all look alike to the operating system, removing the need to specifically tailor the operating system to the hardware with which it communicates. The goal for the HAL was to provide routines that allow a single device driver to support the same device on all platforms.
HAL routines are called from both the base operating system (including the Kernel) and from the device drivers. The HAL enables device drivers to support a wide variety of system architectures (for example, x86, Alpha, PowerPC) without having to be extensively modified. The HAL is also responsible for hiding the details of symmetric multiprocessing (SMP) hardware from the rest of the operating system.
The protected subsystems are user-mode servers that are started when Windows NT is booted. There are two types of protected subsystems: integral and environment. An integral subsystem is a server that performs an important operating system function, such as security. An environment subsystem is a server that provides support to applications native to different operating system environments. Windows NT currently ships with three environment subsystems: the Win32® subsystem, the POSIX subsystem, and the OS/2 subsystem.
Conceptual View of Windows NT Protected Subsystems
The Win32 subsystem is the “native-mode” subsystem of Windows NT. It provides the most capabilities and efficiencies to its applications and, for that reason, is the subsystem of choice for new software development. The POSIX and OS/2 subsystems provide “compatibility-mode” environments for their respective applications and, by definition, are not as feature-rich as the Win32 subsystem.
The Win32 subsystem is the most critical of the Windows NT environment subsystems. It provides the graphical user interface and controls all user input and application output. It is the server for Win32-based applications and implements the Win32 API. Not all applications are Win32-based, and the Win32 subsystem does not control the execution of non-Win32-based applications. It does, however, get involved. When the user runs an application that is foreign to the Win32 subsystem, it determines the application type and either calls another subsystem to run the application or creates an environment for MS-DOS or 16-bit Windows to run the application.
The Win32 subsystem also handles all display input/output for the OS/2 and POSIX subsystems.
The subsystems for MS-DOS and 16-bit Windows run in user-mode in the same way the other environment subsystems do. However, unlike the Win32, POSIX, and OS/2 subsystems, they are not server processes, per se. MS-DOS-based applications run within the context of a process called a Virtual DOS Machine (VDM). A VDM is a Win32-based application that establishes a complete virtual computer running MS-DOS. The 16-bit Windows environment is a hybrid application, one that runs within the context of a VDM process, but calls the Win32 API to do most of its work.
Creating environments for MS-DOS and 16-bit Windows as user-mode subsystems affords them the same protection that the other subsystems have. They cannot interfere with the operation of each other, the other protected-subsystems, or the Windows NT Executive.
POSIX, which stands for Portable Operating System Interface for UNIX, began as an effort by the IEEE community to promote the portability of applications across UNIX environments by developing a clear, consistent, and unambiguous set of standards. POSIX is not limited to the UNIX environment and has been implemented on non-UNIX operating systems, such as Windows NT, VMS, MVS, MPE/iX, and CTOS.
POSIX actually consists of a set of standards that range from IEEE 1003.0 to 1003.22 (also known as POSIX.0 to POSIX.22). These standards deal with many subjects, including shell interface, utilities, language binding, and real-time extensions. However, not all of these standards are widely implemented. These POSIX standards are all based on specifications for which there is no binary reference implementation. The POSIX subsystem on Windows NT supports 1003.1, which is also known as the international ISO/IEC IS 9945-1:1990 standard. This standard defines a C-language, source-code-level API to the operating system environment.
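As a point of reference, POSIX.1 defines C-language calls such as open(), write(), read(), and close(). Python's os module exposes thin wrappers around these same interfaces, so the flavor of the API can be sketched as follows (the file name is arbitrary):

```python
import os
import tempfile

# POSIX.1 defines C calls such as open(2), write(2), read(2), and close(2);
# the os module wraps those calls almost one-for-one.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # like C open()
os.write(fd, b"hello posix")                          # like C write()
os.close(fd)                                          # like C close()

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                               # like C read()
os.close(fd)

assert data == b"hello posix"
```

The Windows NT POSIX subsystem provides this same source-code-level API to C programs, which is what allows conforming applications to be recompiled for Windows NT.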
The OS/2 subsystem supports 16-bit graphical and character-based applications. It provides these applications with an execution environment that looks and acts like a native OS/2 system. Internally, the OS/2 subsystem calls the Windows NT Executive to do most of the work, because the Windows NT Executive services provide general-purpose mechanisms for doing most operating system tasks. However, the OS/2 subsystem implements those features that are unique to its operating environment.
Microsoft builds products that are world-ready; Windows NT is no exception. There are currently more than 40 international versions of Windows NT. A process called localization is used to create these different versions.
When installing Windows NT, the user selects a language and is assigned a default locale. The default locale gives the culturally correct defaults for keyboard layout, sorting order, currency, and date and time formatting. Of course, these defaults can be overridden by the user.
At its most basic level, a locale consists of a language, a country, and the binary codes used to represent the characters of a particular language. The latter is referred to as the code set. The United States has traditionally adopted the ASCII standard for representing data. However, ASCII is woefully inadequate for some other countries because it lacks many of their common symbols and punctuation. For example, the British pound sign and the diacritical marks used in French, German, Dutch, and Spanish are missing.
To address these shortcomings, Windows NT employs the new Unicode standard for data representation. Unicode is a de jure standard for encoding international character sets. It was developed by the Unicode Consortium, a group of vendors including Microsoft, IBM, Borland, and Lotus. Unicode separates the “essence” of a character from the font and formatting information used to display it. It employs a 16-bit character coding scheme, which means that it can represent 65,536 (2^16) individual characters. This is enough to include all languages in computer commerce today, several archaic or arcane languages with limited applications (such as Sanskrit and, eventually, Egyptian hieroglyphics), all punctuation marks, mathematical symbols, and other graphical characters. With all of this, there is still plenty of room for future growth.
Unicode is the native code set of Windows NT, but the Win32 subsystem provides both ASCII and Unicode support. Character strings in the system, including object names, path names, and file and directory names, are represented with 16-bit Unicode characters. The Win32 subsystem converts any ASCII characters it receives into Unicode strings before manipulating them. It then converts them back to ASCII, if necessary, for output.
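The ASCII-to-Unicode conversion performed by the Win32 subsystem is simple to sketch: each 7-bit ASCII code maps to the 16-bit Unicode code unit with the same value. The helper below is illustrative, not the actual Win32 conversion routine:

```python
def ascii_to_utf16_units(s):
    """Widen each ASCII character to a 16-bit code unit (illustrative only)."""
    assert all(ord(c) < 128 for c in s), "input must be pure ASCII"
    # ASCII code points are unchanged; they are simply stored in 16 bits.
    return [ord(c) for c in s]

units = ascii_to_utf16_units("NT")
assert units == [0x004E, 0x0054]

# Python's codecs perform the same widening, emitting two bytes per character:
assert "NT".encode("utf-16-le") == b"N\x00T\x00"
```

The reverse conversion, for output, simply narrows each 16-bit unit back to a byte when the value fits in the ASCII range.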
Localization is only one part of the effort that goes into ensuring that an operating system can be used effectively in a worldwide environment. A world-ready operating system must also provide services to support the use of international applications and to support the global market by making the application developer’s job easier. For example, some of the language issues that international users and application developers face are:
Microsoft is incorporating international language support at the operating system and API level. Built-in, international language support adds functionality that provides solutions for developing and using software and exchanging documents around the world.
Windows NT is a complete operating system with fully integrated networking, including built-in support for multiple network protocols. These include NetBEUI, NWLink (IPX/SPX compatible), DLC, AppleTalk, and TCP/IP. The TCP/IP implementation is robust, complies with relevant IETF and IEEE standards, and includes DNS and routing features.
Windows NT offers built-in support for both peer-to-peer and client/server networking. It provides interoperability with and remote dial-in access to existing networks, support for distributed applications, file and print sharing, and the ability to easily add networking software and hardware.
The International Organization for Standardization (ISO) developed a seven-layer, theoretical model called the Open Systems Interconnection (OSI) reference model. It is used to describe the flow of data between the physical connection to the network and the user application. The model is the best-known and most widely used model to describe networking environments, and we will use it as the framework to discuss the networking components for Windows NT. The layers are Physical, Data Link, Network, Transport, Session, Presentation, and Application.
In the OSI model, the purpose of each of the seven layers is to provide services to the next higher layer, shielding the higher layer from the details of how the services are actually implemented. The layers are abstracted in such a way that each layer believes it is communicating with the same layer on the other computer. In reality, each layer communicates only with adjacent layers on the same machine.
Layer 0, which is not officially a layer in the OSI model, is commonly used to define the underlying transmission media, such as cables or fiber, that interconnect each of the computers on the network. Layer 0 is known as the Media Layer.
The Network Adapter or Network Interface Card (NIC) connects the internal communication bus of the computer with the external network. It acts as a bridge between the Media Layer (Layer 0) and the Physical Layer (Layer 1) in the OSI model. Windows NT views the NIC as a peripheral device and controls it through a device driver.
The IEEE 802 project further defined sublayers of the Data Link Layer (Layer 2) in the OSI model. The two sublayers are the Media Access Control (MAC) and the Logical Link Control (LLC). The MAC sublayer communicates directly with the NIC and is responsible for delivering error-free data between two computers on the network.
In 1989, Microsoft and 3Com jointly developed a specification defining an interface for communication between the MAC sublayer and protocol drivers higher in the OSI model. This standard is known as the Network Device Interface Specification (NDIS), and is a key isolation layer of software. NDIS isolates the details of the NIC from the transport protocols and vice versa.
The transport protocols reside primarily in the Network Layer (Layer 3) and the Transport Layer (Layer 4) of the OSI model and communicate with the NIC(s) through an NDIS-compliant device driver. This includes the five protocols previously mentioned, which are included with Windows NT.
STREAMS was originally developed by AT&T for UNIX System V, Release 3.2. It is an isolation layer of software that wraps around STREAMS-based transport modules. STREAMS has the feature that different modules provide separate interfaces for upstream and downstream traffic. This facilitates the creation of specialized low-level communications applications.
The STREAMS environment allows the many STREAMS-based transport protocol drivers that already exist to be plugged in to Windows NT with little or no modification. New transport protocol drivers, however, should be written to the newer, more versatile Transport Driver Interface.
The Transport Driver Interface (TDI) is a new network API developed for use with Windows NT. Although not yet widely adopted, it does provide a sophisticated 32-bit API, which can take advantage of Windows NT features such as security. The TDI falls at another point in the OSI model, namely in the Session Layer (Layer 5). The TDI specification defines the upper bounds to which all transport protocol device drivers are written. It enables a single version of a session-layer component, such as a network redirector or server, to use any available transport mechanism loaded on the machine, for example, TCP/IP or IPX/SPX.
In network programming, a socket provides an endpoint to a connection; two sockets form a complete path. A socket works as a bidirectional pipe for incoming and outgoing data between networked computers.
Windows Sockets (WinSock), a session-layer interface, is a de facto standard for Windows-based network programming. Version 1.1 of Windows Sockets was developed by a group of 30 vendors, including Microsoft, and released in January 1993. This original version is compatible with the UC Berkeley (BSD) Sockets APIs, which are a de facto standard for UNIX network programming. Version 1.1 provided independence from the underlying TCP/IP protocol stack. As long as the TCP/IP stack was WinSock-compliant, an application written to the WinSock APIs would run on it.
Version 2.0 of Windows Sockets provides true transport protocol independence by extending support to additional protocols, including IPX/SPX, DECnet, and OSI. It has also been extended to support additional network technologies, such as ATM, wireless, and telephony.
Microsoft supports the proposed Secure Sockets Layer (SSL) addition to the WinSock version 2 API. This will provide secure communications between client and server applications using WinSock SSL.
The Network Basic Input/Output System (NetBIOS) is a session-layer interface similar in function to Windows Sockets. It is used by applications to communicate with NetBIOS-compliant transports such as NetBEUI Frame (NBF), NWLink, and TCP/IP via the TDI. The NetBIOS interface is responsible for establishing logical names on the network, establishing a connection between any two of those names, and supporting reliable data transfer between computers once the connection has been established. The network redirector is an example of a NetBIOS application.
The redirector is an integral subsystem that forms a key component of the network architecture for Windows NT. It is the network component responsible for sending, or redirecting, I/O requests across the network when the file or device to be accessed is not on the local machine. The redirector is implemented as a service called “Workstation.” Under Windows NT, multiple redirector/server pairs can execute concurrently on each system, enabling transparent, multiserver access.
The redirector allows applications to be written to a single API; much as in UNIX, applications are unaware of whether a file is local or remote. The service runs in privileged mode so that it can call other drivers directly, improving performance. It can be loaded and unloaded dynamically and can coexist with other redirectors, such as CSNW for connecting to Novell NetWare-based servers.
The redirector service may be likened to the biod daemons for NFS in UNIX. However, the analogy should not be carried too far, because only one redirector service is required for each type of server to be accessed.
The server is another integral subsystem in the Windows NT architecture. It is the network component on the remote machine that entertains connection requests from the client-side redirectors and provides them with access to the desired resources. It is implemented as a service called simply “Server” and can also be loaded and unloaded dynamically.
Both Redirector and Server reside above the TDI and are implemented as file system drivers. This transparent resource access is, in many ways, similar to the functionality provided by remote file systems under UNIX, such as the Network File System (NFS) and the Andrew File System (AFS).
The server service may be likened to the nfsd daemon for NFS in UNIX. Since the service is multithreaded, only one process needs to be started.
The Universal Naming Convention (UNC) is a naming convention for describing network servers and share points on those servers. UNC names start with two backslashes followed by the server’s computer name. All other fields in the name are separated by a single backslash. After the computer name comes the share name, followed by optional subdirectory and file names.
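The convention can be sketched with a small parser; the server and share names in the example are hypothetical:

```python
def parse_unc(name):
    """Split a UNC name into (server, share, path); the path portion may be empty."""
    if not name.startswith("\\\\"):
        raise ValueError("UNC names start with two backslashes")
    parts = name[2:].split("\\")
    if len(parts) < 2:
        raise ValueError("UNC names require a server name and a share name")
    # server, share, then any optional subdirectory and file names
    return parts[0], parts[1], "\\".join(parts[2:])

assert parse_unc(r"\\fileserv\public\docs\readme.txt") == (
    "fileserv", "public", "docs\\readme.txt")
assert parse_unc(r"\\fileserv\public") == ("fileserv", "public", "")
```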
The task of the Multiple UNC Provider (MUP) is to assist network requests that use the UNC to specify the destination server. When a request containing a UNC name is received by the MUP, it negotiates with the various redirector services to determine which one can process the request.
There is some likeness between the MUP and the UNIX process of using DNS to resolve a domain name, then invoking the automount process to access the file system. The crucial difference is that MUP directs requests for different types of servers (Windows NT, Novell NetWare, and so on), while DNS searches only for server names. Furthermore, no mount is ever required to access a resource by UNC.
The Multi-Provider Router (MPR) provides a communication layer between applications that make network calls using the Microsoft Win32 API and the redirector services. Not all programs make requests using UNC names; many make calls using the WNet API, which represents the network-call subset of the Win32 API.
MPR functions very much like MUP in that it takes application requests and passes them on to a specific redirector. The difference is that MPR services WNet calls rather than UNC requests.
Though not compatible with UNIX Named Pipes, Windows NT Named Pipes are conceptually similar. Named pipes provide a high-level interface for passing data between two processes, regardless of network location. Named pipes, like files, are implemented as file objects in Windows NT and operate under the same constraints and security mechanisms as other Windows NT Executive objects. The named pipe file system driver is a pseudo-file system that stores pipe data in memory and retrieves it on-demand. When processing local or remote named pipe requests, it functions like an ordinary file system.
The Remote Procedure Call (RPC) facility is the backbone of true distributed computing and is rapidly becoming the interprocess communication (IPC) method of choice for software developers. Much of the original design work for an RPC facility was started by Sun Microsystems. It has continued with the Open Software Foundation (OSF) as a core part of their Distributed Computing Environment (DCE) standard.
Microsoft RPC is compatible with the OSF DCE RPC. Rather than license the OSF code, which raised cost and performance concerns, Microsoft developed its own multithreaded RPC facility. The Microsoft RPC facility is completely interoperable with other DCE-based RPC systems, such as those from Hewlett-Packard and IBM.
The RPC facility is unique because it relies on other IPC mechanisms to transfer functions and data between the client and the server. In the case of Windows NT, RPC can use named pipes, NetBIOS, or Windows Sockets to communicate with remote systems, and the LPC facility to communicate with processes on the local machine. This IPC independence makes RPC the most flexible and portable of the IPC mechanisms for Windows NT.
Microsoft Internet Information Server is a native implementation of the current Internet standard for Web servers, HTTP 1.0, and includes the Internet-standard FTP and Gopher services. Moreover, Internet Information Server is fully integrated into Windows NT Server, making it the most secure and easiest-to-manage Web server on Windows NT Server. It includes the broadest set of tools to build an intranet:
A router is a device that receives network packets from a source and routes them to their destination using the shortest path available, thereby optimizing network routing performance.
Microsoft Windows MultiProtocol Routing service (MPR) is a Windows NT Server 4.0 and Windows NT Server 3.51 service that enables small and medium organizations to deploy Windows NT Server as a low-cost LAN-LAN routing solution, eliminating the need for a dedicated router.
In addition, customers who are transitioning from NetWare to Windows NT Server are now able to replace their existing NetWare-based LAN-LAN routers with Windows NT Server running this service.
Windows NT Server already provides routing support for remote users to a LAN environment and LAN-LAN routing support for AppleTalk networks. With Windows NT Server MPR service, the LAN-LAN routing support is now enhanced for TCP/IP and SPX/IPX networks. Like other routing solutions, Windows NT Server MPR service will provide a WAN routing solution when used with additional network cards.
Under Windows NT, disks are partitioned to form file systems, which can be of three different types. Using the disk administrator, Windows NT can create partitions with the following types of file systems:
Also, Windows NT can recognize and use HPFS file systems created under OS/2.
A computer running Windows NT can use one or more of these file systems on its disks and partitions. This is referred to as multiple active file systems. Our interest in this paper is in the NTFS file system, because it most closely parallels features found in UNIX file systems.
When an NTFS file system is created, the Disk Administrator offers several advanced features. Many of them closely parallel the logical volume manager found on modern UNIX systems. They include:
NTFS is a journaling file system with fast file recovery. Journaling file systems are based on the transaction-processing concepts found in database theory. Internally, NTFS more closely resembles a relational database than a traditional file system. It is comparable in function to the Veritas file system found on some UNIX implementations.
NTFS was designed to provide recoverability, security, and fault tolerance through data redundancy. In addition, support was built in to NTFS for large files and disks, Unicode-based names, bad-cluster remapping, multiple data streams, general indexing of file attributes, and POSIX. All of these contribute to making NTFS an extremely robust file system.
Fault tolerance is the ability of a system to continue functioning when part of the system fails. The expression fault tolerance is typically used to describe disk subsystems, but it can also apply to other parts of the system or the entire system.
Fault-tolerant disk systems are standardized and categorized in seven levels known as Redundant Arrays of Inexpensive Disks (RAID) level 0 through level 6. The RAID levels are somewhat loosely defined, and details on performance or disk use vary from one configuration to the next. Depending on the implementation, definitions may overlap or be combined. Each level offers various mixes of performance, reliability, and cost.
The major difference between RAID and earlier, more expensive large-disk technologies (also called Single Large Expensive Disks, or SLED) is that RAID combines multiple disks with lower individual reliability ratings to reduce the total cost of storage. The lower reliability of each disk is offset by the redundancy.
This strategy is commonly known as disk striping without parity. Data is divided into blocks and spread sequentially among all of the disks in the array. RAID level 0 enhances disk performance most effectively when data is striped across multiple controllers with only one drive per controller. RAID level 0 does not provide redundancy. For that reason, it is not considered to be true RAID.
This strategy, RAID level 1, is commonly known as disk mirroring, disk duplexing, or disk shadowing. It provides an identical twin for a selected partition; all data written to the primary disk is written to the twin or mirrored partition. Some hardware implementations can improve read performance by reading from both drives, but write performance is the same as a single disk. This strategy provides the best performance when a member fails, but it is also the most expensive to implement due to the 100 percent redundancy factor.
This strategy, RAID level 2, is commonly known as Hamming Code ECC. This method achieves redundancy with ECC (Error Correcting Code). It employs a disk-striping strategy that breaks a file into bytes and spreads it across multiple disks. At the same time, the ECC data is spread across multiple check disks. In the event of data loss, the ECC information can be used to reconstruct the lost data. Due to its complexity and cost, this scheme has seen little acceptance.
This strategy, RAID level 3, is the first in the series of levels that provide disk striping with parity. It employs striping, but instead of ECC, this method uses a parity-checking scheme that requires only one disk on which to store the parity information. In this type of RAID, the parity disk is dedicated to that function, so the ratio of data to parity disks is fixed. Read performance is high for large sequential transfers, but transaction performance is no better than that of a single disk, because every write must update the single parity disk.
This strategy, RAID level 4, is similar to level 3 in that it stores the parity information on a separate check disk. Where it differs is in the striping: this method stripes data in much larger chunks. It has seen little acceptance due to its poor transaction rate.
This strategy, RAID level 5, has become the most popular for recent fault-tolerance designs. Like level 4, it stripes data in big chunks. Unlike level 4, it does not use a separate check disk for parity information. Rather, it stripes the parity across the disks as well. The data and parity information are arranged on the disk array so that the two are always on different disks.
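The parity arithmetic behind the striping-with-parity levels is a simple XOR across the stripe: the parity block is the XOR of the data blocks, and XORing the surviving blocks regenerates a lost one. A minimal sketch (the block contents are illustrative):

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together; used both to compute parity
    and to reconstruct a lost block from the survivors."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks plus one parity block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# If the disk holding d1 fails, XOR of the survivors reconstructs it.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1
```

This is also why a single parity set can survive only one drive failure at a time: with two blocks missing, the XOR of the survivors no longer determines either one.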
This strategy, RAID level 6, is essentially the same as level 5 with the addition of a second parity set. By maintaining dual parity sets, level 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures. It has seen little acceptance due to its poor transaction rate.
Sometimes called level 1+0, this strategy combines the performance of level 0 with the reliability of level 1 by striping data across mirrored disks. This comes with a high price tag due to the 100 percent redundancy.
Windows NT supports RAID levels 0, 1, and 5 through software (as described previously). In addition, many excellent third-party products are available that implement RAID in hardware with enhanced caching.
An uninterruptible power supply (UPS) provides power when the local power fails. It is usually rated to provide a specific amount of power for a specific period of time. This power comes from batteries that are kept charged while main power is available. The main power is converted from AC voltage to the DC voltage used to charge the battery. When needed, the DC power is converted to an AC voltage compatible with the computer power supply. Usually, all that is needed from a UPS is time to shut down the system in an orderly fashion by terminating processes and closing sessions.
Configuring UPS under Windows NT
Many UPS devices offer the ability to interface with operating systems, enabling the operating system to notify users automatically of the pending shutdown process or to provide notification that the power has been restored and a shutdown is no longer necessary. Windows NT provides an interface for these types of UPS devices through a serial port connection. Communication is handled in much the same way as hardware handshaking is handled on a normal RS-232C connection. Hardware signals are translated into power-state messages, which are then interpreted by Windows NT software.
For example, during a power failure, the UPS service for Windows NT immediately pauses the Server service for Windows NT to prevent any new connections and sends a message to notify users of the power failure. The UPS service then waits a specified interval of time before notifying users to terminate their sessions. If power is restored during the interval, another message is sent to inform users that power has been restored and normal operations have resumed.
The Windows NT Domain should not be confused with an Internet DNS Domain. In DNS, a domain refers to a group of computers that share a common namespace (for example, microsoft.com) under TCP/IP. In Windows NT, a domain is a group of servers running Windows NT Server that share common security policy and user account databases. Therefore, the Windows NT Domain is the basic unit of security and centralized administration for Windows NT, and the servers in the domain, in some ways, can be viewed as a single system.
One computer running Windows NT Server acts as the Primary Domain Controller (PDC), which maintains the centralized security databases for the domain. Other computers running Windows NT Server in the domain can function either as backup domain controllers (BDCs) or as ordinary servers. User logon requests in a Windows NT Domain are authenticated by the PDC or by a BDC. BDCs contain copies of the user account information and are available to authenticate users when they log on to the domain. BDCs also provide authentication fault tolerance. If the PDC is down for any reason, BDCs are available to authenticate users and guarantee their access to network resources.
When changes are made to the user account information on the PDC, those changes are replicated to each of the BDCs. The replication process is designed to limit network bandwidth consumption: when the user account information is updated, the changes are sent to a fixed number of BDCs at a time. This number is configurable by the administrator and ensures that updates are staggered rather than broadcast to every BDC at once. Also, the replication process requires only 2 kilobytes to set up the transmission session and a maximum of only 1 kilobyte per user, so minimal network bandwidth is used. For update and backup purposes, all servers and client machines in the Windows NT Server-based network can be synchronized to a single system clock.
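Using the figures above (2 kilobytes of session setup plus at most 1 kilobyte per changed user account), the worst-case traffic for one PDC-to-BDC replication session is easy to estimate. The helper below is illustrative only:

```python
def replication_bytes(changed_users, setup_kb=2, per_user_kb=1):
    """Upper bound on bytes for one PDC-to-BDC update session,
    per the 2 KB setup + 1 KB/user figures cited above."""
    return (setup_kb + changed_users * per_user_kb) * 1024

# Example: replicating changes for 500 user accounts to one BDC
# costs at most (2 + 500) KB = 502 KB on the wire.
assert replication_bytes(500) == 502 * 1024
```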
Again, BDCs share the user authentication processing load with the PDC. Frequently, such as in wide-area networks (WANs), a BDC is physically closer to an individual user's point of logon than the PDC. Therefore, the BDC's ability to authenticate the user reduces both authentication time and network traffic. This is particularly useful in large networks with a single, master user domain. In addition, BDCs provide added system fault tolerance. If the PDC goes down, users can still be authenticated by the BDCs. Further, any BDC can be promoted to PDC so that changes to the directory can still be made and propagated throughout the network.
Domains can also contain server computers that are not running Windows NT Server, and client computers such as those running Windows NT Workstation, Windows 95, Windows for Workgroups, and MS-DOS operating systems.
A key concept in Windows NT Domains is the trust relationship. A trust relationship is a link between two domains that enables a user with an account in one domain to have access to resources in another domain. When you establish a trust relationship between domains, one domain (the trusting domain) trusts the authentication service of the other domain (the trusted domain). Trust relationships are unidirectional. Bidirectional trusts are created with two unidirectional ones. In addition, trust relationships are not transitive. For example, if Domain A trusts Domain B, and Domain B trusts Domain C, Domain A will not trust Domain C by default. If Domain A needs to trust Domain C, a separate trust relationship must be set up between the two domains.
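The two rules above, trusts are unidirectional and never transitive, can be captured in a few lines. This is a minimal illustrative model; the domain names are hypothetical and the real authentication path involves the domain controllers described earlier.

```python
# Minimal model of Windows NT trust relationships, illustrating that
# trusts are unidirectional and non-transitive. Domain names are
# hypothetical.

# Each pair (trusting, trusted) means "trusting trusts trusted".
trusts = {("A", "B"), ("B", "C")}

def can_authenticate(resource_domain, account_domain):
    """A user's account domain must be the resource domain itself or be
    directly trusted by it; trust does not chain through intermediaries."""
    if resource_domain == account_domain:
        return True
    return (resource_domain, account_domain) in trusts

print(can_authenticate("A", "B"))  # True: A directly trusts B
print(can_authenticate("A", "C"))  # False: no transitive trust via B
print(can_authenticate("B", "A"))  # False: trusts are unidirectional
```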
Domains and trust relationships are the foundation of Windows NT Directory Services. By combining domains and trusts, you are able to strike a balance between access, control, and administration for your particular network requirements. There are four common ways of combining domains and trust relationships into what are known as domain models: Single Domain, Master Domain, Multiple Master Domain, and Complete Trust.
In the single domain model, there is only one domain. Because there are no other domains, there are no trust relationships to administer. This model is best for organizations in which:
In an organization with multiple departments that have no need to share information among them, the best configuration is often multiple single domains.
The master domain contains the user accounts database and provides authentication services to the trusting subdomains. This model is best for organizations in which:
In this model, one domain, the master domain, is trusted by all non-master subdomains, but does not trust any of them. This model offers the benefits of both central administration and multiple domains.
In this model, there is more than one master domain. All of the master domains trust each other, and all are trusted by the non-master subdomains, but none of the master domains trusts any of the subdomains. This model works best where:
This model works well in large organizations. Because all the master domains trust each other, user accounts need only exist in one of them.
In the complete trust model, all domains trust each other. There are no master domains. This model is the simplest to understand, and works where:
As with the multiple master domain model, the complete trust model is scalable as the organization grows. Because each domain has full control over its own user accounts, it can work well for a company without a centralized information services (IS) department.
A discussion of domains and trust relationships is not complete without directory services. In Windows NT, directory services are operating system features that simplify the use and administration of computer networks. Currently, Windows NT Directory Services (NTDS) is used by 80 percent of Windows NT Server customers to improve how corporate networks are managed. Key benefits of NTDS are:
Single network logon to all network resources.
Centralized administration of user accounts and security.
Integration with server applications for comprehensive directory and security models.
Users need to remember one user ID and password, regardless of where they log on from and regardless of what network resources they need to utilize. And the logon process is the same whether users are in their own office, on the road, or at home.
Administrators set up organizational groups and access rights using command buttons and drag-and-drop actions. Windows NT Server's Directory Service also provides a single network logon for server applications such as Microsoft BackOffice.
To appreciate the power of integrated WINS and DNS in Windows NT Server 4.0, it is necessary to first know something about the Dynamic Host Configuration Protocol (DHCP). DHCP relieves the administrative burden associated with assigning and maintaining IP addresses. It offers dynamic configuration of computers for a large number of parameters, including:
DHCP is a safe, reliable, and simple-to-use tool for TCP/IP network configuration. It ensures that address conflicts do not occur, and helps conserve the use of scarce IP addresses through centralized management. DHCP services for Windows NT are implemented under RFCs 1533, 1534, 1541, and 1542.
DHCP, which uses a client/server model, is based on leases for IP addresses. The system administrator controls how IP addresses are assigned by specifying address allocation ranges and lease durations. It is also possible to have static IP addresses, which are addresses with leases that do not expire. During system startup, a DHCP client computer sends a “discover” message that is broadcast to the local network and might be relayed to all DHCP servers on the private network. Each DHCP server that receives the discover message responds with an offer message containing an IP address and valid configuration information for the client that sent the request.
The DHCP client collects the configuration offerings from the servers, chooses one of the configurations, and sends a request message to the DHCP server for the selected configuration. The selected DHCP server sends an acknowledgment message to the client. The DHCP acknowledgment message contains the IP address originally sent with the offer message, a valid lease for that address, and the appropriate TCP/IP configuration parameters for use by the client. After the client receives the acknowledgment, it enters a bound state and can now participate on the TCP/IP network and complete its system startup.
Client computers save the received address for use during subsequent system startups. By default, the client attempts to renew its lease with the DHCP server when 50 percent of the lease time has expired. If the current IP address lease cannot be renewed, a new IP address is assigned.
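The lease lifecycle described above, renew at the 50 percent mark, fall back to a new address on expiry, can be sketched as a small decision function. This is a simplified illustration, not the full DHCP state machine; times are in seconds and the values are made up.

```python
# Sketch of the DHCP lease-renewal timing described above: the client
# attempts renewal once half the lease time has elapsed, and acquires a
# new address if the lease cannot be renewed. Logic is simplified.

def next_action(lease_start, lease_duration, now, renewed_ok):
    """Decide what a DHCP client should do at time `now` (seconds)."""
    elapsed = now - lease_start
    if elapsed < lease_duration * 0.5:
        return "use current address"       # before the 50% mark, nothing to do
    if renewed_ok:
        return "lease renewed"             # server extended the lease
    if elapsed < lease_duration:
        return "retry renewal"             # keep trying until expiry
    return "request new address"           # lease expired; start over

print(next_action(0, 86400, 10000, False))   # early in a one-day lease
print(next_action(0, 86400, 50000, False))   # past 50%: try to renew
print(next_action(0, 86400, 90000, False))   # expired: get a new address
```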
As an example of how maintenance tasks are made easy with DHCP, consider the case in which a computer is moved from one subnet to another. The IP address is released automatically for the DHCP client computer when it is removed from the first subnet, and a new address is automatically assigned to it when it is attached to the new subnet. Neither the user nor the system administrator needs to intervene to update the configuration information.
The Windows Internet Name Service provides a dynamic database for registering and querying name-to-IP address mappings. WINS depends on each network node periodically registering its configuration with the server. This “registration” process means that the WINS database is always current, to within about 45 minutes. As with DHCP, no intervention is required of either the user or the system administrator to update the configuration information.
WINS consists of two components: the WINS server, which handles name queries and registrations, and the client software, which queries for computer name resolution. WINS servers support multiple replication partners to provide increased service availability, better fault tolerance, and load balancing. Each WINS server must be configured with at least one other WINS server as its replication partner. These partners can be configured to be either pull partners or push partners depending on how replications are to be propagated. WINS can also provide name resolution service to certain non-WINS computers through proxies, which are WINS-enabled computers that act as intermediaries between the WINS server and the non-WINS clients.
The Domain Name System is a distributed database that provides a hierarchical naming system for identifying hosts on the Internet. DNS was developed to solve the problems that arose when the number of hosts on the Internet grew dramatically in the early 1980s. Although DNS provides service similar to WINS, there is a major difference: DNS requires static configuration for computer name-to-IP address mapping, while WINS is dynamic and requires far less administration.
In the DNS namespace, each domain (analogous to a directory in a file system) is named and can contain subdomains. The domain name identifies the domain’s position relative to its parent domain in the database. A period (.) separates each part of the name, similar to slashes in file names. For example, tsunami.microsoft.com could be the name of a computer owned by Microsoft. The root of the DNS database is managed by the Internet Network Information Center. The top-level domains were assigned organizationally and by country; the country-code domain names follow the ISO 3166 standard.
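The hierarchical structure of a DNS name can be made concrete by walking a name up toward the root, one label at a time. The sketch below uses the host name from the text; the helper function is an illustration, not part of any DNS library.

```python
# Illustration of how a DNS name encodes its position in the hierarchy:
# each label names a node relative to its parent, with the root at the
# right-hand end of the name.

def ancestry(fqdn):
    """Return the chain of domains from the host up to the top level."""
    labels = fqdn.split(".")
    # Successive suffixes: host -> parent domain -> top-level domain.
    return [".".join(labels[i:]) for i in range(len(labels))]

print(ancestry("tsunami.microsoft.com"))
# ['tsunami.microsoft.com', 'microsoft.com', 'com']
```

Resolution effectively proceeds in the opposite direction: a server that cannot answer locally forwards the query toward servers responsible for a suffix higher in this chain.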
Each DNS server contains information about its administrative zone, which may (or may not) correspond to its domain. The server makes this information available to client routines called resolvers, which query the name server across the network. The servers forward requests to higher-level servers when they cannot be locally resolved.
The domain administrator sets up name servers that contain database files with all the resource records describing all hosts in their zones. This is traditionally done in UNIX with the Berkeley Internet Name Daemon (BIND). BIND configuration files are normally created “by hand,” using vi or a similar text editor.
Windows NT Server includes a DNS server with a familiar graphical interface and several automated functions that simplify setup. It is fully compatible with BIND and can participate in resolving Internet DNS requests. In addition, a special record type called a WINS record is available to instruct the server to query a WINS server for requests it cannot resolve.
A computer on a network usually has both a name and an address. Computers use numeric addresses to identify each other, but people usually find it easier to work with the computer names. Therefore, a mechanism must be available to convert computer names into their corresponding numeric addresses. This mechanism is known as name resolution.
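At its simplest, name resolution is a table lookup, which is exactly what a static HOSTS or LMHOSTS file provides. The sketch below illustrates the concept only; the names and addresses are fabricated, and real resolvers consult several sources in order, as described next.

```python
# Minimal illustration of name resolution: mapping a computer name to a
# numeric address via a static table, as a HOSTS or LMHOSTS file does.
# Names and addresses are fabricated for the example.

hosts = {
    "tsunami": "131.107.2.200",
    "mailsrv": "131.107.2.15",
}

def resolve(name):
    """Look up a name; raise if no mapping exists (resolution failed)."""
    try:
        return hosts[name.lower()]   # computer names are case-insensitive
    except KeyError:
        raise LookupError(f"cannot resolve {name!r}")

print(resolve("TSUNAMI"))  # 131.107.2.200
```

The weakness of this static approach, keeping the table current by hand on every machine, is precisely what WINS and DHCP eliminate.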
A computer running Windows NT can use one or more of the following methods to ensure accurate name resolution in TCP/IP networks:
Windows NT Server is an indispensable tool for TCP/IP networks. Using DHCP, node configuration is easily accomplished, and scarce IP addresses are conserved. Using Windows NT DNS, even UNIX and other hosts (which may be unaware of WINS) can resolve host names in a dynamic environment by indirectly querying WINS.
The security model for Windows NT is designed to meet both national and international security criteria. In the United States, the relevant criteria are the C2-level criteria defined by the U.S. Department of Defense Trusted Computer System Evaluation Criteria document (DOD 5200.28-STD, December 1985), commonly referred to as the Orange Book. The C2 level requires what is known as Discretionary Access Control and is one of seven levels of security specified by the DOD. Some of the most important requirements for C2 are:
In the European Community, the rough equivalent of the Trusted Computer System Evaluation Criteria (TCSEC) document is the Information Technology Security Evaluation Criteria (ITSEC) document. ITSEC is the product of the Common Criteria Editorial Board, an international standards body made up of representatives from various countries including France, Germany, the Netherlands, the United Kingdom, and the United States. Because of the differences in criteria, there is no direct, one-to-one rating between the TCSEC document and the ITSEC document. A TCSEC rating of C2 translates roughly into an ITSEC rating of F-C2,E2. The higher TCSEC rating of B1 translates roughly into an ITSEC rating of F-B1,E3.
According to the ITSEC, “F-C2 is derived from the functionality requirements of the U.S. TCSEC [the NSA evaluation] class C2. It provides a more finely grained discretionary access control than class C1, making users individually accountable for their actions through identification procedures, auditing of security-relevant events, and resource isolation.” In the ITSEC scheme, products are evaluated on a functional level, as well as on an assurance scale of E0 (lowest) to E6 (highest). This means that, while a product may be evaluated as F-C2, it must also carry with it a level of confidence that this product will meet these functional criteria in as many different scenarios as possible. Windows NT 3.51 and Windows NT 4.0 have received an F-C2,E3 rating for both the client side and the server side. For more information on Windows NT security, visit http://www.microsoft.com/security.
Windows NT Security Model
The security model for Windows NT is made up of the following components:
Together, these components are known as the security subsystem. This protected subsystem is an integral subsystem rather than an environmental subsystem because it affects the entire Windows NT operating system.
Under Windows NT, any person who needs access to resources on the network must have a valid user account on a domain that allows or is allowed that access. The Windows NT user account contains the following information:
Like UNIX, Windows NT supports the concept of groups. With groups, you can group together users who have similar jobs and resource needs. Groups make granting rights and resource permissions easier; a single action of giving a right or permission to a group gives that right or permission to all present and future members of that group.
Groups in Windows NT differ from groups in UNIX in Windows NT's support for local and global groups. The Windows NT security model suggests that users be added only to global groups, which span the domain and any trusted domains. Next, global groups are made members of local groups, which are created within a domain or participating server. Finally, access to resources is granted to those local groups. At first this may seem a complex model, but when multiple domains and trust relationships are present, it greatly simplifies account maintenance.
Windows NT provides a set of built-in groups that gives members rights and abilities to perform various tasks, such as backing up the computers or administering the network printers. Examples of these built-in groups are: Administrators, Backup Operators, Print Operators, Power Users, Users, and Guests. These built-in groups cover most of the standard combinations of rights and permissions that you would expect to find. User accounts can belong to more than one group at the same time and will share the combined rights and permissions of all of them.
Like UNIX, Windows NT provides features that enable a system administrator to create a very robust account policy. The password policy can set the following limits on user passwords:
Windows NT has no equivalent to the UNIX /etc/passwd file. With Windows NT, passwords are not stored in a flat file readable by every user; instead, they are stored in hashed form in the registry, in a protected portion of the database that can be accessed only by privileged system routines.
No direct access to passwords, hashed or otherwise, is provided under Windows NT; the system administrator can only reset a user’s password. From a security standpoint, this is a better arrangement than storing hashed passwords in a file readable by all users, as on a traditional UNIX system. Software is now commonplace, on virtually every hardware platform and operating system, that combines a dictionary file with the crypt() function. With such tools, an attacker who obtains a UNIX passwd file can quickly recover the password of any user who chose an ordinary English word.
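The dictionary attack in question is almost trivially simple, which is why readable password files are dangerous. The sketch below illustrates the idea; real attacks against /etc/passwd use the UNIX crypt() function, and hashlib merely stands in for it here. The word list and target password are fabricated.

```python
# Illustration of the dictionary attack described above. Real attacks
# against /etc/passwd use crypt(); hashlib stands in for it here, and
# the word list and target password are fabricated for the example.

import hashlib

def hash_password(word):
    return hashlib.sha256(word.encode()).hexdigest()

# An attacker with a stolen hash simply hashes every dictionary word
# and compares, which is why "normal" English words make weak passwords.
stolen_hash = hash_password("sunshine")
dictionary = ["password", "dragon", "sunshine", "letmein"]

cracked = next((w for w in dictionary if hash_password(w) == stolen_hash), None)
print(cracked)  # 'sunshine' is recovered almost instantly
```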
Windows NT also provides an account lockout feature. When this feature is enabled, a user account becomes locked if there are a number of incorrect attempts to log on to that account within a specified amount of time. Locked accounts cannot log on. A locked account remains locked until an administrator unlocks it, or until a specified amount of time passes, depending on how the account lockout feature has been configured. By default, account lockout is disabled.
As discussed earlier, Windows NT supports multiple file systems. However, only the Windows NT file system (NTFS) has built-in security. For that reason, NTFS is the file system of choice for Windows NT.
Using NTFS, a fine-grained set of access permissions is incorporated into the system design. Beyond the RWX (read, write, execute) permissions of UNIX, Windows NT adds D (delete), P (change permissions), and O (take ownership). These permissions, RWXDPO, are called individual permissions.
Windows NT offers a set of standard permissions for files and directories in NTFS volumes. These standard permissions offer useful combinations of individual permissions. Thus, permissions can be specified either directly as individual permissions or indirectly as standard permissions. Examples of standard permissions are Read (RX), Change (RWXD), No Access (None), and Full Control (All). As with the built-in groups, it is possible for the system administrator to create new types of standard permissions, although most of the combinations that you would expect to find are covered in the existing ones.
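The relationship between standard and individual permissions is just a mapping from a name to a set of RWXDPO letters. The sketch below mirrors the examples given in the text; it is illustrative and not an exhaustive list of NTFS permissions.

```python
# Sketch of how the NTFS standard permissions listed above combine the
# individual RWXDPO permissions. The mapping mirrors the examples in
# the text; it is illustrative, not exhaustive.

STANDARD = {
    "Read":         set("RX"),
    "Change":       set("RWXD"),
    "No Access":    set(),
    "Full Control": set("RWXDPO"),
}

def allows(standard_permission, action):
    """Check whether a standard permission grants one individual permission."""
    return action in STANDARD[standard_permission]

print(allows("Change", "D"))        # True: Change includes delete
print(allows("Read", "W"))          # False: Read is RX only
print(allows("Full Control", "O"))  # True: includes take-ownership
```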
Older UNIX versions support only three sets of file and directory permissions: owner, group, and world. This is the familiar -rwxrwxrwx that shows up in the output from the UNIX ls -al command. With Windows NT (and newer UNIX versions that support ACLs in their file systems), permissions can be granted to either individual users or to groups. In NTFS, multiple sets of permissions can be given to many groups and/or individual users. You are not limited to just three sets as in older, non-DCE UNIX implementations.
Setting Special Directory Permissions or Taking Ownership
Every file and directory on an NTFS volume has an owner. The owner controls how permissions are set on the file or directory, and can grant permissions to others. File ownership provides a way for users to keep private files private. The system administrator can take ownership of any file on the system, but the system administrator cannot then transfer the ownership to others. (Editor’s note: Prior to widespread POSIX compliance, this could be done with certain versions of UNIX). Therefore, if an administrator wrongly takes ownership of someone’s files, that administrator cannot subsequently transfer ownership back to the original owner, and the original owner can easily find out who the new owner is.
System Administration differences between Windows NT and UNIX are slowly disappearing. Windows NT has a fully integrated, Windows-based graphical user interface (GUI). Virtually everything a system administrator does on the machine is GUI-based. Similarly, many UNIX systems now have sophisticated GUI-based tools for system administration.
Windows NT Administrative Tools
UNIX systems still allow command line operations for almost all tasks. This feature gives administrators the ability to write powerful scripts that perform routine administration tasks. Windows NT also has scripting capability that uses a command language that is a superset of the MS-DOS batch commands. POSIX utilities available in the Microsoft Windows NT Resource Kit can also be used to write useful scripts.
Finally, UNIX administrators are accustomed to tweaking configuration files with the vi editor. This is much like editing registry entries in Windows NT with the registry editor, regedt32.
Differences between Windows NT and UNIX are greatest in the type of system administration tasks required. Because Windows NT has a strong client/server orientation, many tasks that the UNIX administrator would be responsible for, such as adding fonts or customizing desktop layout, are actually done by users on their own workstations. While the GUI makes management tasks easy to accomplish, GUI-based tasks are less easily scripted and therefore less likely to be automated.
Windows NT supplies a GUI-based backup tool similar to the X-based tools found on some versions of UNIX. It makes use of the archive attribute of a file, which is set whenever the file is modified. Features of the backup/restore utility include:
Normal—copies all files and resets the archive bit.
Copy—copies all files without resetting archive bit.
Incremental—copies files with archive bit set; resets the archive bit.
Differential—copies files with archive bit set; does not reset archive bit.
Daily Copy—copies only files modified that day; does not reset archive bit.
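The five backup types above differ only in which files they select and whether they clear the archive bit afterward. The sketch below models that logic; the file records and their fields are illustrative, not an actual backup API.

```python
# Sketch of the archive-bit logic behind the five backup types listed
# above. Each file is modeled as a dict with an archive bit and a
# modified-today flag; the data is illustrative.

def run_backup(files, backup_type):
    """Return names of files copied; clear archive bits where the type says so."""
    if backup_type in ("Normal", "Copy"):
        copied = list(files)                          # everything
    elif backup_type in ("Incremental", "Differential"):
        copied = [f for f in files if f["archive"]]   # changed since last backup
    elif backup_type == "Daily Copy":
        copied = [f for f in files if f["modified_today"]]
    if backup_type in ("Normal", "Incremental"):      # only these reset the bit
        for f in copied:
            f["archive"] = False
    return [f["name"] for f in copied]

files = [
    {"name": "a.txt", "archive": True,  "modified_today": True},
    {"name": "b.txt", "archive": False, "modified_today": False},
]
print(run_backup(files, "Differential"))  # ['a.txt'], archive bit untouched
print(run_backup(files, "Incremental"))   # ['a.txt'], archive bit now cleared
print(run_backup(files, "Incremental"))   # []: nothing left to copy
```

The difference between Incremental and Differential is visible in the last two calls: because Differential leaves the archive bit set, repeating it would copy the same files again.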
The Backup tool provides a Backup Status dialog box that shows the active status of the tape operation. The Windows-based Event Viewer can display the backup history recorded in the system event logs.
More sophisticated enterprise-level backup tools are also available from third parties. Some of them are listed below.
Product                            Vendor
Backup Exec                        Seagate Software, Inc.
ARCServe 6 for Windows NT          Cheyenne Software
Legato NetWorker for Windows NT    Legato Systems, Inc.
StorageCenter                      Software Partners/32, Inc.
NovaBack+                          NovaStor Corporation
Disk Administrator is the tool used for:
Disk Administrator could be viewed as the graphical replacement for the UNIX mkfs and partitioning commands, and for the entire family of lvm commands on file systems that use the OSF logical volume manager.
Windows NT records all system events in three logs. This contrasts with UNIX systems in which various subsystems and applications write to their own log files, which can be located in various places in the file system. In addition, these logs can optionally be configured to be “circular,” so that they never grow beyond a certain size but merely overwrite the oldest portion of the log as needed.
System-wide audit policy is established and maintained with User Manager for Domains. Individual object access auditing is controlled with either File Manager, for files and directories, or Print Manager, for printers. The audit log entries are examined and manipulated with Event Viewer.
Windows NT can record a range of event types, from a system-wide event such as a user logging on, to an attempt by a particular user to access a specific file. Both successful and unsuccessful attempts to perform an action can be recorded.
Setting System-Wide Audit Policy
The following types of events can be audited:
After you select a log for display in Event Viewer, you can view, sort, filter, and search for details about events. You can also archive logs in various file formats.
As with UNIX systems, Windows NT users must be licensed. Customers can choose between two licensing models: per server or per seat. With per-server (concurrent) licensing, Windows NT performs a concurrency check and allows only the number of concurrent file/print users configured in License Manager. Per-seat licensing does not implement the License Manager check, though the customer must still purchase a client access license (CAL) for each client. The following uses require a CAL:
The following applications do not require a CAL:
With per-seat licensing, every computer that will access a server requires a Client Access License. Each properly licensed client can access any server of that type on the network. Per-seat mode is recommended when client computers need frequent or persistent connections to server services or information, especially in multiple server configurations.
With per-server licensing, a Client Access License applies to a particular server and allows for one additional concurrent connection to that server. You must have at least as many Client Access Licenses dedicated to that server as the maximum number of clients that will connect to that server at any point in time. Per-server licensing is recommended when clients need occasional access to the server, such as ad hoc database queries.
Since usage patterns for server products tend to vary, customers may mix license types on a per-server basis. If desired, you may convert once from the per-server to the per-seat model. License Manager facilitates this, as well as other related tasks.
This tool is designed for administrators who wish to convert from Novell NetWare to Windows NT. It transfers groups and accounts, and directories and files from NetWare Servers to Windows NT-based servers. It allows you to:
The Windows NT Server CD contains an installation package for Windows 95, TCP/IP for Windows for Workgroups, and other MS-DOS networking utilities. This tool allows you to create diskettes that facilitate installation of the software across the network. This is a very simple tool to use, and installations can be automated by storing option selections in a file.
Equivalent tools exist in some UNIX systems, but they vary considerably between versions.
A major design goal of Windows NT was to eliminate the complex set of tuning parameters that characterized earlier systems. Adaptive algorithms were incorporated into the design so that correct values are determined by the system as it runs. As a result, optimizing Windows NT is not the art of manually adjusting many conflicting parameters; it is a process of determining which hardware resource is experiencing the greatest demand, and then adjusting the configuration to relieve that demand.
Windows NT includes a Windows-based tool for tracking computer performance called Performance Monitor, which differs markedly from sar on UNIX systems but is somewhat similar to UNIX tools such as mon (or top), BEST/1 from BGS, and GlancePlus from Hewlett-Packard. Performance Monitor is based on a series of counters that track such things as the number of processes waiting for disk time, the number of network packets transmitted per second, and the percentage of processor utilization.
Performance Monitor displays information graphically in real time or as text (through reports). Data can be collected from the local system or from a remote system across the network, and can be viewed live or logged for later display as a chart or report. Performance Monitor can also generate alert logs: an entry is posted every time a counter exceeds or falls below a user-specified value, and an alert can trigger a network message or run a program.
Performance Monitor — Chart View
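The alerting behavior just described reduces to comparing each counter sample against a threshold. A minimal sketch of that logic; the counter name and sample values are made up for the example.

```python
# Sketch of Performance Monitor's alert behavior: an alert entry is
# posted whenever a counter sample crosses a user-specified threshold.
# The counter name and sample values are fabricated.

def check_samples(samples, threshold, above=True):
    """Return (sample_index, value) for every sample that triggers an alert."""
    alerts = []
    for i, value in enumerate(samples):
        if (above and value > threshold) or (not above and value < threshold):
            alerts.append((i, value))
    return alerts

cpu_percent = [12, 35, 91, 88, 40, 97]
print(check_samples(cpu_percent, threshold=90))
# [(2, 91), (5, 97)]: these samples would generate alert-log entries
```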
Remote Access Services (RAS) for Windows NT is based on a client/server architecture, in which a remote RAS client connects to a local RAS server. Windows NT Workstation supports a single RAS client; Windows NT server supports up to 255 RAS clients. With RAS, off-site users and remote administrators have transparent access to resources as if they were working on-site. RAS supports dial-in over asynchronous telephone lines (using modems), ISDN, and X.25 networks from workstations running MS-DOS, Windows for Workgroups, Windows 95, LAN Manager, Windows NT, and non-Microsoft operating systems such as UNIX.
RAS also offers support for PPP-compliant Multilink Channel Aggregation. This enables clients dialing in to Windows NT Server to combine all available dial-up lines to achieve higher transfer speeds. For example, users can combine two or more ISDN B channels to achieve speeds of 128 Kbps or greater, or combine two or more standard modem lines. This provides increased overall bandwidth and even allows users to combine ISDN lines with analog modem lines for still greater performance.
Windows NT Server supports AsyncBEUI, Point-to-Point Protocol (PPP) and Serial Line Internet Protocol (SLIP). PPP is recommended. RAS supports TCP/IP, IPX, and NetBEUI over PPP and so delivers the ability to use the dial-in server as a gateway to the Internet and UNIX servers. The gateway functions allow clients running TCP/IP or NetBEUI to access only hosts on the network reachable by the server using a different protocol.
With Server Manager, the administrator can see which users are connected to a server, how long they have been connected, and what resources they have opened. The information can be viewed in different formats. For example, you can see which resources are currently in use, or, if you are interested in just one resource, you can view information specific to it, such as who is connected and for how long.
This tool could be viewed as a graphical replacement for the UNIX who, netstat, and fuser commands.
The System Policy Editor is a graphical tool for editing the Windows NT registry. It allows the administrator to manage registries on local or remote systems. It differs from the regedt32 utility in that registry entries are presented as system policies rather than registry key names. For example, “Do not display last logged on user name” replaces “\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\DontDisplayLastUserName”.
Two types of policies are available: system policies relating to the operating system, and user profiles (default or specific).
User profiles are repositories of information about the user environment. Information about desktop layout, home directory, fonts and colors, network connections, and even applications menus are stored in the profile. In Windows NT, these profiles can reside on a Windows NT Workstation, or they can reside on the server. When the profiles reside on a server, they are known as “roaming profiles,” and the user’s desktop will be identical no matter which Windows NT Workstation the user logs on to.
As discussed below, specifying the location and whether the profiles are optional, mandatory, or roaming is accomplished through the User Manager for Domains tool.
Editing user profiles is somewhat analogous to modifying the /etc/profile and /etc/environment files on UNIX, as well as the individual users’ .profile, .kshrc, .mwmrc, .Xdefaults, .xinitrc (and so forth) files in their home directories.
Through the magic of domains and trust relationships, users have a single logon to the network, which gives them access to their resources from any available workstation. This not only reduces the administrative burden on the system administrator, but it also directly benefits the users in that they have only one logon name and one associated password to remember. In addition, their desktop environments follow them as they move from workstation to workstation.
In Windows NT, user accounts are established and maintained with a Windows-based tool known as User Manager for Domains. Similar tools (for example, SMIT or SAM) exist in UNIX. The difference is that under Windows NT, a user cannot use vi to manually view or edit the /etc/passwd and /etc/group files. (This is usually considered a more secure arrangement.) You can use User Manager for Domains to perform many additional security-related tasks as well:
User Manager for Domains — New User Dialog Box
User Profile policy is established and maintained with another Windows-based tool: System Policy Editor (discussed in previous section).
This tool allows the system administrator to examine how the system is configured. It provides information about operating system version; CPU and BIOS; physical and virtual memory; device drivers loaded; status of services; diskette, CD-ROM, and disk drives; DMA, IRQ, and memory resources by device or by resource; network statistics; and user environment.
Much of this information is available in UNIX systems, but the tools to discover it vary greatly between the different UNIX implementations.
A number of tools are available to help manage servers. In fact, using the Administrative Wizards makes it easy for even nonprofessionals to perform routine maintenance tasks on a Windows NT Server.
Administrative Wizards
In addition, The Microsoft Windows NT Resource Kit provides advanced tools and hard-copy documentation for developers and systems administrators.
With client/server-based computing, many of the required resources (for example, CPU, memory, and disk storage) exist on the client’s desktop. For this reason, accounting information is less of an issue. However, with auditing enabled, administrators can use the filter function of Event Viewer to see many types of accounting information. For example, you can see:
Filtering with Event Viewer
The client/server model, coupled with domains, trust relationships, and single network logon, makes it possible for most system administration to take place anywhere on the corporate network. Control Panel utilities are the exception because these must be run from the local machine. However, Windows NT support for Remote Access Services makes editing the registry of any Windows NT-based system possible from any location with a telephone.
The object-oriented nature of the Windows NT interface guides the administrator to browse the file system by making selections on the icons for the various drives, directories, and subdirectories. The file finder enables you to search for and view files or directories by name, type, date and time of last modification, or size. Local and Remote file systems are accessed without distinction.
All Windows NT-based systems have built-in administrative shares at strategic points in the file system. These (hidden) shares make it possible for administrators (only) to perform maintenance on file systems from other Windows NT-based systems on the network.
Currently, Windows NT does not support the concept of disk quotas per se. It is possible to do something similar with judicious use of disk partitions and shared directories, and there are third-party products, such as Quota Manager from New Technology Partners, that provide traditional disk quota features.
The built-in job scheduling features of Windows NT are comparable to those that come with UNIX, such as cron and the at command. Windows NT also has a built-in at command, and the Microsoft Windows NT Resource Kit has a graphical interface to the utility known as the Command Scheduler. These are very basic tools, as are their UNIX counterparts.
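The one-shot model behind the at command can be sketched in a few lines of Python. This is an illustrative sketch only; the run_at helper and the zero-second delay are invented for the example, and a real at job is queued by a persistent system service rather than an in-process call:

```python
import sched
import time

def run_at(delay_seconds, job):
    """One-shot job scheduling in the spirit of the UNIX/Windows NT `at` command."""
    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enter(delay_seconds, 1, job)  # queue the job to run after the delay
    scheduler.run()                         # block until the queued job has run

log = []
run_at(0, lambda: log.append("nightly backup started"))
```

A real scheduler service persists its queue and survives reboots; the sketch only illustrates the queue-then-fire model shared by at, cron, and the Command Scheduler.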
In most cases in which job scheduling is required, scheduling falls under the control of the application itself. This is consistent with the object-oriented paradigm. For instance, third party, enterprise-wide backup products usually have their own scheduling function.
For more sophisticated tools, the kind needed for data center operations, there are third-party products available for Windows NT (some of them have even been ported from UNIX). Some of these packages are listed in the following table.
Products | Vendor |
Argent Queue Manager | Argent Software, Inc. |
AshWin for Windows NT | Creative Interaction Technologies, Inc. |
CA-Unicenter for Windows NT | Computer Associates, Inc. |
POLYCENTER | Digital Equipment Corporation |
We have already covered how DHCP, DNS, and WINS work together to automate the tasks of IP address administration and name resolution on the network. Additional administrative tools in the Microsoft Windows NT Resource Kit, such as Net Viewer, Domain Monitor, Browser Monitor, and Process Viewer, complement the list of tools already reviewed.
Any discussion of network management must include the Simple Network Management Protocol (SNMP). SNMP is built around agents and managers: an agent responds to queries from managers and generates traps (unsolicited event notifications) to them, while a manager issues queries to agents and receives their traps. Most manageable network devices, such as routers, incorporate SNMP agents.
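The agent/manager relationship can be modeled in a few lines of Python. This is a toy sketch only: real SNMP exchanges ASN.1-encoded PDUs over UDP ports 161 and 162, and the class and variable names here are invented for the example (the OID shown is the standard sysName object):

```python
class SnmpAgent:
    """Toy model of an SNMP agent: answers queries and emits traps."""
    def __init__(self, mib):
        self.mib = mib            # managed values, keyed by OID
        self.managers = []        # managers registered to receive traps

    def get(self, oid):
        """Respond to a manager's query for one managed object."""
        return self.mib[oid]

    def trap(self, event):
        """Push an unsolicited event notification to every registered manager."""
        for manager in self.managers:
            manager.traps.append(event)

class SnmpManager:
    """Toy model of an SNMP manager: queries agents and receives traps."""
    def __init__(self):
        self.traps = []

    def query(self, agent, oid):
        return agent.get(oid)

router = SnmpAgent({"1.3.6.1.2.1.1.5.0": "router-01"})  # sysName.0
console = SnmpManager()
router.managers.append(console)

name = console.query(router, "1.3.6.1.2.1.1.5.0")       # manager-initiated query
router.trap("linkDown: interface 2")                    # agent-initiated trap
```

The asymmetry is the point: queries flow from manager to agent on demand, while traps flow from agent to manager when something noteworthy happens.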
Windows NT includes a configurable SNMP agent, implemented as a service. You can specify one or more communities to which the Windows NT computer using SNMP will send traps. Also, Microsoft BackOffice products such as SQL Server, Internet Information Server, and Systems Management Server can all make extensive use of SNMP traps.
More sophisticated tools for managing the network devices such as hubs, routers, switches, and so on are available from third parties, listed in the following table.
Products | Vendor |
IntraSpection | Asanté Technologies |
OpenView for Windows | Hewlett-Packard Company |
CA-Unicenter for Windows NT | Computer Associates International, Inc. |
Installing and maintaining software is a major cost to corporations with distributed networks. Often, the system administrator must install, upgrade, and configure each computer manually. For a large corporation with locations across a wide geographical area, these installation and support costs can rise sharply. In fact, most of the cost of ownership for a corporate computer system comes not from the initial purchase price of the software, but from software installation, support, and maintenance.
If a corporation already has a distributed network in place, it makes sense to take advantage of its wide-area connectivity for managing software for the entire corporation. But before software can be installed over the network, administrators must know where it is going; and before it can be maintained, they must know where it resides. That means knowing what hardware each computer has, what software is already installed, and how the computers are configured. Administrators also need a logical way to group these resources so they can be recognized more easily by location or configuration.
Microsoft Systems Management Server provides system administrators with a method for centrally managing software and hardware for their corporate networks. It is based on a client/server architecture: the server component runs on a Windows NT Server-based machine, but the clients need not run Windows NT. Systems Management Server is an easy-to-use, integrated system.
Systems Management Server maintains a database containing system information and inventory, carries out distribution and installation jobs, monitors the progress of these jobs, and alerts you to important system events. With Systems Management Server you can distribute and install software on clients and servers across your corporate network, set up network applications, automatically collect and maintain hardware and software inventory, provide direct support to users, and monitor your network.
Windows NT will be familiar to anyone who has used Windows 95. The Windows NT 4.0 and Windows 95 operating systems share a common object-oriented graphical user interface. This interface has evolved from the older Windows NT 3.51 and Windows for Workgroups versions, and can now be considered a state-of-the-art user interface. Virtually all of the tools and facilities of Windows NT use this interface, including the online help engine.
This new object-oriented user interface provides users with a work environment that closely models a traditional office. A good example of the change is word processing. With the old interface, users had to select the icon representing the word processor and, once the program had started, select a file to work on. With the new interface, each document is represented by its own icon. When a user selects one, the word processor runs and automatically loads the file. Users no longer need be concerned about where files are located or which programs to run to access them.
The transition from the UNIX Common Desktop Environment (CDE) to the Windows NT graphical user interface is not particularly difficult. The object-orientation of the interface makes it more intuitive than ever before. For those whose only exposure to a graphical environment has been X, the transition will still be straightforward. They will find that Windows NT is far more standardized than X. The graphical user interface permeates not only the Windows NT operating system, but also the applications that run on it. Routine tasks such as printing and getting help also tend to be much easier for the user. These services are graphics-based and standardized across both the operating system and the applications.
The administrator account on Windows NT is the closest thing to superuser or root on UNIX. It is the most powerful logon to the system but does not have the carte blanche powers of root. Windows NT also provides a guest account that is disabled by default.
Logon names for Windows NT can be up to 20 characters in length. Uppercase and lowercase characters are permitted, but the names are not case-sensitive. For example, the user names marcg, Marcg, MarcG, and MARCG all represent the same user, and the user can log on with any of these case combinations. This is in contrast to UNIX systems, which tend to balk at logon names starting with uppercase letters. It also reduces the failed logon attempts that occur when the CAPS LOCK key is left on.
If the user is added as MarcG, then that is the way the name will appear in listings on the system. Logon names for Windows NT cannot contain unprintable characters such as backspace or tab. There are other illegal characters as well, such as [, ], ?, >, and <.
Passwords have similar limitations. They can be up to 14 characters in length. They cannot contain unprintable characters but can contain the other characters that are illegal for logon names. Unlike logon names, they are case-sensitive.
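These rules are easy to capture in code. The following Python sketch checks only the constraints described above; Windows NT bans a few additional characters in names that are not listed here, and the function names are invented for the example:

```python
ILLEGAL_IN_NAMES = set("[]?<>")   # illegal-character examples from the text above

def valid_logon_name(name):
    """Up to 20 printable characters, excluding the illegal characters."""
    return (0 < len(name) <= 20
            and all(ch.isprintable() for ch in name)
            and not ILLEGAL_IN_NAMES & set(name))

def same_user(name_a, name_b):
    """Logon names are compared without regard to case."""
    return name_a.lower() == name_b.lower()

def valid_password(password):
    """Up to 14 printable characters; characters illegal in names are allowed."""
    return len(password) <= 14 and all(ch.isprintable() for ch in password)
```

So same_user("marcg", "MARCG") is true, while two passwords differing only in case are distinct.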
Windows NT is not a multiuser operating system in the traditional sense of the word. Multiuser is a concept that comes from the one-computer, many-user paradigm, which is also known as host-based computing. Client/Server is a different world. Users work at client computers, and client computers connect to server computers. The relationship between users and clients is one-to-one; the relationship between clients and servers is many-to-many. So, in a way, a server is multiuser; it is just that the users are client computers, not people.
Nonetheless, there is an operating system based on Windows NT that is multiuser in the traditional sense. Citrix Systems Inc. was granted a license by Microsoft to modify the Windows NT source code to add traditional multiuser support. Tektronix has licensed the Citrix product and incorporated it into their Windows Distributed Desktop (WinDD) product.
Most packages written for Windows 3.x and Windows 95 run on Windows NT. This means that there are literally thousands (more than 15,000 at last count) of shrink-wrapped, off-the-shelf packages available today, and they all share a common user interface.
Windows NT currently ships with built-in support for more than 950 printers. Printers are set up and controlled graphically on Windows NT with the Control Panel and Print Manager applications. Printers can be connected locally to a workstation or, more typically, remotely on the network. With the exception of naming convention, network printers and local printers function identically. Once the printers are configured, all applications in Windows NT have access to them, and, for the user, printing involves little more than selecting an icon with the mouse or making a menu selection.
LPR, LPD. LPR is one of the network protocols in the TCP/IP protocol suite. It was originally developed as a standard for transmitting print jobs between computers running Berkeley UNIX. The LPR standard is published as Request for Comments (RFC) 1179. Windows NT complies with this standard, as do most implementations of Berkeley UNIX. Most newer UNIX System V implementations also comply with it, so Windows NT can send print jobs to, and receive print jobs from, System V computers as well.
With LPR protocol, a client application on one computer can send a print job to a print spooler service on another computer. The client application is usually named “LPR” and the service (or daemon) is usually named “LPD.” Windows NT supplies a command line application, the lpr utility, and it supplies the LPR Port print monitor. Both act as clients sending print jobs to an LPD service running on another computer. Windows NT also supplies an LPD service, so it can receive print jobs sent by LPR clients, including computers running UNIX and others running Windows NT.
To use a printer on a Windows NT-based computer from a UNIX workstation, use the Windows NT system name and the printer share name as the LPD server and queue names. The Microsoft Knowledge Base has many tips on setting up LPR/LPD printing.
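The LPR wire format itself is simple. As a sketch of what travels over the connection, the following Python builds the opening "receive a printer job" request and a minimal control file as described in RFC 1179. The queue, host, and user names are invented for the example, and a real client would also send byte counts and the data file itself over the same TCP connection:

```python
def receive_job_request(queue):
    """Opening command of an RFC 1179 job transfer: byte 02, queue name, LF."""
    return b"\x02" + queue.encode("ascii") + b"\n"

def control_file(host, user, data_file_name):
    """Minimal LPR control file: originating host, user, and one file to print."""
    return ("H%s\n" % host              # H -- host on which the job originated
            + "P%s\n" % user            # P -- user who submitted the job
            + "f%s\n" % data_file_name  # f -- print this data file as formatted text
            ).encode("ascii")

request = receive_job_request("ntshare")   # "ntshare" stands in for an NT printer share
cf = control_file("unixhost", "marcg", "dfA001unixhost")
```

An LPR client on UNIX sends the request to the LPD service on the Windows NT machine; the same exchange works in the other direction, which is why the two systems can trade print jobs freely.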
There are no UNIX man pages in Windows NT. Rather, there is a hypertext help system that follows a single consistent model and is used by all programs throughout the system. It provides both general and context-sensitive help as well as keyword and topic search capabilities. You can also set bookmarks, annotate the text, cut and paste, and print selected topics offline. Each application that you install on a Windows NT system uses the same interface; the hypertext database, however, is specific to each application. This contrasts with UNIX, where all man pages reside in a single database.
Microsoft also has a technical support line, forums on most of the major online services, and ftp, gopher, and Web sites on the Internet. In addition there are CD-ROM subscription services available, such as the Microsoft Developer Network (MSDN) and the Microsoft TechNet CD-ROM.
Windows NT is a platform-independent, scalable operating system. It is designed to take full advantage of the advanced computing capabilities of either the x86 or RISC-based processors on which it runs. In addition, it supports both uniprocessor and symmetric multiprocessor (SMP) systems. There are actually two Windows NT-based operating systems that share a common source-code base: Windows NT Server and Windows NT Workstation. This is similar to UNIX, which runs on both servers and workstations. Typically, the workstation is a scaled-down version of the server in terms of both hardware and software, especially software for system administration. The same holds true for Windows NT.
Hardware requirements for Windows NT can be broken down into three main areas: processor, memory, and disk space. As a general rule, you will need more of each for Windows NT Server than for Windows NT Workstation. The minimum requirements are:
Windows NT is a portable operating system in the true sense of the word. It runs on many different hardware platforms and supports a multitude of peripheral devices. Windows NT gives you choice.
In the UNIX world, there might be a single UNIX-based server with a dozen applications. The Windows NT counterpart might be two or three lower-cost, Windows NT-based servers with applications, or pieces of applications, split among them. This model of distributed computing improves load sharing, which boosts efficiency. Furthermore, clustering and load-sharing features in Windows NT will revolutionize the whole IT industry. Corporations will be able to deploy enormously powerful, distributed solutions on low-cost, industry-standards-based servers, while increasing the reliability and availability of their systems. Cost-versus-performance comparisons become difficult because the two approaches to computing are so different. Much depends on each environment, such as number of workstations, number and type of applications, and whether or not there is an existing installed base of one or the other type of system. General data is available from industry research firms, such as the Gartner Group, The Burton Group, the META Group, International Data Corporation (IDC), and Transaction Processing Performance Council (TPC).
Modern UNIX systems install with relatively little difficulty. The reason is simple: a single vendor usually supplies both the hardware and the operating system. One might think that, in comparison to UNIX, installing and configuring Windows NT would be much more difficult, because the hardware platform can be any of many Intel or RISC systems, all supplied by different manufacturers.
Note For a comprehensive listing, check Microsoft’s Windows NT Hardware Compatibility List at http://www.microsoft.com/isapi/hwtest/hcl.idc.
Installing Windows NT Server is straightforward despite the greater hardware variety. The software can be installed by booting from the setup diskettes, or from any system booted with MS-DOS, Windows, Windows for Workgroups, Windows 95, or an older version of Windows NT. (A CD-ROM drive is required, either locally or accessible through the network.) Support for most types of hardware is included on the distribution CD-ROM.
If the system is running MS-DOS, Windows, Windows for Workgroups, or Windows 95, the installation process will detect this and install a boot menu that allows you to select which operating system to run at boot time. If you are running an older version of Windows NT, you can either upgrade the old installation or install both versions and choose between them from the boot menu. The boot menu is configured with a default option, which is executed after a user-specified timeout.
Setup is a GUI-based program that uses standard dialog boxes to ask you for configuration information at various points throughout the process. For example, you are asked which network protocols to install and which file system to use on a specific disk partition. Once Setup has finished, all configuration information is centrally stored in the Windows NT Registry database, and installation of the operating system is complete. The entire process, from start to finish, usually takes less than an hour, and even less for an upgrade of a previous version of the operating system.
Adding peripherals to the system is usually just a matter of running the appropriate Control Panel application and answering a series of configuration questions in dialog boxes; if required, you are prompted for the source of the distribution files (CD-ROM or network). This is the common theme for system administrators: virtually everything you do involves the same process of running a program and answering questions in dialog boxes.
It is possible for an administrator to make different versions of UNIX appear very similar to the user, but the same cannot be said for system administration. While the command-line tools bear some similarity between UNIX versions, the GUI-based tools are strikingly different. Effective systems administration depends largely on knowing exactly what needs to be done, which matters more than remembering command-line options. Windows NT goes one step further than UNIX by giving the administrator identical GUI tools on all hardware platforms.
Application software installation and configuration for Windows NT follows the same pattern outlined in the preceding section. You run setup and answer questions in dialog boxes. To change configurations, you typically select an options menu entry and answer more questions in dialog boxes. If you have ever installed a software package under Windows, you know the process. It is the same one that Windows NT uses.
System error messages in UNIX can be difficult to understand. Standard error messages are prone to scrolling off the screen, which means that the error conditions must be recreated. When finally trapped, these errors must be cross-referenced with documentation, and log files must also be checked for error messages. With X, if you set an invalid configuration, it simply ignores the error and moves on without saying a word to you about the problem.
Windows NT takes a different approach. First, configuration changes are made through carefully controlled dialog boxes. This control cuts the chances of error way down. Second, when an error does occur, Windows NT informs you with a descriptive error message in a dialog box. Once you acknowledge the message, Windows NT attempts to recover gracefully from the error condition. If it is an application error, the application is terminated, and its resources are returned to the system.
Over the past few years, several studies have reported the causes and costs of system downtime. The survey base and study time frames affect the data, but the studies consistently highlight the key areas that need attention when planning a highly available system. The table below lists causes of failure in systems operated by Fortune 1000 companies in 1993 and 1994; it includes a wide variety of system types. Total costs were estimated at a staggering $4 billion (US), consisting of lost sales plus lost worker productivity. Another study, by Contingency Planning Research, estimates the financial impact of a single system failure at figures ranging from $28,000 per hour in the package shipping business to $6.45 million per hour in brokerage operations.
Cause | Frequency |
Storage | 25% |
CPU | 25% |
Software | 25% |
Communications | 20% |
Operator or User | 5% |
Windows NT Server is a modern operating system that builds on the crash recovery techniques of earlier operating systems. In addition, it is intrinsically a Network Operating System (NOS) that assumes the existence of other Windows NT-based servers in the same domain and employs replication techniques over the network to protect data and increase the availability of basic services. Windows NT Server includes the following capabilities designed specifically to enhance system availability:
Additional capabilities, not specifically designed for high availability, also contribute:
SQL Server implements transaction-based (or time-based) replication. If the transaction-based replication service is used, an exact mirror of the database is maintained on another server. Should the primary server fail, user applications can manually reconnect to the alternate server. Software products such as Tuxedo can automate the switching of a client from one database server to another. It is interesting to note that SQL Server will replicate transactions to other DBMS engines, such as Oracle. Replicating databases works well for smaller databases and lower transaction rates; performance tends to be limited by the interconnection and by the serialization of the local and remote database updates. When a failed node is restored to service, performance may drop for a period while the databases are resynchronized.
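The replication model (apply each committed change locally, serialize it to the mirror, and reconnect clients to the mirror on failure) can be illustrated with a toy Python sketch. The class and key names are invented for the example, and a real DBMS replicates logged transactions rather than simple key/value writes:

```python
class ReplicatedStore:
    """Toy model of transaction-based replication between two servers."""
    def __init__(self, primary, mirror):
        self.primary = primary
        self.mirror = mirror
        self.active = primary          # the server clients are connected to

    def commit(self, key, value):
        self.primary[key] = value      # apply the change locally...
        self.mirror[key] = value       # ...then serialize it to the mirror

    def failover(self):
        self.active = self.mirror      # reconnect clients to the replica

store = ReplicatedStore(primary={}, mirror={})
store.commit("acct-1042", 250.00)
store.failover()                       # the primary is assumed to have failed
balance = store.active["acct-1042"]    # the mirror already holds the committed data
```

The serialization step in commit is also where the performance cost of replication lives: every update must complete on both servers before the transaction is done.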
The principal benefits of the replication approach are:
Clustering is a grouping of two or more systems for the purpose of improved application availability and/or improved performance. A client interacts with a cluster as though it were a single server. Microsoft Cluster Server clustering is intended to enhance data and application availability on Windows NT Server by allowing two servers to work together as a single logical system.
Many enterprise customers have used clustering technology in the past to provide greater availability and scalability for their high-end, mission-critical applications. However, these clustering solutions were complex, difficult to configure, and built using expensive proprietary hardware. Microsoft, in conjunction with the computer hardware and software industry, will bring the benefits of clustering technology to the mainstream of client/server computing.
The Microsoft Cluster Server clustering project was first announced in October 1995. Microsoft Cluster Server is based on open specifications, industry-standard hardware, and the ease of use customers have come to expect from Microsoft products. The main goal of clustering is to ensure the availability and scalability of network servers. An additional advantage of Microsoft Cluster Server is manageability. The Microsoft Cluster Server features that address these enterprise issues are described below.
When a system in the cluster fails, the cluster software will respond by dispersing the work from the failed system to the remaining systems in the cluster.
When the overall load exceeds the capabilities of the systems in the cluster, additional systems may be added to the cluster. Formerly, customers that desired future system expansion capability needed to make up-front commitments to expensive, high-end servers that provided space for additional CPUs, drives, and memory. With clustering, customers can add systems as needed to meet overall processing power requirements.
Microsoft Cluster Server is designed to ease the burden of server management. With Cluster Server, administrators will have the ability to remotely manage a cluster as a single system, control the priority of recovery in the event of failure, and easily move workload between servers to take a server offline for maintenance without disconnecting users.
Clusters can be based on "shared everything" (like the Oracle Parallel Server model), or "shared nothing" (like the SQL Server model), and can combine both models. The advantages of clusters are many:
Clusters also protect against site and geographic disasters. Clustered servers may span multiple buildings and provide protection against catastrophic loss of service from disasters such as fire and flood. Using high-bandwidth ATM connections, nodes may be located in different cities and provide continuous service in the event of floods, earthquakes, hurricanes, and so on. (Editor’s note: This assumes that clients still function.)
Key to the implementation of generalized clustered services is the idea of location transparency. It is hard to hide a server’s failure when the client is requesting services from a specific server. Today, the file manager, print manager, Microsoft Exchange, network shares, and many other aspects of Windows 95 and Windows NT use Universal Naming Convention (UNC) addresses (for example, \\servername\filename). Microsoft has announced that Windows NT 5.0 will provide location-transparent directory services for "objects" (documents, printers, servers, and so on) and will pave the way for the generalized cluster. Until then, only dual-server failover configurations will be supported.
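The coupling is visible in the address format itself: every UNC path names a specific server, so a client holding such a path is bound to that machine. A small Python sketch makes the point (the server, share, and file names are invented for the example):

```python
def parse_unc(path):
    """Split a UNC address of the form \\\\server\\share\\path into its parts."""
    if not path.startswith("\\\\"):
        raise ValueError("not a UNC path")
    server, _, rest = path[2:].partition("\\")
    share, _, subpath = rest.partition("\\")
    return server, share, subpath

server, share, subpath = parse_unc(r"\\acctserver\reports\q3\summary.doc")
# The client is now tied to "acctserver"; if that machine fails, the path is
# useless until something re-resolves the name to a surviving node.
```

A location-transparent directory service removes that binding by letting clients name the object rather than the server that happens to host it.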
Many companies, including Microsoft, offer software that enables you to create as much of a UNIX-like environment on Windows NT as you need. These solutions range from the POSIX subsystem to a full-blown UNIX development environment and just about anything in between. There are X servers, Internet access utilities, various flavors of shells, virtually every common tool known to UNIX, even vi. This section deals mainly with software that provides user functionality. Another section, “Cross-Platform Application Development,” covers the development tools and environments. There will, however, be some overlap between the two sections.
Many TCP/IP utilities for network access are included with Windows NT; additional tools are available in the Microsoft Windows NT Resource Kit. Commercial packages are also available from third-party sources.
In addition to Remote Access Service, Windows NT includes ping, ftp, telnet, tracert (the Windows NT version of traceroute), rsh, rexec, and Internet Explorer. As you would expect, ping, tracert, rsh, and ftp are non-graphical, command-line utilities. Telnet can be launched either from the command line or from an icon; it is a graphical version complete with VT52/VT100/ANSI terminal emulation. Internet Explorer is a World Wide Web browser.
Windows NT Server includes the following utilities:
An updated index to many related client and server tools is available at: http://search.microsoft.com/us/products/windows/ntserver
There are a number of commercial packages that provide various combinations of network access utilities such as telnet, ftp, gopher, archie, and WWW. Most are geared for connecting to the Internet, but they can be used for general-purpose network access tasks, such as those between internal corporate machines. The following table gives examples of some of these packages.
Product | Vendor |
Network Access Suite 3.0 | FTP Software |
CyberSuite | Ipswitch |
Newt | NetManage |
Windows NT provides a built-in network file system through its network redirector and server components. Virtually all UNIX implementations provide similar functionality. The three most common UNIX network file systems are the Network File System (NFS), the Andrew File System (AFS), and Remote File Sharing (RFS).
NFS, originally developed by Sun Microsystems, allows directories and files to be shared across a network. It is the de facto UNIX standard for network file systems and has been ported to many non-UNIX operating systems as well. Through NFS, users and software can access files located on remote systems as if they were local files. It works transparently through the UNIX hierarchical file system by “grafting” a branch from the remote file system onto a mount-point or stub of the local file system. Once attached, it appears as just another limb of the tree, and, to either a user or software, it looks like any other local file.
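The "grafting" behavior can be sketched with a toy path resolver in Python: if a mount point is a prefix of the path, the remainder is looked up in the remote tree attached there. The trees, paths, and function name are invented for the example; real NFS does this inside the kernel's file system layer:

```python
def resolve(local_tree, path, mounts):
    """Resolve a path, following NFS-style mount points.

    If a prefix of the path is a mount point, the rest of the path is
    resolved in the remote tree 'grafted' there; otherwise the lookup
    stays in the local tree.
    """
    for mount_point, remote_tree in mounts.items():
        if path.startswith(mount_point + "/"):
            return remote_tree[path[len(mount_point) + 1:]]
    return local_tree[path]

local = {"etc/hosts": "127.0.0.1 localhost"}
server1_export = {"report.txt": "Q3 results"}
mounts = {"mnt/server1": server1_export}   # server1's export mounted at mnt/server1

data = resolve(local, "mnt/server1/report.txt", mounts)
```

To the caller, the remote file is indistinguishable from a local one; only the mount table knows where the branch actually lives.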
Just as the network file system for Windows NT has two components, the redirector and the server, so too does NFS with the NFS client and the NFS server. As you would expect, the functionality is similar. The main difference is that in the Windows-based client, the remote file system is represented locally as a separate drive, instead of being “grafted” to the local file tree. The client makes the request and the server services the request. Any given machine can be an NFS client, an NFS server, or both. Third-party software is available to turn Windows NT into any combination of NFS client/server. Some of those packages are:
Product | Vendor |
NFS Maestro | Hummingbird Communications Ltd. |
ChameleonNFS/X | NetManage, Inc. |
DiskAccess/DiskShare for Windows NT | Intergraph Corp. |
Solstice NFS Client | Sun Microsystems Inc. |
The next most popular network file system under UNIX is AFS. Originally developed at Carnegie Mellon University, it is now commercially distributed by Transarc Corporation, which is owned by IBM. AFS has a somewhat different focus from NFS in that it is geared for very large, widely dispersed UNIX networks.
There is at least one package available for supporting AFS on Windows NT. It is PC-Interface (V.5.0) from Locus Computing Corp.
RFS, developed by AT&T, has been available under UNIX System V for a number of years. It is not widely used, and, hence, no packages are available for Windows NT.
Many packages are available for starting an X session under Windows. There are two basic techniques of doing this. One opens individual windows on the display for each application started and relies on Windows to perform screen management (resizing, hiding, closing windows, starting new windows, and so forth).
The second technique opens a single window under Windows, and then uses the Motif Window Manager on the host to manage the subwindows that are opened for each application. While the first technique is more manageable and demands fewer resources, the second preserves the “look and feel” of the X environment, since window management is performed by a host-resident window manager. Some of the available packages include:
Product | Vendor |
XoftWare | AGE Logic |
ChameleonNFS/X | NetManage, Inc. |
Excursion | Digital Equipment Corporation |
X-One | Grafpoint |
Exceed | Hummingbird Communications |
Multiview/X | JSB Corporation |
PC-Xware | Network Computing Devices |
X/Vision | VisionWare |
Reflection/X | Walker Richer & Quinn |
Exodus | White Pine Software |
UNIX tools for Windows NT are available both commercially and in the public domain. In addition to the Microsoft Windows NT Resource Kit, there are a number of commercial packages on the market that provide varying degrees of functionality. These packages range from a single tool, such as emacs, to a full UNIX environment. The following is a list of some of the products that are available.
There are numerous places where you can find public domain software. The commercial online services, such as CompuServe, America Online, and Prodigy, all have libraries of software available for downloading. The Internet provides a veritable orchard of free software, ripe for the picking. Caveat emptor is the main rule to keep in mind when shopping for public domain software; in many cases, you really do get what you pay for. Here are a few World Wide Web (Web) sites you can use to get started:
Location | Description |
http://www.shareware.com | Cnet Ltd. |
http://www.freeware32.com | Freeware32.com |
http://www.microsoft.com | Microsoft |
http://www.winntmag.com | Windows NT Magazine |
http://www.ora.com | O’Reilly & Associates |
Several products are available that provide access to Windows-based applications from UNIX. Each of these products has taken a slightly different approach to the solution. They range from emulators with somewhat limited functionality to modified versions of Windows NT with 100 percent functionality.
An emulator impersonates the real thing. Effective emulation must meet one or both of the following requirements:
The hardware instructions can be translated in one of several ways:
This library interception/translation can occur in one of two ways:
In general, emulation runs much slower than “native” software compiled for the platform it is running on.
Windows Interface Source Environment, known simply as "WISE," is a licensing program from Microsoft that enables customers to integrate Microsoft Windows-based solutions with UNIX and Macintosh systems. For developers writing applications simultaneously for Windows, UNIX, and Macintosh platforms, WISE Software Development Kits (WISE SDKs) can significantly reduce development and maintenance time. WISE SDKs remove the need for developers to learn multiple application program interfaces (APIs). With WISE SDKs, developers can write to a standard, consistent, and well-documented set of APIs and deliver their solution across the Windows family, UNIX, and Macintosh platforms.
WISE emulators run existing Windows-based applications unmodified on UNIX and Macintosh systems. WISE emulators make thousands of Windows-based applications available to users on UNIX and Macintosh systems. With WISE emulators, users can get inexpensive, shrink-wrapped, Windows-based applications off the shelf and use them on UNIX and Macintosh systems.
WISE SDKs let developers apply their existing knowledge of Windows to UNIX and Macintosh platforms. The WISE SDK remaps the Windows APIs to X and UNIX APIs. The X APIs can either be low-level window creation and manipulation functions (Xlib functions) or high-level toolkit functions (such as Motif). WISE emulators maximize investments in inexpensive, shrink-wrapped, Windows-based applications for UNIX and Macintosh systems. WISE helps MIS managers reduce costs for software development, maintenance, and training.
Microsoft has licensed the Windows source code to Mainsoft Corporation, Bristol Technology Inc., Insignia Solutions Inc., and Locus Computing Corporation. Using the products being developed by Mainsoft and Bristol, developers will be able to write to the Win32 API and OLE on different UNIX platforms. Insignia Solutions provides a product that enables shrink-wrapped, Windows-based applications to run on Macintosh and non-Intel-based UNIX systems. Locus provides a product that enables shrink-wrapped, Windows-based applications to run on Intel-based UNIX systems.
Wabi™ from SunSoft is not a WISE product. It intercepts the output from Windows-based applications and converts it into X. Under Wabi, Windows-based applications look more like X-based applications. Additionally, each application comes up in its own separate X window, resulting in a desktop model very different from that of Windows NT.
Because Wabi is not based on the Windows source code, application compatibility is also an issue. At the time of this writing, there are only about 20 Wabi-certified applications. They do, however, cover a good range of business productivity applications, such as Microsoft® Word, Microsoft® Excel, Microsoft PowerPoint® presentation graphics program, and Microsoft® Access.
Windows Distributed Desktop (WinDD) is not an emulator. It is, however, comparable to Wabi and WISE in the functionality it seeks to provide. The WinDD server provides access to personal computer applications through the X Window System protocol. It is a modified version of Windows NT that adds traditional multiuser support. Produced and marketed by Tektronix, it consists of client and server pieces.
WinDD appears as a complete Windows NT-based desktop inside an X window. Everything that would typically be displayed on the personal computer monitor, such as color schemes, fonts, wallpaper, and application customizations, will be present in the WinDD window on the X-terminal or workstation. This means that WinDD provides the same desktop model to the X-terminals or UNIX workstations as Windows NT does to the machines on which it runs.
The WinDD client is responsible for displaying the output from applications running on the server to UNIX workstations or X-terminals. The WinDD Server software compresses updated screen images and transmits the data to the client. Mouse movements and keyboard input are directed by the local client to the WinDD server. The client also manages the overall display characteristics of the X-terminal or workstation, resulting in reduced loads for both the network and the server. Frequently used images, such as icons, bitmaps, and buttons, are cached in the client’s memory, further reducing network traffic and greatly improving the performance of the applications.
There are a number of tools on the market that enable developers to create applications that run on both UNIX and Windows NT. The difference in architecture between the two systems means that a certain amount of discipline is required on the part of the developer to make this work. For UNIX developers this is nothing new; they have had to contend with UNIX inconsistencies for years. These tools also enable them to migrate existing applications from UNIX to Windows NT and vice versa. Some even enable them to take advantage of the best of both worlds by adding Windows NT functionality to UNIX-based applications, or UNIX functionality to Windows-based applications.
Microsoft recommends the following integrated development environments for developing full 32-bit software applications for Windows NT and Windows 95:
They combine graphical interface design tools with industrial-strength language compilers/interpreters to produce a seamless, fully integrated development environment. They are comparable to similar products for UNIX, such as the SoftBench line from Hewlett-Packard.
Java™ is an object-oriented language that allows developers to create applets. Applets are limited-functionality applications that are downloaded to the client at the time they are required. At that point, they are interpreted and run on the client system. Java provides a very “pure” development environment that has no dependencies on the operating system or hardware; everything takes place in a “virtual machine.”
The astounding pace at which the Internet—and the WWW in particular—has been embraced by the user community has contributed greatly to the interest in Java. Microsoft is fully committed to supporting Java; in addition to Internet Explorer, Microsoft provides a robust Java development environment: the Microsoft Visual J++™ development system (V.1.1).
However, Java does not provide all of the functionality and performance required to build many types of applications. Microsoft has extended the functionality of the Internet Explorer browser with ActiveX™ controls.
ActiveX controls, formerly known as OLE controls or OCX controls, are components (or objects) that you can insert into a Web page or other application to reuse packaged functionality that has already been programmed. Developers using tools such as Visual Basic, Visual C++, or Borland Delphi can create small, fast software components that can be displayed inside Web pages. This makes it much faster to develop Web-based applications; developers can simply take the code they are already writing, perhaps for internal custom applications, and insert it into a Web page.
A key advantage of ActiveX controls over Java applets and Netscape plug-ins is that ActiveX controls can also be used in applications written in many programming languages, including all of the Microsoft programming and database languages.
There are more than 1,000 ActiveX controls available today with functionality ranging from a timer control (which simply notifies its container at a particular time) to full-featured spreadsheets and word processors.
In keeping with Microsoft’s commitment to interoperability, Microsoft delivers Internet Explorer for popular UNIX platforms, such as Solaris. Microsoft also provides development tool kits for building ActiveX controls on the supported UNIX platforms.
There are third-party compilers and interpreters available for just about every known language. The Internet and the commercial online services are also good places to look for these language compilers. The following table lists some of the commercial products that are available.
Language | Product | Vendor |
Ada | Ada95 | Aonix |
 | Visual Ada95 | Active Engineering Technologies |
 | Janus/Ada | R.R. Software, Inc. |
 | Other Ada compilers directory | http://www.adahome.com |
COBOL | Object COBOL v4.0 32-Bit | Micro Focus, Inc. |
 | COBOL Workbench v4.0 | Micro Focus, Inc. |
 | ACUCOBOL-GT Version 3.1.1 | Acucobol, Inc. |
 | RM/COBOL | Liant Software Corporation |
 | Other COBOL compilers directory | http://www.flexus.com |
Forth | WinForth | Laboratory Microsystems, Inc. |
 | Other Forth compilers directory | http://www.forth.org |
LISP | Golden Common Lisp | Gold Hill Co. |
 | LispWorks 4.0 | The Harlequin Group Limited |
Pascal | Turbo Pascal for Windows 1.5 | Borland International, Inc. |
 | NDP Pascal | Microway, Inc. |
Prolog | Visual Prolog for Windows NT | Prolog Development Center |
 | Quintus Prolog 3.3 | Quintus Corporation |
 | IF/Prolog V5.0 for Windows NT | IF Computer |
RPG | Visual RPG | Amalgamated Software of North America, Inc. |
NuTCRACKER enables UNIX software to be ported to, integrated with, and evolved on Windows NT. It is a Win32-based UNIX compatibility environment that essentially creates native Win32-based applications that behave like and interoperate with other Win32-based applications. NuTCRACKER provides UNIX commands and utilities, UNIX libraries, and an X Server on Win32, so developers can compile their C and C++ source code with little or no change and link it against the NuTCRACKER dynamic-link libraries, creating a native Win32-based application. NuTCRACKER's DirectLink technology provides UNIX applications with the ability to link directly with enterprise infrastructure software libraries from Oracle, Sybase, HP, and many others. DirectLink also allows Microsoft technologies such as SQL Server, COM, DCOM, and ActiveX to be called directly from, and linked into, ported UNIX applications. For example, a UNIX- and Sybase-based stock trading application can be ported and tightly integrated with SQL Server, Excel, and Exchange to provide a seamless financial analysis environment for stock traders on Windows NT.
OpenNT can be categorized as an “operating system product” that is also a software development tool: it allows developers to port their own tools and applications and host them alongside Softway tools and Windows programs. The OpenNT enhanced POSIX/UNIX subsystem makes this possible. It provides a UNIX execution environment on a Windows NT-based system that allows a UNIX application to run exactly the same way it ran on a UNIX system. In effect, Windows NT becomes another UNIX server operating system, and customers can deploy their applications on a Windows NT-based system in exactly the same way they deploy them on UNIX servers. That means they can use existing hardware, such as character terminals and X terminals, and the same methods for hosting multiuser applications.
The Wind/U portability tool kit implements the Microsoft Windows API under Motif. Wind/U provides support for Win32 and the Microsoft Foundation Class (MFC) library. Windows applications can be recompiled in the UNIX environment and linked to the Wind/U library. Wind/U intercepts the Windows function calls and redirects them to native Motif and X Window System calls. This permits the application to execute in the native UNIX environment. The application has the look and feel of Motif on UNIX, but also has all the functionality of a Windows application, including DDE, MDI, Palettes, Common Dialogs, and WinSock.
The following platforms are supported:
The Open Software Foundation (OSF) is a consortium of hardware and software suppliers. In 1990, OSF produced the Distributed Computing Environment (DCE) specification. The goal of DCE is to create a single set of standards and protocols through which to link diverse computers into a unified network.
In a prime example of de facto standards becoming de jure standards, OSF selected the components of DCE from among multiple vendors’ technologies. The selected components were then consolidated into an Application Environment Specification (AES). AES is a reference implementation product, in the form of source code and a Validation Test Suite (VTS), which is provided to vendor participants for use in their products.
DCE is often mistakenly thought of as a single technology. The DCE specification is a collection of five different technologies, each of which may in turn be used on a variety of different computer systems. The technologies are RPC, security, threads, directory services, and time services. The focus of Microsoft DCE is on providing strong support for multivendor interoperation through the effective use of DCE-compatible services and other technologies.
Microsoft believes that these services need to be made available without resorting to complex, low-level APIs. If organizations are to succeed in building applications that keep pace with their changing business environments, distributed services will need to be available at a simpler and more accessible level. One way in which Microsoft is addressing this need is detailed in the Distributed Component Object Model (DCOM) of OLE. Briefly, DCOM provides the plumbing needed for applications to use services on a distributed and cross-platform basis.
RPC provides the basis of communication and interoperability between the various DCE services. During the development of Windows NT, it was determined that a strong RPC service was required for many of the internal functions within the operating system. Rather than create a new RPC service from scratch, Microsoft used the AES as the basis for the DCE-compatible RPC services in Windows NT. This integral support for RPC allows Windows NT to integrate with DCE at the RPC level. No additional software need be purchased.
Microsoft has made plans for providing functional compatibility with DCE Security Services in future versions of both Windows 95 and Windows NT Workstation. This functionality will provide the user with a single logon for multiple disparate systems, with a client authentication module compatible with both native and DCE security services. More complete support of DCE Security services within Windows NT Domain services will come later.
Windows NT was designed by Microsoft as a preemptive multitasking operating system with a native threads service based upon the Win32 design. Because of this design, there is no inherent need to add threads as a separate service for Windows NT or Windows 95-based applications, since the operating systems provide this service. Through the Win32 SDK, developers have full access to the Win32 threads service. For DCE developers who want to support either the Win32 or native DCE threads APIs, full support is provided via Digital's DCE products for Windows NT.
Included within the Microsoft Windows NT Server product is support for both native Microsoft and DCE client directory services. By default, the RPC Locator service installed on Microsoft Windows NT Server provides Microsoft RPC address services. As discussed previously in this document, since many of the administrative and other tools for Windows NT are RPC-based, such a service is inherently required. However, for organizations where DCE services are being implemented, Windows NT provides native support for the Name Service Interface Daemon (NSID). The NSID, a DCE service, provides an open method to create and look up network addresses of RPC server hosts within a cell. Ultimately, with access to the Cell Directory Service (CDS), the NSID can thereby provide Windows clients with full access to all available RPC-based servers, whether provided by DCE or Microsoft RPC.
A time service is provided within each operating system currently produced by Microsoft. It provides a way for servers and workstations to synchronize clocks to a domain controller. However, the native time service is not compatible with the DCE Distributed Time Service (DTS). For this reason, a variety of third-party solutions are available to address DTS requirements within an organization.
There are several third-party packages for Windows NT that provide either partial or full compatibility with the native DCE APIs. Microsoft has played and continues to play a key role in ensuring that these solutions exist. One such example comes from Digital Equipment Corporation (Digital).
Digital produces a product for Windows NT with full DCE functionality. Known as Digital Distributed Computing Environment Version 1.1C for Windows NT, it is a full implementation of DCE. The product consists of two separate pieces: the Runtime Services and the Application Development Kit. The Runtime Services include all of the DCE client functions and administration tools. The Application Development Kit provides the Interface Definition Language (IDL) and other tools necessary for developers to create DCE-based applications.
Organizations today face many difficult challenges in providing their people with timely and accurate information. Considerations such as budget, internal and external security, technological advancement, and heterogeneous computing environments contribute to make matters more complex.
In this environment and within these constraints, the IT professional must ask, “How can useful information be delivered to the user?” In the past, there were only a few answers: mainframes, minicomputers, UNIX systems, some limited-functionality file-and-print servers, and desktop systems. Windows NT provides a new option: one that offers the user interface and applications required for desktop systems; the manageability and services of file-and-print servers; and the power and standards compliance of UNIX systems. Network-based computing means that the IT professional can select the best tool for the job, and the end user has no need to know how the information is delivered, or where it is stored.
Another aspect of information delivery is the presence of Windows-based computers on the desktop, with only a few other more-specialized desktop operating systems. Effective management of these desktop systems has become highly important to those responsible for IT in the organization; it is arguably the most visible measure for their success.
Windows NT and UNIX have many similarities as well as differences. One cannot design a new operating system without being strongly influenced by existing systems, and UNIX strongly influenced the design of Windows NT. The hierarchical file system, the RPC mechanism, and the concept of threading are all successes of UNIX that were incorporated into the design of Windows NT.
Windows NT departed from the UNIX model in other respects. For example, Windows NT removed operating system development from the hands of the hardware vendors, which ended the tendency for each vendor to ship its own differentiated version of the operating system.
The success of information technology deployed in businesses today depends on the efforts of professionals to carefully define requirements, evaluate possible solutions, and implement the best one. Windows NT and UNIX can both provide effective solutions, and more choices can only lead to a better fit. What’s best is that no matter what the choice, Windows NT and UNIX can interoperate successfully and efficiently.
Those interested in Windows NT interoperability with UNIX can visit the Microsoft Web site at http://www.microsoft.com/NTServer. The site contains concise descriptions of third-party interoperability products, as well as links to other information sources.
The Microsoft Windows NT Resource Kit, published by Microsoft Press, contains technical information about Windows NT. It is meant to complement the standard documentation set for Windows NT, not replace it. It consists of two supplements:
Microsoft Press publishes books for the entire family of Microsoft products and is your best single source for books about Windows NT. The titles range from introductory step-by-step tutorials to books about the internals of Windows NT, and cover most audiences along the way. The titles below were taken from http://mspress.microsoft.com/.
Other books that specifically address interoperability are:
You can also download a variety of White Papers from http://www.microsoft.com/ntserver/nts/techdetails/overview/WpGlobal.asp. Some of the available White Papers include:
The following White Papers are available from The Burton Group, an information services company specializing in network computing:
access right The permission granted to a process to manipulate a particular object in a particular way (for example, by calling a service). Different object types support different access rights.
application programming interface (API) A set of routines that an application program uses to request and carry out lower-level services performed by the operating system.
asynchronous I/O A method many of the processes in Windows NT use to optimize their performance. When an application initiates an I/O operation, the I/O Manager accepts the request but does not block the application’s execution while the I/O operation is being performed. Instead, the application is allowed to continue doing work. Most I/O devices are very slow in comparison to a computer’s processor, so an application can do a lot of work while waiting for an I/O operation to complete. See also synchronous I/O.
audit policy Defines the type of security events that are logged for a domain or for an individual computer; determines what Windows NT will do when the security log becomes full.
auditing The ability to detect and record security-related events, particularly any attempts to create, access, or delete objects. Windows NT uses Security IDs (SIDs) to record which process performed the action.
authentication A security step performed by the Remote Access Server (RAS), before logon validation, to verify that the user has permission for remote access. See also validation.
batch program An ASCII file (unformatted text file) that contains one or more commands in the command language for Windows NT. A batch program’s filename has a .BAT or .CMD extension. When you type the filename at the command prompt, the commands are processed sequentially.
character-based A mode of operation in which all information is displayed as text characters. This is the mode in which MS-DOS-based and OS/2 version 1.2 applications are displayed under Windows NT. Also called character mode, alphanumeric mode, or text mode.
client A computer that accesses shared network resources provided by another computer (called a server). For the X Window System of UNIX the client/server relationship is reversed. Under the X Window System, this client definition becomes the server definition. See also server.
computer name A unique name of up to 15 uppercase characters that identifies a computer to the network. The name cannot be the same as any other computer or domain name in the network, and it cannot contain spaces.
Data Link Control (DLC) A protocol interface device driver in Windows NT, traditionally used to provide connectivity to IBM mainframes and also used to provide connectivity to local area network printers directly attached to the network.
default profile See system default profile, user default profile.
demand paging Refers to a method by which data is moved in pages from physical memory to a temporary paging file on disk. As the data is needed by a process, it is paged back into physical memory.
device A generic term for a computer subsystem such as a printer, serial port, or disk drive. A device frequently requires its own controlling software called a device driver.
device driver A software component that allows the computer to transmit and receive information to and from a specific device. For example, a printer driver translates computer data into a form understood by a particular printer. Although a device may be installed on your system, Windows NT cannot recognize the device until you have installed and configured the appropriate driver.
directory services The defining element of distributed computing, and, ultimately, a logical name space capable of including all system resources regardless of type. The goal is a blending in which the directory and the network become synonymous.
disk caching A method used by a file system to improve performance. Instead of reading and writing directly to the disk, frequently used files are temporarily stored in a cache in memory, and reads and writes to those files are performed in memory. Reading and writing to memory is much faster than reading and writing to disk.
distributed application An application that has two parts—a front-end to run on the client computer and a back-end to run on the server. In distributed computing, the goal is to divide the computing task into two sections. The front-end requires minimal resources and runs on the client’s workstation. The back-end requires large amounts of data, number crunching, or specialized hardware and runs on the server. Recently, there has been much discussion in the industry about a three-tier model for distributed computing. That model separates the business logic contained in both sides of the two-tier model into a third, distinct layer. The business logic layer sits between the front-end user interface layer and the back-end database layer. It typically resides on a server platform that may or may not be the same as the one the database is on. The three-tier model arose as a solution to the limits faced by software developers trying to express complex business logic with the two-tier model.
DLC See Data Link Control.
DLL See dynamic-link library.
domain For Windows NT Server, a networked set of workstations and servers that shares a Security Accounts Manager (SAM) database and that can be administered as a group. A user with an account in a particular network domain can log on to and access his or her account from any system in the domain. See also SAM database.
domain controller For a Windows NT Server domain, the server that authenticates domain logons and maintains the security policy and the master database for a domain. Both servers and domain controllers are capable of validating a user’s logon; however, password changes must be made by contacting the domain controller. See also server.
domain database See SAM database.
domain name The name by which a Windows NT domain is known to the network.
Domain Name System (DNS) A hierarchical name service for TCP/IP hosts (sometimes referred to as the BIND service in BSD UNIX). The network administrator configures the DNS with a list of hostnames and IP addresses, allowing users of workstations configured to query the DNS to specify the remote systems by hostnames rather than IP addresses. DNS domains should not be confused with Windows NT domains.
dynamic-link library (DLL) A library of application programming interface (API) routines that user-mode applications access through ordinary procedure calls. The code for the API routines is not included in the user’s executable image. Instead, the operating system automatically modifies the executable image to point to DLL procedures at run time.
environment subsystems User-mode protected servers that run and support programs from different operating systems environments. Examples of these subsystems are the Win32 subsystem, the POSIX subsystem, and the OS/2 subsystem. See also integral subsystem.
environment variable A string consisting of environment information, such as a drive, path, or filename, associated with a symbolic name that can be used by Windows NT. You use the System option in Control Panel or the set command to define environment variables.
event Any significant occurrence in the system or in an application that requires users to be notified or an entry to be added to a log.
Event Log service Records events in the system, security, and application logs.
Executive module The Kernel-mode module that provides basic operating system services to the environment subsystems. It includes several components; each manages a particular set of system services. One component, the Security Reference Monitor, works together with the protected subsystems to provide a pervasive security model for the system.
extensibility Indicates the modular design of Windows NT, which provides for the flexibility of adding future modules at several levels within the operating system.
FAT file system A file system based on a file-allocation table maintained by the operating system to keep track of the status of various segments of disk space used for file storage.
fault tolerance The ability of a computer and an operating system to respond gracefully to catastrophic events such as power outage or hardware failure. Usually, fault tolerance implies the ability to either continue the system’s operation without loss of data or to shut the system down and restart it, recovering all processing that was in progress when the fault occurred.
file sharing The ability for Windows NT Workstation or Windows NT Server to share parts (or all) of its local file system(s) with remote computers.
file system In an operating system, the overall structure in which files are named, stored, and organized.
FTP service File transfer protocol service, which offers file transfer services to remote systems supporting this protocol. FTP supports a host of commands, allowing bidirectional transfer of binary and ASCII files between systems.
Fully Qualified Domain Name (FQDN) In TCP/IP, a hostname with its domain name appended to it. For example, a host with hostname tsunami and domain name microsoft.com has an FQDN of tsunami.microsoft.com.
global account For Windows NT Server, a normal user account in a user’s home domain. If there are multiple domains in the network, it is best if each user in the network has only one user account, in only one domain, and each user’s access to other domains is accomplished through the establishment of domain trust relationships.
group In User Manager, an account containing other accounts called members. The permissions and rights granted to a group are also provided to its members, making groups a convenient way to grant common capabilities to collections of user accounts.
Hardware Abstraction Layer (HAL) Virtualizes hardware interfaces, making the hardware dependencies transparent to the rest of the operating system. This allows Windows NT to be portable from one hardware platform to another.
home directory A directory that is accessible to the user and contains files and programs for that user. A home directory can be assigned to an individual user or can be shared by many users.
host table The HOSTS or LMHOSTS file that contains lists of known IP addresses.
hostname A TCP/IP command that returns the local workstation’s hostname, which is used for authentication by TCP/IP utilities. This value is the workstation’s computer name by default, but it can be changed.
integral subsystem A subsystem such as the Security subsystem that affects the entire Windows NT operating system. See also environment subsystems.
interprocess communication (IPC) The exchange of data between one thread or process and another, either within the same computer or across a network. Common IPC mechanisms include pipes, named pipes, semaphores, shared memory, queues, signals, mailboxes, and sockets.
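The following short Python sketch (illustrative only; the function name is ours, not part of any Windows NT or UNIX API) demonstrates one of these mechanisms, the pipe, carrying data between a parent and a forked child process on a UNIX system.

```python
import os

def ipc_roundtrip(text: bytes) -> bytes:
    # The parent writes to the child through one pipe and reads the
    # child's reply (the text uppercased) through a second pipe.
    to_child_r, to_child_w = os.pipe()
    to_parent_r, to_parent_w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child process
        os.close(to_child_w)
        os.close(to_parent_r)
        data = os.read(to_child_r, 4096)
        os.write(to_parent_w, data.upper())
        os._exit(0)
    os.close(to_child_r)              # parent process
    os.close(to_parent_w)
    os.write(to_child_w, text)
    os.close(to_child_w)
    reply = os.read(to_parent_r, 4096)
    os.close(to_parent_r)
    os.waitpid(pid, 0)
    return reply
```

On Windows NT the same exchange would typically use anonymous pipes created with the Win32 API rather than fork, but the data flow is identical.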
kernel The portion of Windows NT that manages the processor.
Kernel module The core of the layered architecture of Windows NT that manages the most basic operations of Windows NT. The Kernel module is responsible for thread dispatching, multiprocessor synchronization, hardware exception handling, and the implementation of low-level, hardware-dependent functions.
LLC Logical link control, in the Data Link layer of the networking model.
local printer A printer that is directly connected to one of the ports on your computer.
local procedure call (LPC) An optimized message-passing facility that allows one thread or process to communicate with another thread or process on the same computer. The Windows NT-protected subsystems use LPC to communicate with each other and with their client processes. LPC is a variation of the remote procedure call (RPC) facility, optimized for local use. Compare with remote procedure call.
locale The national and cultural environment in which a system or program is running. The locale determines the language used for messages and menus, the sorting order of strings, the keyboard layout, and date and time formatting conventions.
logon authentication Refers to the validation of a user either locally or in a domain. At logon time, the user specifies his or her name, password, and the intended logon domain. The workstation then contacts the domain controllers for the domain, which verify the user’s logon credentials.
LPC See local procedure call.
MAC Media access control, in the Data Link layer of the networking model.
mandatory user profile For Windows NT Server, a user profile created by an administrator and assigned to one or more users. A mandatory user profile cannot be changed by the user and remains the same from one logon session to the next. See also personal user profile, user profile.
MS-DOS-based application An application that is designed to run with MS-DOS and which therefore may not be able to take full advantage of all of the features of Windows NT.
named pipe An interprocess communication mechanism that allows one process to send data to another local or remote process. Windows NT-named pipes are not the same as UNIX-named pipes.
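The UNIX counterpart, the FIFO, can be illustrated with a short Python sketch (the function name is ours, for illustration only): a writer thread opens a named pipe created with mkfifo while the main thread reads from it. Windows NT named pipes are created with a different API (CreateNamedPipe) and have different semantics, which is why the two are not interchangeable.

```python
import os
import tempfile
import threading

def fifo_roundtrip(message: bytes) -> bytes:
    # Create a named pipe (FIFO) in a private temporary directory.
    path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
    os.mkfifo(path)

    def writer():
        # Opening a FIFO for writing blocks until a reader opens it.
        with open(path, "wb") as w:
            w.write(message)

    t = threading.Thread(target=writer)
    t.start()
    # Opening for reading likewise blocks until the writer arrives;
    # read() then returns everything up to end-of-file.
    with open(path, "rb") as r:
        data = r.read()
    t.join()
    os.unlink(path)
    return data
```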
NBF transport protocol NetBEUI Frame protocol. A descendant of NetBEUI, NBF is a Transport layer protocol, not to be confused with the NetBIOS programming interface.
NDIS See network driver interface specification.
NetBEUI transport NetBIOS (Network Basic Input/Output System) Extended User Interface. The primary local area network transport protocol in Windows NT.
NetBIOS interface A programming interface that allows I/O requests to be sent to and received from a remote computer. It hides the underlying networking hardware from applications.
network device driver Software that coordinates communication between the network adapter card and the computer’s hardware and other software, controlling the physical function of the network adapter cards.
network driver interface specification (NDIS) An interface in Windows NT for network card drivers that provides transport independence, because all transport drivers call the NDIS interface to access network cards.
NTFS (Windows NT file system) An advanced file system designed for use specifically with the Windows NT operating system. NTFS supports file system recovery and extremely large storage media, in addition to other advantages. It also supports object-oriented applications by treating all files as objects with user-defined and system-defined attributes.
object type Includes a system-defined data type, a list of operations that can be performed upon it (such as wait, create, or cancel), and a set of object attributes. Object Manager is the part of the Windows NT Executive that provides uniform rules for retention, naming, and security of objects.
OLE A way to transfer and share information between applications.
packet A unit of information transmitted as a whole from one device to another on a network.
page A fixed-size block in memory.
partition A portion of a physical disk that functions as though it were a physically separate unit.
permission A rule associated with an object (usually a directory, file, or printer) in order to regulate which users can have access to the object and in what manner. See also right.
personal user profile For Windows NT Server, a user profile created by an administrator and assigned to one user. A personal user profile retains changes the user makes to the per-user settings of Windows NT and reimplements the newest settings each time that the user logs on at any Windows NT Workstation. See also mandatory user profile, user profile.
port A connection or socket used to connect a device to a computer, such as a printer, monitor, or modem. Information is sent from the computer to the device through a cable.
portability The ability of an operating system to run on different hardware platforms. Windows NT runs on both CISC and RISC processors: CISC includes computers running Intel 80386 or higher processors; RISC includes computers with MIPS R4000 or Digital Alpha AXP processors.
print device Refers to the actual hardware device that produces printed output.
printer In Windows NT, refers to the software interface between the application and the print device.
print processor A dynamic link library that interprets data types. It receives information from the spooler and sends the interpreted information to the graphics engine.
protocol A set of rules and conventions by which two computers pass messages across a network. Networking software usually implements multiple levels of protocols layered one on top of another.
provider The component that allows a computer running Windows NT to communicate with the network. Windows NT includes a provider for the Windows NT-based network; other providers are supplied by the alternate networks’ vendors.
redirector Networking software that accepts I/O requests for remote files, named pipes, or mailslots and then sends (redirects) them to a network service on another computer. Redirectors are implemented as file system drivers in Windows NT.
remote administration Administration of one computer by an administrator located at another computer and connected to the first computer across the network.
remote procedure call (RPC) A message-passing facility that allows a distributed application to call services available on various computers in a network. Used during remote administration of computers. RPC provides a procedural view, rather than a transport-centered view, of networked operations. Compare with local procedure call.
resource Any part of a computer system or a network, such as a disk drive or memory, that can be allotted to a program or a process while it is running.
right Authorizes a user to perform certain actions on the system. Rights apply to the system as a whole and are different from permissions, which apply to specific objects. (Sometimes called a privilege.)
RISC-based computer A computer based on a RISC (reduced instruction set computing) microprocessor, such as a Digital Alpha AXP, MIPS R4000, or IBM/Motorola PowerPC. Compare with x86-based computer.
router In TCP/IP, a gateway: a computer with two or more network adapters that is running some type of IP routing software, with each adapter connected to a different physical network.
RPC See remote procedure call.
SAM See Security Accounts Manager.
SAM database The database of security information that includes user account names and passwords and the settings of the security policies.
scalability The ability of a system to grow to meet increasing demand. Scalability depends on the overall architecture of the entire application server. The three critical components of a scalable system are the operating system, the application software, and the hardware; no one element by itself is sufficient to guarantee scalability. High-performance server hardware is designed to scale to multiple processors, providing specific functionality to ease disk and memory bottlenecks. Applications and operating systems, in turn, must be able to take advantage of multiple CPUs. All three components are equally important.
Schedule service Supports and is required for use of the at command, which can schedule commands and programs to run on a computer at a specified date and time.
Security Accounts Manager (SAM) A Windows NT-protected subsystem that maintains the SAM database and provides an API for accessing the database.
security ID (SID) A unique name that identifies a logged-on user to the security system of Windows NT. A security ID can identify either an individual user or a group of users.
server A LAN-based computer running administrative software that controls access to all or part of the network and its resources. A computer acting as a server makes resources available to computers acting as workstations on the network. For the X Window System of UNIX the client/server relationship is reversed. Under the X Window System, this server definition becomes the client definition. See also client.
Server service A service in Windows NT that supplies an API for managing the Windows NT-based network software. Provides RPC support and file, print, and named pipe sharing.
service A process that performs a specific system function and often provides an API for other processes to call. Services in Windows NT are RPC-enabled, meaning that their API routines can be called from remote computers.
session A connection that two applications on different computers establish, use, and end. The Session layer performs name recognition and the functions needed to allow two applications to communicate over the network.
socket Provides an end point to a connection; two sockets form a complete path. A socket works as a bidirectional pipe for incoming and outgoing data between networked computers. The Windows Sockets API is a networking API tailored for use by Windows-based applications.
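Because the Windows Sockets API is modeled on Berkeley Sockets, the concept translates directly to a short Python sketch (the function name is ours, for illustration only): a connected pair of sockets forms a complete, bidirectional path.

```python
import socket

def socket_echo(payload: bytes) -> bytes:
    # socketpair() returns two already-connected sockets; together
    # they form a complete path, and data can flow in both directions.
    left, right = socket.socketpair()
    left.sendall(payload)                 # outgoing direction
    data = right.recv(len(payload))
    right.sendall(data)                   # echo back the other way
    reply = left.recv(len(payload))
    left.close()
    right.close()
    return reply
```

A networked version would substitute connect() and accept() over TCP/IP, but the read/write model is the same on both UNIX and Windows NT.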
standards Windows NT provides support for many standards, some of which are: AppleTalk, Apple File Protocol, C2, Connection-oriented Transport Protocol (Class 4), Connectionless Network Protocol (CLNP), Domain Name Service (DNS), Dynamic Host Configuration Protocol (DHCP), Ethernet, Fiber Distributed Data Interface (FDDI), FIPS 151-2, Frame Relay, IEEE 802.x, IEEE 1003.1, IPX/SPX, Integrated Services Digital Network (ISDN), ISO 8073, ISO 8473, ISO 8208, ISO 8314, ISO 8802, ISO 9660, ISO 9945-1, ISO 10646, ITU FAX Standards, ITU Modem Standards, NetWare Core Protocol (NCP), OpenGL, OSI, POSIX, Point-to-Point Protocol (PPP), Personal Computer Memory Card International Association (PCMCIA), Serial Line Interface Protocol (SLIP), Simple Network Management Protocol (SNMP), Token Ring, TCP/IP, Unicode, and X.25.
synchronous I/O The simplest way to perform I/O, by synchronizing the execution of applications with completion of the I/O operations that they request. When an application performs an I/O operation, the application’s processing is blocked. When the I/O operation is complete, the application is allowed to continue processing. See also asynchronous I/O.
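The blocking behavior described above can be observed directly with a short Python sketch (the function name is ours, for illustration only): a read on an empty pipe suspends the caller until data arrives.

```python
import os
import threading
import time

def blocking_read_demo(delay: float = 0.2) -> float:
    # A synchronous read on an empty pipe blocks the caller; a helper
    # thread supplies one byte after `delay` seconds, releasing it.
    r, w = os.pipe()

    def writer():
        time.sleep(delay)
        os.write(w, b"x")
        os.close(w)

    t = threading.Thread(target=writer)
    t.start()
    start = time.monotonic()
    os.read(r, 1)          # execution is blocked here until data arrives
    elapsed = time.monotonic() - start
    t.join()
    os.close(r)
    return elapsed         # roughly equal to the writer's delay
```

Under asynchronous I/O the application would instead initiate the read, continue processing, and be notified later when the operation completed.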
system default profile For Windows NT Server, the user profile that is loaded when Windows NT is running and no user is logged on. When the Welcome dialog box is visible, the system default profile is loaded. See also user default profile, user profile.
TDI See Transport Driver Interface.
Telnet service The service that provides basic terminal emulation to remote systems supporting the Telnet protocol over TCP/IP.
text file A file containing only letters, numbers, and symbols. A text file contains no formatting information, except possibly linefeeds and carriage returns. Text files are also known as flat files and ASCII files.
thread An executable entity that belongs to a single process, comprising a program counter, a user-mode stack, a kernel-mode stack, and a set of register values. All threads in a process have equal access to the processor’s address space, object handles, and other resources. In Windows NT, threads are implemented as objects.
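The shared address space is the key property distinguishing threads from processes: all threads of a process can touch the same data, so access must be synchronized. A minimal Python sketch (the function name is ours, for illustration only):

```python
import threading

def shared_counter(n_threads: int, increments: int) -> int:
    # All threads share the process's address space, so they can all
    # update `total`; the lock serializes the read-modify-write step.
    total = 0
    lock = threading.Lock()

    def work():
        nonlocal total
        for _ in range(increments):
            with lock:
                total += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

On Windows NT the equivalent primitives are CreateThread and a critical section or mutex object; on UNIX, POSIX threads and a pthread mutex.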
Transport Driver Interface (TDI) In the networking model, a common interface for network components that communicate at the Session layer.
transport protocol Defines how data should be presented to the next receiving layer in the networking model and packages the data accordingly. It passes data to the network adapter card driver through the NDIS Interface, and to the redirector through the Transport Driver Interface.
trust relationship Trust relationships are links between domains that enable pass-through authentication, in which a user has only one user account in one domain, yet can access the entire network. A trusting domain honors the logon authentications of a trusted domain.
Unicode A fixed-width, 16-bit character encoding standard capable of representing all of the world’s scripts.
user account Consists of all the information that defines a user to Windows NT. This includes the username and password required for the user to log on, the groups in which the user account has membership, and the rights and permissions the user has for using the system and accessing its resources. See also group.
user default profile For Windows NT Server, the user profile that is loaded by a server when a user’s assigned profile cannot be accessed for any reason, when a user without an assigned profile logs on to the computer for the first time, or when a user logs on to the Guest account. See also system default profile, user profile.
user mode A nonprivileged processor mode in which application code runs.
user profile Configuration information retained on a user-by-user basis. The information includes all the per-user settings of Windows NT, such as the desktop arrangement, personal program groups and the program items in those groups, screen colors, screen savers, network connections, printer connections, mouse settings, window size and position, and more. When a user logs on, the user’s profile is loaded, and the user’s environment in Windows NT is configured according to that profile.
user right See right.
username A unique name identifying a user account to Windows NT. An account’s username cannot be identical to any other group name or username of its own domain or workstation. See also user account.
validation Authorization check of a user’s logon information. When a user logs on to an account on a Windows NT Workstation-based computer, the authentication is performed by that workstation. When a user logs on to an account on a Windows NT Server domain, that authentication may be performed by any server of that domain. See also trust relationship.
virtual DOS machine (VDM) A Windows NT protected subsystem that supplies a complete environment for MS-DOS and a console in which to run applications for MS-DOS or 16-bit Windows. A VDM is a Win32 application that establishes a complete virtual x86 (that is, 80386 or higher) computer running MS-DOS. Any number of VDMs can run simultaneously.
virtual memory Space on a hard disk that Windows NT uses as if it were actually memory. Windows NT does this through the use of paging files. The benefit of using virtual memory is that you can run more applications at one time than your system’s physical memory would otherwise allow. The drawbacks are the disk space required for the virtual-memory paging file and the decreased execution speed when swapping is required.
volume A partition or collection of partitions that have been formatted for use by a file system.
Win32 API A 32-bit application programming interface for Windows NT. It updates earlier versions of the Windows API with sophisticated operating system capabilities, security, and API routines for displaying text-based applications in a window.
Windows on Win32 (WOW) A Windows NT-protected subsystem that runs within a VDM process, providing an environment capable of running any number of applications for 16-bit Windows under Windows NT.
Windows Sockets An IPC mechanism based on the WinSock specification and compatible with the Berkeley Sockets IPC under UNIX. The WinSock specification allows hardware and software vendors to design systems and applications that can access virtually any type of underlying network, including TCP/IP, IPX/SPX, OSI, ATM networks, wireless networks, and telephony networks.
workstation In general, a powerful computer having considerable calculating and graphics capability. For Windows NT, computers running the Windows NT Workstation operating system are called workstations, as distinguished from computers running Windows NT Server, which are called servers. See also server, domain controller.
Workstation service A service for Windows NT that supplies user-mode API routines to manage the Windows NT redirector. Provides network connections and communications.
WOW The subsystem for running applications for 16-bit Windows under Windows NT; sometimes also called Win16 on Win32.
x86-based computer A computer using a microprocessor equivalent to an Intel 80386 or higher chip. Compare with RISC-based computer.
For the latest information on Windows NT Server, check out our World Wide Web site at http://www.microsoft.com/ntserver, or the Windows NT Server Forum on the Microsoft Network (GO WORD: MSNTS).
Information in this document, including URL and other Internet web site references, is subject to change without notice. The entire risk of the use or the results of the use of this resource kit remains with the user. This resource kit is not supported and is provided as is without warranty of any kind, either express or implied. The example companies, organizations, products, people and events depicted herein are fictitious. No association with any real company, organization, product, person or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
© 1999-2000 Microsoft Corporation. All rights reserved.
Microsoft, ActiveX, BackOffice, the BackOffice logo, FrontPage, MS-DOS, NetMeeting, NetShow, PowerPoint, Visual Basic, Visual C++, Visual FoxPro, Visual J++, Win32, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the U.S.A. and/or other countries/regions.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.