Ruediger R. Asche
Microsoft Developer Network Technology Group
May 9, 1995
This article is first in a series of technical articles that describe the implementation and application of a C++ class hierarchy that encapsulates the Windows NT™ security application programming interface (API). The series consists of the following articles:
"Windows NT Security in Theory and Practice" (introduction)
"The Guts of Security" (implementation of the security class hierarchy)
"Security Bits and Pieces" (architecture of the sample application suite)
"A Homegrown RPC Mechanism" (description of the remote communication implemented in the sample application suite)
In this article, I will discuss security coding on a rather high level; that is, I will show how security can manifest itself in server code without presenting the actual code. CLIAPP/SRVAPP, a sample application suite that consists of a database client and server, illustrates the concepts introduced in this article series.
If you have a rather hazy notion of what security programming in Win32® is about, or if you are interested in security programming from a conceptual point of view, this article is for you. If you are already familiar with the concepts of security programming and would like to see working code, or you wish to plug the C++ security library I provide into your server application, you should skim the last section of this article to get an idea of what the sample application suite does, and then proceed with the next article in this series, "The Guts of Security."
Security should be fairly straightforward to implement in an operating system, right? I mean, all it should take to assign a certain security level to an arbitrary object is a single function call, such as GrantAccessTo or DenyAccessTo, right?
Unfortunately, the Windows NT™ security application programming interface (API) doesn't appear to be that straightforward. It includes a plethora of functions that relate to security, and even the seemingly simple task of, say, opening up an object to only one user is quite complex.
To utilize the security API appropriately, you need to understand it at several levels.
As long as these services are used only by third-party applications, it is fairly easy to understand how security works. However, Windows NT is a secure operating system (to stick with our previous definition, the operating system uses the services provided by the security API), and furthermore, networks based on Windows NT also rely on security very heavily. The way security is incorporated into the system itself, however, is rather obscure. (I will examine this issue in a future article.)
Before I dive into the subject matter, let me make a few suggestions for further reading that supplements the discussion in this article. First, you should definitely read Robert Reichel's two-part article "Inside Windows NT Security," which appeared in the April 1993 and May 1993 issues of the Windows/DOS Developer's Journal. Robert is one of the key developers of the Windows NT security subsystem, and his discussion of the security components and data structures is probably the most comprehensive information you can find on the subject. Second, for a conceptual discussion of security and how it fits into the architecture of a network based on Windows NT, you should read the Resource Guide in the Windows NT version 3.1 Resource Kit.
Before I go into any details, let me clarify why you might need security. You may not have to bother with security at all, unless the following holds true for you:
You are writing a server application, that is, an application that several users can access, and your application provides data structures that are restricted to only a subset of those users.
Note that this is a fairly broad definition, and deliberately so. Here are a few examples of applications that fit this category.
For stand-alone computers (that is, machines that are not tied into a network), you can write a service that starts up as Windows NT boots and keeps running even as multiple users log on and off the same machine. The service could provide information that is visible only to a few users—for example, if you wish to compile usage patterns or logon data, you might want to restrict access to that data to the machine's administrator(s).
A number of privileges are restricted on the system level. For example, the system Registry is protected so that only users with special privileges are allowed to add device drivers to the system. This is for security reasons—for example, a malicious user could misuse a device driver's ability to monitor user input to spy on other users' work. Security can also help stabilize your system. Consider a poorly written device driver installed by an unauthorized user. Such a driver could crash the machine while another user is working. By restricting the ability to register new device drivers to trusted users, we can shield a Windows NT machine from this kind of misuse.
A large number of server applications that work over a network as well as stand-alone will benefit from some kind of hook into the security system. For example, a database server might serve several users at the same time, some of whom may not be allowed to see some of the data in a given database. Let us assume that everybody in your company can query your employee database. Administrative personnel will need to access all the information on employees, whereas everyone else should be able to see only job titles and office numbers. If you restrict the database fields that contain information on salary and benefits to administrative personnel, you can, in effect, allow everyone in the company to use the same database without compromising security and privacy. We will look into this possibility later in this article.
One of the problems with security is that there is nothing fancy or glitzy about applying the security API. Other people in my group write code that rotates teapots, displays animated images in a window, pops up cool new Windows® 95 controls, sends data back and forth through a MAPI channel, and so on. I always seem to pick the boring stuff. . .
As complex as security under Windows NT may appear, it is rather straightforward on a microscopic level. Each Windows NT domain (or domain group) keeps a database of users that the domain knows about. A user who wishes to work on a computer within a Windows NT domain must identify himself or herself using a user name and a password. As soon as the security system verifies the password against the user database, the user (and every process he or she starts) is associated with an access token, an internal data structure that identifies the user.
The first thing you must know about security under Windows NT is that it is user-centric; that is, each line of code that attempts to access a secured object must be associated with a particular user—a user who must identify himself or herself to the client machine using a password. Each security check is made against the user identification. It is not possible, for example, to write code that prevents Microsoft® Excel from accessing an object. You can secure an object against access from Joe Blow running Microsoft Excel, but if Carla Vip is allowed to access the object, she can do so using Microsoft Excel or any other application she pleases—as long as Carla identifies herself on the client machine, using a password that is known only to her.
The security API, complicated as it may appear, accomplishes only two things:
I am absolutely serious. That is all. Error 5. Access denied. Instead of that error message, the user may see a dialog box that reads something like: "You do not have the privilege to remove the eggs from the carton." Internally, the application that pops up this dialog box probably contains code along the following lines:
if (!RemoveEggsFromCarton() && GetLastError() == ERROR_ACCESS_DENIED)
    AfxMessageBox("You do not have the privilege to remove the eggs from the carton");
Windows NT uses two mechanisms that cause a failed access attempt to return Error 5: verification against rights and verification against privileges. A right pertains to an action on an object, such as the right to suspend a thread or the right to read from a file. Rights are always associated with a certain object and a known user. For example, the right to read from a file must be associated with a file (to which this right is applied) and with a user who does or doesn't have that right. Likewise, the right to suspend a thread is useless unless it is associated with a specific thread and a user.
Privileges are predefined rights that pertain to operations on the system. For example, there are privileges to debug applications, to back up and restore storage devices, and to load drivers. Privileges are centered around users, not objects.
To make the distinction between the two a little bit clearer, let's look at the data structures that implement rights and privileges: A right is specified in a data structure called an access control list, or ACL. An ACL is normally associated with an object. A user is represented by an access token. When a user tries to access a secured object, his or her access token is checked against the object's ACL. The access token contains the unique identifier (the security ID, or SID) that represents the user. Each right in an ACL is associated with a SID; this way, the security subsystem knows the rights associated with each user.
Privileges, on the other hand, are encoded in the access token, so no objects are associated with them. To determine whether a user is allowed to do something that is associated with a privilege, the security subsystem examines the access token.
Furthermore, whereas rights require the specification of an action (the right to do what?—for example, to read a file or to suspend a thread), privileges do not (that is, the user either does or does not hold them). The action that goes with the privilege is implied in the privilege itself.
The reason why privileges are encoded in the access token is that most privileges override security requirements. For example, a user who is allowed to back up a storage device must be able to bypass file security—adding a new access control entry to every single file on a hard drive just to allow the user to touch the file is simply not feasible. Thus, the code to back up a storage device first checks to see whether the user attempting the backup has backup privileges; if so, individual file security is ignored.
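The distinction can be sketched in a few lines of C++. This is purely an illustrative model (the Token structure and the HoldsPrivilege and MayBackUpFile functions are invented for this sketch, not part of the Win32 API), but it captures the point: a privilege check consults only the token, and a held backup privilege short-circuits the per-file check entirely.

```cpp
#include <set>
#include <string>

// Hypothetical model of an access token: the privileges travel with the
// user, not with any object.
struct Token {
    std::string userSid;                 // unique identifier for the user
    std::set<std::string> privileges;    // e.g. "SeBackupPrivilege"
};

// A privilege check consults only the token; no object or ACL is involved.
bool HoldsPrivilege(const Token& t, const std::string& priv) {
    return t.privileges.count(priv) != 0;
}

// Sketch of the backup logic described above: if the backup privilege is
// held, per-file security is bypassed; otherwise the file's own
// protection (modeled here as a single flag) decides.
bool MayBackUpFile(const Token& t, bool fileAclGrantsRead) {
    if (HoldsPrivilege(t, "SeBackupPrivilege"))
        return true;                     // privilege overrides file security
    return fileAclGrantsRead;            // otherwise fall back to the ACL
}
```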
The set of privileges that can be associated with an access token is hardcoded and cannot be extended by an application. Server applications can implement customized security regulations using specific rights and generic mappings.
There are two types of ACLs: discretionary (DACL) and system (SACL). DACLs regulate object access, and SACLs regulate auditing.
In most cases, Error 5 is generated internally by a Windows NT–specific Win32 function called AccessCheck. This function takes as input a user's access token, a desired access mask, and an ACL (this simplification is good enough for now; we'll look into details later). An ACL is basically a list of small data structures (called access control entries, or ACEs), each of which specifies one user or a group of users, a set of rights, and the information on whether the rights are granted or denied. For example, an ACL might have an ACE that reads, "The right to remove eggs from the carton is explicitly denied to the users Elephant and Bozo," followed by an ACE that contains the entry, "The right to remove eggs from the carton is explicitly granted to Betty Crocker and all users in the CHEFS group."
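As a rough illustration, here is a toy version of such a check in C++. Everything in it is invented for this sketch (real ACEs carry SIDs and access masks rather than name strings, and the desired-access mask is omitted to keep the model small), but the first-match-wins traversal is the behavior described above.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Toy stand-in for an ACE: one grant-or-deny decision plus the users or
// groups it names. Real ACEs identify trustees by SID, not by name.
struct Ace {
    bool allowed;                       // access-allowed or access-denied ACE
    std::vector<std::string> trustees;  // users or groups the ACE names
};

// First-match semantics: walk the ACL in order and apply the first ACE
// that names the caller or one of the caller's groups.
bool MyAccessCheck(const std::vector<std::string>& callerIds,  // user + groups
                   const std::vector<Ace>& acl) {
    for (const Ace& ace : acl) {
        for (const std::string& t : ace.trustees) {
            if (std::find(callerIds.begin(), callerIds.end(), t)
                    != callerIds.end())
                return ace.allowed;     // first matching ACE decides
        }
    }
    return false;                       // no match: denied by default
}
```

Fed the egg-carton ACL from above (a deny ACE for Elephant and Bozo followed by a grant ACE for Betty Crocker and the CHEFS group), this function turns Elephant away at the first ACE and grants access to any member of CHEFS.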
ACLs are typically associated with objects and can be built dynamically from your server application. For example, if a file object is associated with an ACL, whenever an application tries to open that file object, the ACL will be consulted to determine whether the user who is running the application is allowed to open the file.
The AccessCheck function is called internally from a number of system functions, for example, CreateFile (when a user attempts to open a file on an NTFS partition or on a named pipe) and OpenFileMapping. However, a Win32 server application can call AccessCheck directly, thereby protecting any object it wishes to.
Note that functions of the security API are called only by server applications; clients never request or employ security directly. All that clients ever see of Windows NT security is Error 5. This allows Windows NT security to work regardless of which software the client is running. All that is required is the server's ability to identify the client in the domain's security database and translate any incoming request from the client to a function call on the server side. That function either calls AccessCheck implicitly, or the server calls AccessCheck explicitly and passes the function's result back to the client only if the check succeeds. (This sounds very abstract, but I do exactly that in the server application later on.)
Part of the confusion in Windows NT security is the fact that calls to AccessCheck can be very obscure. For example, the ability of Windows NT to monitor attempts to install device drivers is a rather murky notion—which "object" does a user try to access when attempting to add a device driver? Where exactly does the system call AccessCheck and display an error to the user if necessary?
In the case of device drivers, the answer is not too difficult: Because device drivers and the system interact through the Registry (Windows NT loads device drivers by traversing a Registry subtree, interpreting each entry, and trying to execute the driver binaries that are specified in the individual Registry keys), the objects that Windows NT protects are Registry keys, which are securable objects under Windows NT. On the Win32 API level, any attempt to manipulate the Registry will be translated into one of the functions that work on the Registry, such as RegOpenKey, which calls AccessCheck internally.
Note that aside from the Registry protection, there is also a security issue with the driver binaries. A frustrated hacker who is denied access to the Registry could still replace an existing driver executable file with a file that clones the existing driver and also has additional functionality—this process does not require access to the Registry, so how can Windows NT prevent this kind of misuse? Rather easily, by requiring that the driver binaries reside on an NTFS partition, and by restricting access to the binaries. This way, an attempt to replace the driver binary (which will inevitably end in a DeleteFile or CreateFile call on the Win32 API level) will be trapped by AccessCheck, and our malicious hacker will be out of luck.
Other system-provided, secured objects may be more difficult to figure out. For example, what exactly prevents a user from accessing a protected network share? What refuses to let you open the service control manager on a remote machine? What is it on the system level that makes Windows NT airtight? Or, even trickier, what makes some of the security functions themselves fail with Error 5, access denied? Imagine what would happen if an application could freely manipulate its access token or call a security function to change the privileges on objects it can see. In that case, it would be easy to bypass security by simply adjusting the entries in the ACLs and tokens. Thus, there must be some kind of "meta-security"; that is, a mechanism to protect the security features themselves from misuse. How is that implemented?
Tune in again next week for our next episode. . . I will discuss the secure architecture of Windows NT itself in a future article. This article and its siblings, "The Guts of Security" and "Security Bits and Pieces," deal only with the easy issue: how to secure your objects in your server application. Back to our main program after these exciting messages from our sponsor.
Note that one of the consequences of a security implementation based on AccessCheck is that security relies heavily on architectures that allow only well-known entry points to secured objects. For example, the Windows version 3.1 family of operating systems includes a large number of different entry points to the file system: int 21h (which interacts with the file system), int 13h (which interacts with the disk device driver), and several types of C run-time and Windows API functions (such as OpenFile and _fopen) that provide file system access. It would make no sense from a security point of view to call a function such as AccessCheck in the internal implementation of OpenFile, when an application could simply call _fopen and bypass file security. As long as all variations of file-open calls are translated into one "secured" call, we are fine; but as soon as one variation performs security checks and another doesn't, we have a security problem.
As a side note, this "open file system" architecture in the 16-bit Windows system is one of the major headaches for vendors who provide security add-ons, such as encryption software and hardware.
When you write a secured server application, it is absolutely necessary that you design your application to be airtight; that is, you must protect all means by which clients may access your sensitive data. In the sample application, I show how remote access from a client to a server may be protected, but if the client and server happen to reside on the same machine, and the shared memory in which the database resides is not protected, the client can "sneak into" the database, thus violating security. One of the challenges of a secured system is to make the sensitive data airtight—this may be a fairly intricate task, as we saw in the case where protecting Registry entries alone is not good enough to protect a machine's device drivers.
With the security API, the system can help you regulate access to almost any kind of object. But what does "access" mean? Isn't the type of access you imply when you talk about database fields something completely different from accessing, say, the message loop of another window?
Exactly, and that is why "access" is a fairly generic term in the security API. Instead of hard-coding access types such as "the right to open, close, read from, or write to an object," access in Windows NT is defined as a collection of bits in a mask. The security subsystem matches the bits in the user's access mask with the bits in the object's access mask. This enables us to design an employee database, for example, in such a way that administrators can read and write information on payroll and benefits; managers can read, but cannot write to, these database fields; and nobody else has read or write access.
By the same token, your application can define its very own access types. For example, if your application wants to secure an OpenGL™ object that can be shared (in the sense that several users could call functions that manipulate the object on the screen), you could simply define unique access rights for all the cool things you can do with OpenGL objects (for example, rotate, stretch, flip, and remove), and assign each user who works on the image a unique subset of those rights.
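A minimal sketch of what such application-defined rights might look like in C++ follows. The IMG_* names and the IsGranted helper are invented for this example (a real Win32 server would place similar bits in the object-specific portion of an ACCESS_MASK), but the bit arithmetic is exactly the matching the text describes.

```cpp
#include <cstdint>

// Hypothetical application-defined rights for a shared drawing object,
// packed into an access mask. These names are invented for this sketch.
const std::uint32_t IMG_ROTATE  = 0x0001;
const std::uint32_t IMG_STRETCH = 0x0002;
const std::uint32_t IMG_FLIP    = 0x0004;
const std::uint32_t IMG_REMOVE  = 0x0008;

// A request succeeds only if every desired bit is present in the mask of
// rights granted to the user.
bool IsGranted(std::uint32_t grantedMask, std::uint32_t desiredMask) {
    return (grantedMask & desiredMask) == desiredMask;
}
```

A user granted IMG_ROTATE | IMG_FLIP, for example, may rotate the image but will draw Error 5 on an attempt to remove it.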
The security API can work with three groups of rights:
So far, this has been a really abstract discussion. Let's look at some hands-on examples to clarify what we talked about.
I argued earlier that servers are the only applications that need to call security API functions, so they can regulate access by client applications. Thus, a sample that demonstrates security requires at least two parts: a server application that allows or disallows a client to remove eggs from the carton, and a client application that attempts to remove the eggs.
Consequently, the sample code that I've provided with this article consists of two parts: a server application (SRVAPP.EXE) and a client (CLIAPP.EXE). To minimize the potential harm, my sample application suite does not work on eggs, but on data structures in memory. Let's see what you need to run these applications.
To run the sample code, you need at least one computer running Windows NT. This will be the server machine, which holds the data to which users are granted or denied access.
This data will be accessed by applications running on a client machine. The client machine can be the same as the server machine, or it can be another machine running either Windows NT or any other operating system that can execute Win32 binaries and log onto a Windows NT domain (for example, Windows for Workgroups 3.11 with Win32s® extensions, or Windows 95). Please see the next section, "The Secret of Logging On," for more information on what happens when a user identifies himself or herself.
The server machine must be able to access security information for the user who logs onto the client machine. If you set up the client and server on the same machine, this requirement is automatically met. If the client is on a different machine, you must log onto the client machine as a user on a domain that the server can access.
Because security is based on users, you should first create a few test user accounts under which you can log onto the client machine. (On a Windows NT Server machine, you can use the User Manager for Domains application to create these accounts. On a Windows NT Workstation machine, the "normal" User Manager will do, as long as you have administrative privileges on the machine.) You should also create a few groups (also with the User Manager), and set up a few assignments between users and groups. For my test purposes, I created two groups, OOZLES and WABBOTS, and four users, Lillo, Gnorps, Alf, and Picard. I assigned users to groups as follows (we will come back to this scenario later on):
In the Windows NT security model, a user is identified by two components: the user name and a domain name. To create a user account on a specific domain, you must have administrative privileges on that domain. You will not need administrative privileges to run the tests, but only to create the user accounts and groups.
If you do not have the appropriate privileges to create new accounts in your domain, you can run both the server and the client applications on machines that are on the same network (for example, your corporate network), using users and user groups that already exist in your network's domain.
As I mentioned before, the client application does not have anything to do with security. It simply tries to access shareable objects that the server owns; all the code that manages security resides in the server. All the client will notice in terms of security is that it may be denied certain operations on the shared objects—some API functions that the client calls may simply return Error 5 (access denied) instead of succeeding.
Now let me clarify what "logging on as" means. If your server machine is running the Windows NT Server operating system, you can use the User Manager for Domains application from the Administrative Tools Group to create user accounts. You can then configure the client machine to belong to the domain that Windows NT Server administers. A user who wants to log onto the client machine must identify himself or herself through an account that the server machine administers.
However, if the server is running the Windows NT Workstation operating system, a user on the client machine cannot use an account on the server. Note that if the user who is logged onto the server machine has administrative privileges, he or she can maintain user accounts on that local machine—in that respect, a Windows NT workstation constitutes a stand-alone domain that follows the same security rules as a multi-machine domain. The only difference is that client machines cannot register themselves as belonging to the domain defined by a workstation.
However, you can still run the sample application suite, even if you are not running Windows NT Server. Do the following:
net use drive: \\server\share /user:server\account *
where drive is the drive letter you wish to use for the connection, server is the name of the server machine, share is the share that contains CLIAPP.EXE, and account is an account on the server.
For example, if the server machine is called SLACKER, and you create the above user accounts on that machine's domain, the logon would look something like this on the client side:
net use K: \\slacker\database /user:slacker\lillo *
Supply the password that the server machine assigned to the user account, and you can then connect to the server machine and run the client application from the share. The server will treat the connection as if the specified user had logged onto the server itself. This is possible through a process known as impersonation: When the client connects to the share where CLIAPP.EXE resides, the server temporarily assumes the identity of the connecting user.
The sample application suite, which consists of a server application and a client application, demonstrates how to secure named pipes, mutexes, file mappings, and private objects.
The server application is a little database server. Bring up SRVAPP.EXE on the server machine and experiment with the commands in the Database menu: You can insert records (in this case, a record simply consists of two integer values), view the contents of the database, and remove records. No magic to that whatsoever. The main purpose of the server application is to provide a shell that demonstrates how to regulate access to the database from a client application—that is, the server accepts database requests from remote clients, decides whether the access is allowed (according to security requirements that the user of the server application specifies), and, depending on the outcome, grants or denies the database requests to the client.
Note that the database software is homegrown; that is, I supply all the logic and code to maintain the database. In this age of automation and powerful database controls, it makes little sense to reinvent the wheel this way, but fortunately, the encapsulation mechanisms of C++ allow easy replacement of the database. For future enhancements of the server application, I plan to replace the homegrown database with an existing database control (for example, an OLE automation object provided by a database server application).
You can use two techniques to allow a client application to utilize the server's database:
See the third article in this series, "Security Bits and Pieces," for more information on these techniques.
On the client machine, follow these steps:
Note Here lurks an opportunity for confusion. I mentioned earlier that security is user-based, but a named pipe is identified by the name of the machine on which the pipe was created. Thus, if the name of the server machine is SLACKER and you log onto the server machine as Gnorps, all security checks are performed based on the identification of Gnorps, but a named pipe is created using the name SLACKER. (Please see the "Garden Hoses at Work" article in the MSDN Library for more information on named pipes.)
The client should now display a line in its main window saying that the named pipe could not be opened due to error—"access denied." Hah! We got 'im!
Note You can change the permissions on the pipe while the pipe is connected, but if you do that, you will not see any change in behavior until the pipe is disconnected and reconnected to a new client. The security system performs the access check whenever a client attempts to open the pipe. If the opening is successful, the access rights do not change until the pipe disconnects. For example, if you decide that a certain user (who currently has read/write permissions on the pipe) should only have read permissions on the pipe, you can change the permissions whenever you like. However, as long as that user has successfully opened the client end of the pipe for reading and writing, he or she can read from, and write to, the pipe until the pipe is disconnected. That is why you should have the client disconnect every time before you make security changes.
Take some time to play with the rights to your heart's delight—the Permissions dialog box lets you grant, deny, or revoke previously granted or denied rights. Try to get a feeling for how group rights relate to user rights; for example, what happens if access is denied to Lillo but granted to WABBOTS when the client is logged on as Alf? What happens if you have a more complex hierarchy of user groups (say, a three-level hierarchy) in which individual users are excluded from a high-level group, but included in a lower-level group? (For example, if both OOZLES and WABBOTS belong to the BALLOONS hyper-group, what if Lillo is in BALLOONS and WABBOTS, but not in OOZLES?)
What happens in the case of ambiguous rights? To test this, log off the client machine and log on as Alf, who, as you may recall, is a member of both OOZLES and WABBOTS. Grant WABBOTS access to the named pipe, and deny access to OOZLES. When the client tries to access the named pipe now, it should see the dreaded "access denied" message.
Does that mean that denying access is stronger than granting access? After all, as a member of both WABBOTS and OOZLES, Alf is both granted and denied access, so why is the client denied access when it tries to access the named pipe? Even stranger, if we now explicitly grant access to Alf, the client will still be denied access! How come?
There is a reason for everything. When Alf tries to connect to the named pipe, Alf's identity is matched against all access rights in order, and the first right that contains Alf is applied. The code that I provide deliberately adds all access-deny rights to the beginning of the list. When the code that tries to open the named pipe traverses the list of assigned rights, the first access that applies to Alf is the one that reads "access denied to OOZLES." As soon as the security system encounters this entry, the security check function returns immediately, regardless of what follows the entry in the security list. If the entry "access granted to WABBOTS" appeared before the "access denied to OOZLES" entry, the security system would return with an access grant.
So why does my application code stuff all of the denied rights in front of the granted rights? Remember that for each object that is associated with a set of rights, a user who has not explicitly been granted access is automatically denied it. Thus, we could simply grant access to the users who we believe should access the object, and let everybody else die in the default case. So why do we need access-denied entries at all?
Very simple: to refine rights we have granted before. Let us assume that we want everyone in the OOZLES group except Alf to be able to access the named pipe. Because OOZLES is potentially a fairly large group, we do not want to enumerate all users within OOZLES and grant each one (except Alf) access. An easier way to exclude Alf would be to deny access to Alf explicitly while granting access to OOZLES. That is where it makes perfect sense to put all access-denied elements in front of the list: While traversing the list of rights, the system would find Alf's denied entry first and return, whereas for every other user in OOZLES, the system would scan the list until it found the access-granted element.
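The traversal rule is easy to model. In the hypothetical C++ sketch below (invented types and names, not the real API), the evaluation stops at the first entry that mentions the caller, which yields exactly the refinement just described.

```cpp
#include <string>
#include <vector>

// Each entry grants or denies access to one user or group; the list is
// evaluated front to back, and the first entry that matches the caller wins.
struct Entry {
    bool grant;            // true = access granted, false = access denied
    std::string trustee;   // the user or group the entry names
};

bool CheckAccess(const std::vector<std::string>& callerIds,  // user + groups
                 const std::vector<Entry>& list) {
    for (const Entry& e : list)
        for (const std::string& id : callerIds)
            if (id == e.trustee)
                return e.grant;   // first match decides; stop immediately
    return false;                 // no match: denied by default
}
```

With the list { deny Alf, grant OOZLES }, Alf (a member of OOZLES) is turned away at the first entry, while any other OOZLES member falls through to the grant. Reverse the two entries and Alf is granted access through his OOZLES membership, which is why the code adds the deny entries first.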
When you have tired of toying with this kind of thing, you are ready for the next step. Go back to a security assignment in which the client can successfully open a connection with the server. On the server side, add a few records to the database, then select View Contents from the server's Database menu. You should now see a few entries in the server's window reading something like, "Element x has values y and z."
Now do the same thing on the client side: Select View Contents from the client's Remote Access menu. You should now see a number of messages on the server side reading, "Remote retrieve succeeded," and on the client side, you should see the same messages you saw earlier on the server side: "Element x has values y and z."
When I got to this point in my application design, I was pretty happy: I had designed a complete little RPC-based database server and client application in which the database could be accessed from both sides over a network. However, no security was yet involved (except for the named-pipe protection), so I added what you can see next.
Try to add a record from the client, using the Add Record command from the Remote Access menu. On the server side, you will see, "Remote insert failed—propagating error Access Denied," and on the client side, you will simply see the message, "Could not insert element—access denied."
Why is that? Simple: I have designed the application such that the client can always read from the database, but not write to it unless that privilege is explicitly granted. Both inserting and removing records is considered writing to the database, so the client can enumerate the contents of the database (read from it), but cannot add or remove contents (write to it).
You have probably already spotted the Database command in the server's Permissions menu. Now is the time to use it: Bring up the Permissions/Database dialog box and use it exactly as you used the Permissions/Named Pipe dialog box. Grant access to the user in whose context the client application runs, and voilà! The next time the client application attempts to add a record, you will see the message, "Remote insert succeeded!" on the server and client sides, and viewing the contents of the database on either side will give you the new record. The same thing works with record deletions.
Phew! Now we have not only a client-server database system, but a secured client-server database system, in which the server can restrict access to the database to known and trusted users! Doesn't that cover all we wanted to demonstrate?
Yes and no. I made a point earlier that security must be airtight, and there is one more thing you should do before turning off your computer.
Shut down the client application, and restart it on the server machine. Yes, you heard right: For this test run, we will execute both the client and server applications on the same machine. It should come as no surprise to you that the application suite will behave exactly as it did before—with one little exception: When you select Open Shared Database from the client's Local Access menu, you will see the error message, "Access denied." (Attempting this option when the client and server are executing on different machines returns the message, "The filename is incorrect.") We have seen that one before, right? And the last time we saw this, we modified the Permissions dialog boxes, so let's choose the Shared File command from the server's Permissions menu and grant access to the user under whose name we logged on.
Now selecting the Open Shared Database command should succeed. Miraculously, the Add a Record, Remove a Record, and View Contents commands in the client's Local Access menu are not only enabled, but invoking them always succeeds, regardless of the permissions! Whew! To confirm that something weird is going on, grant the current user access to the named pipe (just as you did when the client and server executed on different machines), open a connection, and verify that the client is now denied remote access, but can party on the database via the shared file! Security violation? Anarchy?
Not really. This little experiment demonstrates that a server application must be very careful in restricting access to its sensitive data through all paths available to the client. In our little case, we are fine because the client cannot access the shared memory unless it is granted explicit permission. However, if the code had not secured the shared memory area, it would have given the client a "trap door" for accessing the data, although the "proper" (in this case remote) access to the database was appropriately protected. Making a server application airtight can be one of the major challenges in security design.
We have established a context for Windows NT security programming and illustrated the concepts with a "black box" sample application suite. You should now have a conceptual idea of how security programming for Windows NT works. I'm sure you are dying to figure out how the server was coded to incorporate all the security magic. (Yeah, right. . . As I write that, I look out the window and wish I had brought in my motorbike today.) If so, you should proceed with the article "The Guts of Security," which goes into all of the gory details of security programming.