High Availability for Microsoft Site Server 3.0

April 1999

Microsoft Corporation

This paper leads you through the process of building a highly available clustered data center. The examples presented here use the Microsoft® Cluster Server and the Microsoft Windows Load Balancing Service (WLBS) to achieve high availability.

For information about hardware configurations, see Appendix A, Tested Hardware.

This document is not meant to replace any product-specific manual or documentation. Wherever doubt or inconsistencies arise, the product-specific manual and documentation take precedence. Other manuals that may be of assistance include, but are not limited to, the following:

Example Data Center System Layout and Topologies

This document will lead you through the construction of an example configuration. All of the procedures refer to this specific configuration. If you plan to build a system of a different configuration, you can extrapolate from this example. If you build our configuration but use different names and IP addresses, keep a list that maps your names to the names used here so you can avoid confusion as you proceed.

The cluster is configured with two networks: a public client network and a private local area network (LAN) for cluster communications.

A cluster is a group of independent computers that work together as a single system. This arrangement allows users to access and manage the group of computers as if it were a single computer. In a cluster, all the computers are grouped under a common name, the cluster name (virtual server name), which is used for accessing and managing the cluster. In this example, two computers are combined into a cluster.

Each computer in the cluster is called a node. In this example, there are two nodes: ClusterNodeA and ClusterNodeB. These nodes have a network link between them that they use to communicate with each other. Each node must also have a connection to a shared Small Computer System Interface (SCSI) hard drive. It is on this shared drive that the shared cluster data is stored.

Cluster Terminology

Resource

A resource is a physical or logical entity, such as a file share, which is managed by the cluster service. Resources provide services to clients. A resource is the basic unit managed by the cluster service and can run on only a single node in a cluster at one time.

Dependency

A dependency is a reliance between two resources that makes it necessary for both resources to run on the same node. For example, a file share resource depends on a disk resource with a folder that can be shared.

Group

A group is a collection of resources that are handled as a single entity for configuration and management purposes. If a resource depends on another resource, both of those resources must be members of the same group. In the example of the file share resource, the group that the file share belongs to must also contain the disk resource. All resources within a group must be online on the same node in the cluster.

Failover

A failover is the process of moving a resource or a group of resources from one node to another when the first node fails. For example, in a cluster on which Microsoft® Internet Information Server (IIS) 4.0 is running on ClusterNodeA and ClusterNodeA fails, IIS 4.0 will fail over to ClusterNodeB.

Failback

A failback is the process of returning a resource or group of resources to the node on which it was running before a failover occurred. In the previous example, when ClusterNodeA comes back online, IIS fails back to ClusterNodeA.

Quorum resource

A quorum resource is a resource that stores cluster management data, such as recovery logs for changes made to cluster data. It is accessible by all the nodes of the cluster. Cluster Server requires a shared SCSI drive among the nodes of the cluster. A SCSI drive is the default quorum resource for a cluster.

Virtual server

A virtual server is a server name that represents the cluster. A virtual server consists of two cluster resources: an IP address and a network name. A fixed IP address is required to create an IP address resource. The network name must be a unique NetBIOS name. You can have multiple virtual servers on one cluster.

Shared drive

A shared drive is required for nodes in a cluster to communicate for synchronization purposes and to allow access to all data necessary for failover to occur successfully. At least one shared drive is required.
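The relationships among these terms can be sketched in a short illustrative model. This is hypothetical Python, not part of any Cluster Server API: resources carry dependencies, dependent resources must share a group, and a whole group fails over as one unit.

```python
# Illustrative model of cluster terminology; all names are hypothetical
# and do not correspond to Cluster Server's actual programming interface.

class Resource:
    """A physical or logical entity managed by the cluster service."""
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on  # another Resource in the same group

class Group:
    """A collection of resources handled as a single entity."""
    def __init__(self, name, resources, owner):
        # A resource and the resource it depends on must be in the same group.
        names = {r.name for r in resources}
        for r in resources:
            if r.depends_on and r.depends_on.name not in names:
                raise ValueError(f"{r.name} depends on a resource outside the group")
        self.name, self.resources, self.owner = name, resources, owner

    def failover(self, new_node):
        # All resources in the group move to the surviving node together.
        self.owner = new_node

disk = Resource("ClusDisk1")
share = Resource("FileShare", depends_on=disk)   # file share needs the disk
group = Group("Disk Group 1", [disk, share], owner="ClusterNodeA")

group.failover("ClusterNodeB")   # ClusterNodeA fails
print(group.owner)               # ClusterNodeB: disk and share moved together
```

A failback is simply the reverse call once the preferred node returns, which is what the Allow failback setting automates.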

Structure of Our Clustering Example

We will describe the construction of our example configuration in terms of three tiers.

  1. In the SQL tier, you will set up a cluster that consists of two nodes, each with SQL Server 6.5, Enterprise Edition, installed. In addition, you will set up a SQL Server database for the Membership Directory Service.

  2. In the LDAP tier, you will install Microsoft Windows Load Balancing Service, and then install the Personalization and Membership feature of Microsoft Site Server 3.0 along with Site Server Service Pack 1. You will then create a new membership instance and connect it to the existing Membership Directory. Finally, you will create a new local LDAP service and assign it a port number.

  3. In the Application tier, you will install and configure servers for mail, chat, news, and Web services.

Software Requirements

Windows NT Service Pack 4

You must have Microsoft® Windows NT® Service Pack 4 installed on all computers in the system. Be sure to investigate any security-related hotfixes released after the Service Pack, particularly those relevant to commercial Internet deployments.

Install the Service Pack before you install any other software.

Windows NT Option Pack

You must install Windows NT 4.0 Option Pack, along with Microsoft Internet Explorer 4.01, on all servers running Site Server components. The minimum components required for the Option Pack depend on the services running on a particular physical server. The following table gives the requirements for each server.

Server                                       Option Pack components
Servers running any Site Server components   Web server with Index Server

Additional Option Pack components may be required, depending on the design of your data center.

SQL Tier

In this section, you will set up a cluster that consists of two nodes, each with Microsoft® SQL Server™ 6.5, Enterprise Edition, installed. In addition, you will set up a SQL Server database for the Membership Directory Service.

Setting up Your Cluster

In this section, you will set up a cluster that consists of two nodes, ClusterNodeA and ClusterNodeB — each node computer running Microsoft® Windows NT®, Enterprise Edition, with two Ethernet cards. You will configure one shared drive for the cluster.

Note   Windows NT, Enterprise Edition, running in a cluster environment requires that, at startup, each cluster node can reach a domain controller and validate the cluster service domain account before the cluster startup timeout expires.

Before you begin

  1. Install Windows NT 4.0, Enterprise Edition, on each of the two computers that will serve as cluster nodes.

  2. Install two Ethernet adapters in each computer.

  3. Assign IP addresses for the Ethernet adapters. In our example, the public network adapter for ClusterNodeA uses the IP address 192.168.107.51; its private network adapter uses 10.0.5.1, a private (RFC 1918) address that is routed only on the internal cluster network. The public network adapter for ClusterNodeB uses 192.168.107.52; its private network adapter uses 10.0.5.2.

  4. Configure the cluster’s shared hard drive for RAID 5.

  5. Use the Disk Administrator to assign a drive letter to the shared disk and format it as NTFS. Name the shared disk ClusDisk1; assign ClusDisk1 the drive letter I.
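As a sanity check on step 3, the two adapters in each node must sit on different subnets, and the addresses reserved later for the cluster and the virtual SQL server must fall inside the public subnet. A quick way to verify an address plan like this example's is a sketch using Python's standard ipaddress module (the addresses below are the ones from this example):

```python
import ipaddress

# The two cluster networks from this example, each with mask 255.255.255.0.
public = ipaddress.ip_network("192.168.107.0/24")
private = ipaddress.ip_network("10.0.5.0/24")

# The public and private cluster networks must not overlap.
assert not public.overlaps(private)

# Each node's adapters belong to the expected subnets.
assert ipaddress.ip_address("192.168.107.51") in public  # ClusterNodeA, public
assert ipaddress.ip_address("10.0.5.1") in private       # ClusterNodeA, private
assert ipaddress.ip_address("192.168.107.52") in public  # ClusterNodeB, public
assert ipaddress.ip_address("10.0.5.2") in private       # ClusterNodeB, private

# Addresses assigned later to the cluster and the virtual SQL server
# must be valid on the public (client) subnet.
for vip in ("192.168.107.50", "192.168.107.53"):
    assert ipaddress.ip_address(vip) in public

print("address plan is consistent")
```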

In this portion of your installation, you will complete the following tasks:

  1. Install Cluster Server on both nodes.

  2. Set the preferred owners and failback for each cluster resource.

  3. Set up and configure SQL Server, Enterprise Edition, for cluster use.

  4. Test the failover of your cluster.

  5. Install Hotfix 318 for SQL Server, Enterprise Edition.

  6. Set up the Membership Directory Server.

To complete Cluster Server Setup for ClusterNodeA

  1. Close any open applications.

  2. From the Start menu, point to Programs, point to Administrative Tools, and then choose Enterprise Edition Installer. In the next screen, click Continue.

  3. Under Select the components to install, select Microsoft Cluster Server, and then click Start Installation.

  4. When the Files needed for Microsoft Cluster Server dialog box appears, insert the Windows NT Server, Enterprise Edition, CD 2 into the appropriate drive, and then click Next.

  5. In the Microsoft Cluster Server Setup dialog box, click Next.

    If you have any open applications running, close them before you click Next.

  6. You will be asked to ensure that your hardware has been tested for compatibility with Cluster Server. Click I Agree, and then click Next.

    For information about compatible hardware, see http://www.microsoft.com/ntserver/info/hwcompatibility.htm.

  7. Under Select the operation to perform, select Form a new cluster, and then click Next.

  8. Type the name for the cluster you are forming. In this example, the name of the cluster is SSCE_CLUSTER.

  9. The next screen shows the folder where your cluster files will be stored. Click Next.

  10. In the next screen, type the user name, password, and domain (CLUSTER in this example) for your administrator account.

  11. In the next screen, disks that have been set up as shared appear under Shared cluster disks (in this example, ClusDisk1). To accept this shared cluster drive, click Next.

  12. Select ClusDisk1, which will store your permanent cluster files, and then click Next. The default is the first shared drive on the system; in this example, drive I.

  13. To specify the network you will use for internal cluster communications, click Next.

  14. In the next screen, in the Network Name box type PrivateNet, and then select Enable for cluster use. Under Enable for cluster use, select Use only for internal cluster communications. The Adapter Name and IP Address are filled in automatically. Click Next.

  15. Next a similar screen appears. In the Network Name box type PublicNet, and then select Enable for cluster use. Under Enable for cluster use, select Use for all communications. The Adapter Name and IP Address are filled in automatically. Click Next.

  16. In the next screen the names of the networks you have set up appear under Networks Available for Internal Cluster Communication. The network that appears first in the list has the highest priority. You can use the Up and Down buttons to change the priority.

  17. In the next screen, type the IP address for the cluster (in this example, 192.168.107.50). The Subnet Mask will appear when you have typed the IP address. Select the name of the client network from the Network list (in this example, PublicNet), and then click Next.

    Note   This IP address must not already be in use anywhere on the network and must not be assigned to any network adapter; it must be a valid, unused address on the client network.

  18. Click Finish to complete installation.

  19. When prompted, restart your computer.

To complete Cluster Server Setup for ClusterNodeB

  1. Close any open applications.

  2. From the Start menu, point to Programs, point to Administrative Tools, and then choose Enterprise Edition Installer.

  3. Under Select the components to install, select Microsoft Cluster Server, and then click Start Installation.

  4. When the Files needed for Microsoft Cluster Server dialog box appears, insert the Windows NT Server, Enterprise Edition, CD 2 into the appropriate drive, and then click Next.

  5. In the Microsoft Cluster Server Setup dialog box, click Next.

    If you have any open applications running, close them before you click Next.

  6. You will be asked to ensure that your hardware has been tested for compatibility with Cluster Server. Click I Agree, and then click Next.

    For information about compatible hardware, see http://www.microsoft.com/ntserver/info/hwcompatibility.htm.

  7. In the Microsoft Cluster Server Setup dialog box, under Select the operation to perform, select Join an existing cluster, and then click Next.

  8. Type the name of the cluster you formed earlier: SSCE_CLUSTER.

  9. The next screen shows the folder where your cluster files will be stored. Click Next.

  10. Type the user name, password, and domain for your administrator account (CLUSTER in this example).

  11. Click Finish to complete installation.

  12. When prompted, restart your computer.

Configuring Your Cluster Resources

In this section, you will configure your cluster resources to optimize the usage of those resources.

To set the preferred owner and failback for your cluster disk group

  1. In the scope pane of the Cluster Administrator, expand ClusterNodeA, and then select Active Groups. The Active Groups folder contains all the resources for the node. The resources for ClusterNodeA appear in the results pane.

  2. In the results pane, right-click ClusDisk1, and then click Properties.

  3. On the General tab of the ClusDisk1 Properties dialog box, click Modify.

  4. In the Modify Preferred Owners dialog box, under Nodes select ClusterNodeA, and then click the arrow pointing to the right. ClusterNodeA moves to the Preferred owners. Click OK.

  5. In the Disk Group 1 Properties dialog box, click Failback.

  6. Select Allow failback and Immediately, and then click OK.

Setting Up and Configuring SQL Server, Enterprise Edition, for Cluster Use

To ensure high availability of your cluster, you must install Microsoft® SQL Server 6.5, Enterprise Edition, on the shared drive you set up for the cluster, not on the local system disk of each cluster node. Installing onto the shared drive is required to run SQL Server as a cluster-aware symmetric virtual server (SVS).

To set up SQL Server, Enterprise Edition, on ClusterNodeA

  1. Insert the CD-ROM for Microsoft SQL Server 6.5, Enterprise Edition, into the appropriate drive, navigate to \i386 (for Intel), and then double-click Setup.exe. In the Welcome dialog box, click Continue.

  2. In the Enter Name and Organization dialog box, type your name, company, and the product ID, and then click Continue; confirm your entries in the Verify Name and Organization dialog box, and then click Continue.

  3. In the Microsoft SQL Server 6.5 – Options dialog box, select Install SQL Server and Utilities, and then click Continue. In the Choose Licensing Mode dialog box, select an appropriate licensing mode, click Continue; in the subsequent dialog box click I agree and then click OK.

  4. In the SQL Server Installation Path dialog box, select drive I as the shared disk owned by ClusterNodeA. \MSSQL appears in the Directory box. Click Continue.

  5. In the MASTER Device Creation dialog box, select drive I, and type 50 for the MASTER device size. \MSSQL\DATA\MASTER.DAT appears in the Directory box. Click Continue.

    Note   For this example, 50 MB is adequate for MASTER.DAT. Large-scale production sites may need a larger MASTER.DAT.

  6. In the SQL Server Books Online dialog box, select Do not Install, and then click Continue.

  7. In the Installation Options dialog box, select the following settings:
    Character Set 850 Multilingual
    Sort Order Alternate Dictionary, case insensitive
    Network Support Named Pipes, TCP/IP

    Clear Auto Start SQL Server at boot time and Auto Start SQL Executive at boot time, and then click Continue.

  8. In the SQL Executive Log On Account dialog box, select Install the SQL Executive service to log on to Windows NT as, type your administrator account name and password, type your password again in the Confirm Pwd box, and then click Continue.

    Note   In a production environment, this account should not be the Administrator account. Create a specific services account with administrative privileges.

  9. In the TCP/IP Socket Number dialog box, click Continue to accept the default (1433). Do not change the TCP/IP port number.

  10. SQL Server Setup finishes the basic SQL Server, Enterprise Edition, physical server installation. When prompted, exit to Windows NT.
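Once SQL Server is running, you can confirm from another computer that it is listening on the TCP port chosen in step 9. A minimal reachability check (an illustrative Python sketch; the host name and port shown are this example's values):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In this example, SQL Server listens on the default port 1433:
# port_open("ClusterNodeA", 1433)
```

This checks only that something is accepting connections on the port; it does not validate the SQL Server login path.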

To set up SQL Server, Enterprise Edition, to operate in Cluster mode on ClusterNodeA

  1. In the Windows NT Explorer, from the CD, navigate to \i386\Cluster, and then double-click SQL Cluster Setup.exe.

  2. Click Next.

  3. In the SQL Cluster Setup – Options dialog box, select Install virtual server, and then click Next.

  4. In the SQL Cluster Setup – SA Password dialog box, type the SA password, and then click Next. For this example, the SA password is null; in a production environment, use a strong password.

  5. In the SQL Cluster Setup - SQL Executive service account dialog box, the name of the SQL Executive user account that you set up earlier for this node appears. Type the password for this account, and then click Next.

  6. In the SQL Cluster Setup – IP Address dialog box type the IP address and subnet mask for the cluster. For this example, the IP address is 192.168.107.53, and the subnet mask is 255.255.255.0. Click Next.

    Note   The subnet mask varies according to your network configuration.

  7. In the SQL Cluster Setup – Server Name dialog box type the name for this virtual SQL server, and then click Next. This example uses the name SQLSVS. This will be the name that clients use to connect to the virtual server.

  8. In the SQL Cluster Setup - Finish dialog box, click Exit.

To configure SQL cluster resources on ClusterNodeA

  1. Click Start, point to Programs, point to Administrative Tools, and then click Cluster Administrator.

  2. In the Open Connection to Cluster dialog box, type the name of your cluster, SSCE_CLUSTER in this example, and then click Open.

  3. In the scope pane of the Cluster Administrator, select the Groups folder.

  4. In the results pane of the Cluster Administrator, right-click SQLSVS, and then choose Properties from the shortcut menu.

  5. On the General tab, click Modify.

  6. In the Modify Preferred Owners dialog box, select ClusterNodeA and then click the arrow pointing to the right. ClusterNodeA appears in the column under Preferred owners. Click OK.

  7. On the Failback tab, select Allow failback and Immediately, and then click OK.

    Note   If you do not select Allow failback, the system administrator will need to manually move a failed SVS back to its preferred owner when the cluster returns to normal operations.

To bring your SVS online

  1. In the scope pane of the Cluster Administrator, select the Groups folder.

  2. In the results pane of the Cluster Administrator, right-click SQLSVS, and then choose Bring Online from the shortcut menu.

Testing Your Cluster

To verify that your cluster is set for failover, shut down one node to check that your SVS fails over to the remaining node. Then restart that node and check that the SVS fails back to the original node.

To test failover

  1. Shut down ClusterNodeA to cause it to fail.

  2. Open the Cluster Administrator on ClusterNodeB. In the scope pane of the Cluster Administrator, select Groups. SQLSVS should now appear to belong to ClusterNodeB.

To test failback

  1. Restart ClusterNodeA so that it comes back online.

  2. In the scope pane of the Cluster Administrator, you should see that SQLSVS now appears to belong to ClusterNodeA.
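During the brief window while SQLSVS moves between nodes, existing client connections are dropped; well-behaved clients simply retry against the virtual server name until the group is online again. A sketch of such a retry loop (illustrative Python; connect_fn is a hypothetical stand-in for whatever connect call your client library provides):

```python
import time

def connect_with_retry(connect_fn, attempts=5, delay=1.0):
    """Retry a connection while a failover is in progress.

    connect_fn: zero-argument callable that returns a connection object
    or raises ConnectionError on failure (a stand-in for a real client
    library's connect call).
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as e:
            last_error = e        # node is mid-failover; wait and retry
            time.sleep(delay)
    raise last_error
```

Because clients address the virtual server SQLSVS rather than a physical node, the same call succeeds regardless of which node currently owns the group.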

Installing Hotfixes

When you use Microsoft® SQL Server 6.5, Enterprise Edition, in conjunction with Site Server 3.0 and Cluster Server, you must install Hotfix 318 on each node.

Note   SQL Server Service Pack 4 is automatically included in SQL Server 6.5, Enterprise Edition. You need not install it separately.

Hotfix 318 is available from:

This hotfix contains the following files:

To install Hotfix 318 for SQL Server 6.5, Enterprise Edition

  1. Download the appropriate file to a directory on ClusterNodeA.

  2. Open a command prompt window and navigate to the directory in which you saved the hotfix file.

  3. Type the following command to extract the hotfix files (the -s switch supplies the unlock key):
    Sql318i.exe -s6.50.318
    

    Note   The README.TXT contains incorrect instructions for SQL Server, Enterprise Edition. Follow these instructions instead.

  4. Make a backup copy of the existing sqlservr.exe, opends60.dll, sqlservr.dbg, and opends60.dbg files from your SQL Server installation (in this example, under I:\MSSQL\BINN).

  5. In the scope pane of the Cluster Administrator select the Groups folder.

  6. In the results pane of the Cluster Administrator right-click SQLSVS, and then choose Take Offline from the shortcut menu.

    Note   The Cluster Administrator error message that you receive is normal. Click OK and continue with your procedure.

  7. Copy the sqlservr.exe, opends60.dll, sqlservr.dbg and opends60.dbg files from the hotfix self-extracting archive into the directories specified in step 4.

  8. Manually run apfsql.exe <path to binn directory of SQL Server, Enterprise Edition, installation> — in this example, apfsql.exe I:\mssql\binn. This applet is part of the sql65\i386\cluster directory on the CD-ROM.

  9. In the scope pane of the Cluster Administrator select the Groups folder.

  10. In the results pane of the Cluster Administrator right-click SQLSVS, and then choose Bring Online from the shortcut menu.

  11. Run the msg4410.sql script with ISQL.EXE as follows from a command prompt on the server: isql -Usa -P<sa passwd> -imsg4410.sql.

    Note   This command is specific to our example; -imsg4410.sql assumes the script is in the current directory. Supply the full path to msg4410.sql if it is stored elsewhere.

    The following is the text that will appear if the script runs correctly:

    1> 2> Configuration option changed. Run the RECONFIGURE command to install.
    2> 1> 2> (0 rows affected)
    2> 3> (1 row affected)
    2> Configuration option changed. Run the RECONFIGURE command to install.
    1> 2> 1>


To verify that you have applied this hotfix correctly

  1. Click Start, point to Programs, point to Microsoft SQL Server 6.5, and then click SQL Enterprise Manager.

  2. In the SQL Enterprise Manager, from the Tools menu, choose SQL Query Tool.

  3. In the query pane, type SELECT @@VERSION.

  4. In the results pane, the version string appears. On Intel® computers, with this hotfix applied it should read:

    Microsoft SQL Server 6.50 - 6.50.318 (Intel X86)

            Apr 30 1998 22:45:41

            Copyright (c) 1988-1997 Microsoft Corporation
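If you script this verification, the build number embedded in the @@VERSION string can be compared against the required hotfix level. A sketch (illustrative Python; the regular expression simply extracts the first x.yy.zzz build number from a banner like the one shown above):

```python
import re

def build_at_least(version_string, required=(6, 50, 318)):
    """Extract 'x.yy.zzz' from a SQL Server @@VERSION string and
    report whether it is at least the required hotfix build."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", version_string)
    if not m:
        raise ValueError("no build number found in version string")
    return tuple(int(g) for g in m.groups()) >= required

banner = "Microsoft SQL Server 6.50 - 6.50.318 (Intel X86)"
print(build_at_least(banner))   # True: Hotfix 318 (or later) is applied
```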

Setting up SQL Server Databases for Membership Directory Server

When you have configured Microsoft® SQL Server 6.5, Enterprise Edition, for cluster use, access to the computer running SQL Server is from the virtual SQL server (in this example SQLSVS). Do not use the physical server names (ClusterNodeA and ClusterNodeB) after SQL Server has been configured for cluster use.

Note   See the white paper Microsoft Site Server 3.0 Membership Directory Configuration and Tuning Guidelines to get basic SQL Server tuning recommendations.

To register your virtual SQL server in Enterprise Manager

  1. Click Start, point to Programs, point to Microsoft SQL Server 6.5, and then click SQL Enterprise Manager.

  2. In the Server Manager, right-click SQL 6.5, and click Register Server from the shortcut menu.

  3. In the Register Server dialog box, in the Server box, type SQLSVS. Under Login Information, select Use Standard Security. Type sa as your login ID, and then type your SA password in the Password box. (In this example, the password is null.) Select Display Server Status in Server Manager, click Register, and then click Close.

To expand tempdb

The default tempdb is not large enough for the purposes of this example. In this procedure, you will increase the size of tempdb to 50 MB.

  1. To create a new device into which to expand tempdb, in the Server Manager right-click Database Devices, and then choose New Device from the shortcut menu.

  2. In the New Database Device dialog box, in the Name box, type a name for your database device. For this example, the name is TempdbX. In the Location box, select I; in the Size box type 50, and then click Create Now.

  3. In the Server Manager, expand Databases, right-click tempdb, and then choose Edit from the shortcut menu.

  4. On the Database tab of the Edit Database dialog box, click Expand.

  5. In the Expand Database dialog box, in the Database Device box, select TempdbX as the device to expand tempdb into. Type 50 for the size of the database device, click Expand Now, and then click OK.

    Note   For this example, 50 MB is the size for tempdb; in a production environment, this value may be larger.

To create a new membership database

In this example, one database is created on the shared drive. This database will serve as the Membership Directory.

  1. In the Server Manager, right-click Databases, and then choose New Database from the shortcut menu.

  2. In the New Database dialog box, in the Name box, type a name for the database; for this example, use memDS. In the Data Device box, select <new>.

  3. In the New Database Device dialog box, in the Name box, type memDevice; in the Location box, select I; in the Size box, type 100; and then click Create Now.

  4. In the New Database dialog box, in the Log Device box, select <new>.

  5. In the New Database Device dialog box, in the Name box, type memLogDevice; in the Location box, select I; in the Size box, type 30; and then click Create Now.

  6. In the New Database dialog box, click Create Now.

To set the Truncate on Checkpoint option

  1. In the Server Manager, right-click memDS, and then choose Edit from the shortcut menu.

  2. In the Options tab of the Edit Database dialog box, select Truncate Log on Checkpoint, and then click OK.

Backing Up the Membership Database

Any high availability data center strategy needs to consider the possibility of data corruption or loss. If data is corrupted or lost, you may experience problems such as failed user authentication. Your data protection strategy can be as simple as performing full backups of your primary database, or you can gain additional protection by also applying transaction logs to the backup. To do this, create a backup SQL server from a dump of the primary SQL server databases, periodically check the source databases for corruption by running DBCC, and then either apply incremental transaction-log backups or take fresh full backups of the database.

Maintaining a backup server provides a compromise among minimal downtime, data integrity, and reasonable administrative overhead. If the primary computer fails or becomes corrupted, the backup computer is available as a standby. You may have to reconfigure your LDAP servers to access the backup copy of the Membership database.

Note   Creating full backups and running DBCC has a significant impact on the performance of the computer running SQL Server. It is best to schedule your backups and DBCC utility during periods of low activity.

You can get a greater degree of protection from data corruption by combining backup with the application of transaction logs. When you run DBCC against the source, it determines whether data corruption has occurred. In the event DBCC detects corruption in a transaction cycle and that corruption is irreparable, you do not apply the transactions that occurred over the corruption cycle and you promote the backup server to the primary server. Running DBCC on a regular basis decreases your chances of data corruption in the first place. Applying transaction updates frequently further enhances your protection.
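The decision rule described in the previous paragraph can be written out explicitly. This is illustrative Python only; the function name, flags, and the notion of a list of pending log dumps are hypothetical placeholders for your actual DBCC results and transaction-log backups:

```python
def plan_recovery(dbcc_clean, repairable, pending_logs):
    """Decide what to do after a DBCC check of the primary database.

    dbcc_clean:   True if DBCC reported no corruption
    repairable:   True if any corruption found can be repaired
    pending_logs: transaction-log dumps not yet applied to the backup
    """
    if dbcc_clean:
        # Normal cycle: keep the backup server current.
        return {"apply_logs": pending_logs, "promote_backup": False}
    if repairable:
        # Repair the primary first; hold the suspect logs for now.
        return {"apply_logs": [], "promote_backup": False}
    # Irreparable corruption: do not apply the transactions from the
    # corruption cycle, and promote the backup server to primary.
    return {"apply_logs": [], "promote_backup": True}
```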

The data-protection solution you choose will be based on your assessment of your risk, specifically taking into account the probability of corruption and the impact that data corruption would have on your operation. A well-thought-out and executed backup strategy is unquestionably the best single solution for failsafe operations. For a discussion of options in backup strategy, see Microsoft SQL Server 6.0 Administrator’s Companion, Appendix E.

The procedures in this section were written with the assumption that you have created a Windows NT, Enterprise Edition, backup server with SQL Server, Enterprise Edition, installed. In our example, this computer is named SQL_BACKUP. Create devices and databases equal in size to those of SQL_CLUSTER.

To create a full backup of the membership database

  1. Click Start, point to Programs, point to Microsoft SQL Server 6.5, and then click SQL Enterprise Manager.

  2. In the Server Manager, select your SQL cluster, in our example SQL_CLUSTER.

  3. From the Tools menu, choose Database Object Transfer.

  4. In the Database Object Transfer dialog box:
  5. When a message appears informing you that the transfer was completed successfully, click OK.

If your assessment is that frequent full backups do not provide adequate protection, use the combination of full backups and applied transaction logs. For more information, see Microsoft SQL Server 6.0 Administrator’s Companion, Appendix E.

LDAP Tier

In this section, you will install Microsoft® Windows® Load Balancing Service (WLBS), formerly known as Convoy, and then install the Personalization and Membership feature of Microsoft Site Server 3.0 along with Site Server Service Pack 1. You will then create a new membership instance and connect it to the existing Membership Directory. Finally, you will create a new local LDAP service and assign it a port number.

Software and Hardware Requirements

In this section, you will set up a WLBS cluster that consists of two nodes, each running Microsoft® Windows NT® Server with Service Pack 4 and equipped with two Ethernet cards.

The LDAP hardware configuration consists of:

Installing your LDAP Server with Windows Load Balancing Service

To install WLBS and your LDAP server, on each node

  1. Install WLBS.

  2. Install Microsoft Site Server 3.0 Personalization and Membership feature only.

  3. Install Site Server 3.0 Service Pack 1.

  4. Create a new membership instance.

  5. Connect to your existing Membership Directory, in our example memDS.

  6. Create a new local LDAP service and assign it port number 389.

  7. Verify the membership instance properties.

  8. In your Membership Directory Properties dialog box, change your server name and change the MDM port to 389.

    Note   To avoid confusion, it is best to enter the real system name (or DNS name) rather than the default LocalHost in the Host name box.

To install the WLBS software

  1. On your desktop, right-click Network Neighborhood, and then choose Properties from the shortcut menu. On the Adapters tab of the Network dialog box, click Add, and then click Have Disk. In the Insert Disk dialog box, type the path to the WLBS files, and then click OK.

  2. In the Select OEM Option dialog box, select WLBS Cluster Software, and then click OK. Click OK again when the message box appears.

  3. In the WLBS License Agreement dialog box, click Agree.

    When installation finishes, the WLBS Setup dialog box will automatically start.

  4. In the WLBS Cluster Setup dialog box, fill in the following information under Cluster parameters:

    In this box           Type
    Primary IP address    The IP address for your cluster (in this example, 192.168.107.213)
    Subnet mask           255.255.255.0
    Full Internet name    sample.microsoft.com

    Note   Add the cluster domain name into your domain name service (DNS).

  5. Under Cluster parameters, select the Remote control option.

  6. Under Host parameters, select a different priority ID for each node in the cluster, and then fill in the following information:

    In this box             Type
    Dedicated IP address    The IP address of your primary Ethernet card
    Subnet mask             255.255.255.0

  7. Under Port rules, type the numbers of the ports that you wish to cluster (in our example, type 389 in both Port range boxes). For Protocols, select Both; for Filter mode, select Multiple hosts; for Affinity, select None; for Load percentage, select Equal. Click Add, and then click OK.

  8. On the Bindings tab of the Network dialog box, in the Show Bindings for box, select all protocols. Expand the WLBS Driver, and bind the WLBS Driver protocol to the WLBS Virtual NIC.

  9. To unbind the WLBS Driver protocol from the dedicated adapter, select the dedicated adapter (in this example, adapter number 1) under WLBS Driver, and then click Disable.

  10. Expand the TCP/IP protocol and WINS Client, and then bind the TCP/IP protocol and the WINS Client protocol to the WLBS Virtual NIC.

  11. Bind the TCP/IP protocol and WINS Client to the dedicated adapter (in this example, adapter number 1).

  12. To unbind the TCP/IP protocol and the WINS Client protocol from the cluster adapter, select the cluster adapter under TCP/IP (in this example, adapter number 2), and then click Disable. Select the cluster adapter under WINS Client (in this example, adapter number 2), and then click Disable again.

  13. Under the TCP/IP protocol, make sure the WLBS Virtual NIC adapter appears below the dedicated adapter (in this example, adapter number 1) in the list of adapters. Use the Move Up and Move Down buttons to change the order of the adapters. Make sure you have the same order under WINS Client. Click Close.

  14. In the Microsoft TCP/IP Properties dialog box, on the IP Address tab, select WLBS Virtual NIC from the drop-down menu. Specify the IP address, subnet mask, and default gateway for your cluster (in this example, same as before). Click OK.

  15. When prompted, restart your computer.
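The port rule in step 7 (Multiple hosts filtering, Affinity set to None, equal load) causes WLBS to distribute each incoming connection across all nodes by hashing characteristics of the client connection, with no per-connection state exchanged between nodes. The Python sketch below is not the actual WLBS hash, which is not reproduced here; it only illustrates, under that assumption, how a deterministic hash over the client address and port can spread connections evenly across two nodes.

```python
import hashlib

def pick_host(src_ip, src_port, num_hosts):
    """Map a client connection to one cluster host by hashing its
    source address and port (illustrative only, not the real WLBS
    hash). Every node computes the same answer independently."""
    key = f"{src_ip}:{src_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_hosts

# With Affinity set to None, each (IP, port) pair may land on a
# different host; with equal load, traffic spreads roughly evenly.
counts = [0, 0]
for port in range(1024, 3024):
    counts[pick_host("192.168.107.50", port, 2)] += 1
print(counts)  # roughly even split across the two nodes
```

Because the mapping is deterministic, no node needs to ask any other node which host owns a connection, which is what lets a surviving node take over traffic immediately when a node is shut down.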

To test your failover at the IP level

  1. From another computer, ping the WLBS cluster virtual IP address (in this example, use the command ping -t 192.168.107.213).

  2. Shut down one of the cluster nodes.

    The IP address for your cluster will remain active.

  3. Restart the node you shut down, shut down the other node, and then check the IP address again. The IP address for your cluster should still remain active.
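The test above can also be scripted to measure how long the virtual IP address actually stays unreachable while a node is shut down. The following is a minimal sketch; probing TCP port 389 rather than using ICMP, and sampling once per second, are assumptions made for illustration.

```python
import socket

def probe(host, port=389, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds
    (a scriptable stand-in for ping -t; the port is an assumption)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def longest_outage(samples):
    """Given a list of per-second probe results, return the longest
    run of consecutive failures (worst-case seconds of downtime)."""
    worst = run = 0
    for ok in samples:
        run = 0 if ok else run + 1
        worst = max(worst, run)
    return worst

# In practice, collect samples once per second while shutting a node
# down, e.g. samples.append(probe("192.168.107.213")).
print(longest_outage([True, True, False, False, False, True]))  # 3
```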

To test your failover at the LDAP level

  1. Configure a Site Server Membership Server instance with the IP address of your cluster (in this example, 192.168.107.213).

    Note   Refer to your Site Server 3.0 documentation for directions on installing your LDAP server.

  2. Shut down one of the cluster nodes.

  3. Use the LDAP Service to authenticate a user to a Site Server Web service. Success at this step indicates correct failover.

  4. Restart the node you shut down, shut down the other node, and then authenticate a different user to a Site Server service.

    Note   Authenticate a different user in case the information for the first user has been cached.
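LDAP-level failover can also be checked with a small script rather than a full Web-service sign-in. The sketch below hand-builds an LDAPv3 anonymous simple bind request per the BER encoding defined in RFC 2251; sending these bytes to port 389 on the cluster address and waiting for a bind response is one way to confirm that the LDAP Service is answering. The message ID of 1 is arbitrary.

```python
def ldap_anonymous_bind_request(message_id=1):
    """Build the BER encoding of an LDAPv3 anonymous simple bind
    request (RFC 2251). All lengths here fit in a single byte, so
    short-form BER length encoding is sufficient."""
    version = b"\x02\x01\x03"   # INTEGER 3 (LDAPv3)
    name = b"\x04\x00"          # empty DN (OCTET STRING)
    auth = b"\x80\x00"          # simple authentication, empty password
    bind = bytes([0x60, len(version + name + auth)]) + version + name + auth
    mid = bytes([0x02, 0x01, message_id])   # messageID INTEGER
    body = mid + bind
    return bytes([0x30, len(body)]) + body  # outer LDAPMessage SEQUENCE

pkt = ldap_anonymous_bind_request()
print(pkt.hex())  # 300c020101600702010304008000
```

A successful server reply begins with the same outer SEQUENCE tag (0x30) and carries a BindResponse; no reply at all within a timeout indicates the clustered LDAP port is not being serviced.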

Application Tier

In this section, you will install and configure servers for mail, chat, news, and Web services.

Web Servers

To set up your Web servers in a high availability format, configure a WLBS cluster.

Note   Each node in the WLBS cluster must contain exactly the same content as every other node in the cluster. You can use Site Server’s Content Deployment to replicate content across your cluster. For information about Content Deployment, see the Microsoft Site Server Publishing section in the Site Server 3.0 documentation.

To configure the WLBS cluster

To test your Web services

Appendix A - Tested Hardware

Microsoft® Cluster Server requires hardware that is identified in the Microsoft® Windows® hardware compatibility list at http://www.microsoft.com/ntserver/info/hwcompatibility.htm. Configurations tested during creation of this guide include a Digital and a Compaq solution. For information about software requirements, see the Software Requirements section.

Digital Equipment

Reference        Equipment Description                 Quantity
FR-R5C5W-AX      Digital Server 7100R;1P x 200         2
FR-PC93U-AD      Digital Svr 7100 Powergrade;2         2
FR-PCSMA-AG      MEMORY;(4x32MB)128MB EDO DIMM         4
FR-PCTAR-GA      PCI 1-CHANNEL ULTRA RAID CONT         2
FR-PCTAR-UB      BATTERY BACKUP; PCI RAID              2
FR-CECBA-CA      HDD;4GB;7200RPM;WIDE SCSI SBB         4
FR-DE500-BA      10/100 UTP ADAPTER W/DECCHIP          6
BN35S-4E         POWERCORD; C13 TO C14; 4.5M           6
FR-PCSPS-AC      400-Watt; Hot-Swap Power supp         4
DS-SWXRA-HA      Dual Controller RA7000                1
DS-RZ1DB-VW      9GB 7200RPM UltraSCSI HD              16
KZPBA-CB         FWD-20 SCSI controller                2
BN21W-0B         68 Way HD Y cable                     2
BN38E-0B         68 Way HD to VHDC                     1
H879-AA          SCSI Term                             2
QB-53V9A-SA      Windows NT Cluster LIC + CROM + DOC   1
BN37A-05         VHDC - VHDC SCSI CABLE                1
DS-BA35X-HE      AC INPUT BOX BA370                    1
DS-BA35X-HH      180 WATT 100-240VOLT AC POWER         3
QB-5SBAE-SA      HSZ70 SOLN SW NTI LIC/MCD/DOC         2
FR-PCA6K-AE      DIGITAL SERVER 7100 COUNTRY K         2

Compaq Equipment

Reference            Equipment Description              Quantity
                     Compaq ProLiant 6500               2
233100-001/291       Fibre Channel Array – Rack Mount   1
233180-B21/291       Fibre Channel Host Controller /P   2
234453-001/B31/291   Fibre Channel Storage Hub 7        1
233187-001           Fibre Channel Array Controller     1

For more information about Compaq hardware, see the white paper, Order and Configuration Guide for Compaq ProLiant Cluster Series F Model 100, at http://www.compaq.com/support/techpubs/whitepapers/ecg1100998.html.

Information in this document, including URL and other Internet web site references, is subject to change without notice.  The entire risk of the use or the results of the use of this resource kit remains with the user.  This resource kit is not supported and is provided as is without warranty of any kind, either express or implied.  The example companies, organizations, products, people and events depicted herein are fictitious.  No association with any real company, organization, product, person or event is intended or should be inferred.  Complying with all applicable copyright laws is the responsibility of the user.  Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document.  Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 1999-2000 Microsoft Corporation.  All rights reserved.

Microsoft, Windows and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the U.S.A. and/or other countries/regions.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.