Chapter 1: Preinstallation

MSCS Hardware Compatibility List (HCL)

Figure 1. Installation

Figure 1 shows the point in the installation process that stresses the importance of using certified hardware for clusters. MSCS uses industry-standard hardware, which allows hardware to be easily added or replaced as needed. Supported configurations use only hardware validated with the MSCS Cluster Hardware Compatibility Test (HCT). These tests go well beyond the standard compatibility testing for Microsoft Windows NT and are quite rigorous. Microsoft supports MSCS only when it is used on a validated cluster configuration, and validation is available only for complete configurations as tested together. The MSCS HCL is available on the Microsoft Web site at: www.microsoft.com/isapi/hwtest/hcl.idc.

Configuring the Hardware

The MSCS installation process relies heavily on properly configured hardware. Therefore, it is important that you configure and test each device before you run the MSCS installation program. A typical cluster configuration consists of two servers, each with two network adapters and local storage, plus one or more shared SCSI buses with one or more disks. While it is possible to configure a cluster using only one network adapter in each server, you are strongly encouraged to add a second, isolated network for cluster communications; for clusters to be certified, they must have at least one such isolated network. The cluster may also be configured to fall back to the primary, nonisolated network for cluster communications if the isolated network fails. The cluster nodes must communicate with each other on a time-critical basis, and this node-to-node communication is sometimes referred to as the heartbeat. Because heartbeat packets must be sent and received in a timely manner, only PCI-based network adapters should be used; the PCI bus has the highest priority among the common bus architectures. A toy sketch of the heartbeat concept follows.
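To make the heartbeat concept concrete, here is a toy sketch of a UDP-based heartbeat loop. It is not the MSCS implementation; the peer address, port, interval, and timeout are hypothetical values chosen for illustration, and a real cluster service layers identity checks and retry logic on top of this idea.

    import socket
    import time

    PEER = ("192.168.0.2", 5000)   # hypothetical peer address on the private network
    INTERVAL = 0.5                 # hypothetical interval between heartbeats, seconds
    TIMEOUT = 1.2                  # hypothetical threshold for declaring the peer dead

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PEER[1]))       # receive on the same port the peer sends to
    sock.settimeout(TIMEOUT)

    while True:
        sock.sendto(b"heartbeat", PEER)    # tell the peer this node is alive
        try:
            sock.recvfrom(64)              # wait for the peer's heartbeat
        except socket.timeout:
            print("peer missed a heartbeat; begin failover")
            break
        time.sleep(INTERVAL)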

Figure 2. Shared SCSI Bus

The shared SCSI bus consists of a compatible PCI SCSI adapter in each server, with both systems connected to the same SCSI bus. One SCSI host adapter uses the default ID 7, and the other uses ID 6; this ensures that the host adapters have the highest priority on the SCSI bus. The bus is referred to as the shared SCSI bus because both systems share it, with exclusive access to each disk device arbitrated between them. MSCS controls exclusive access to a device through the reserve and release commands in the SCSI specification; a brief illustration of these two commands follows the links below. For more information on SCSI specifications in high-availability environments, consult the following links:

www.symbios.com/x3t10/drafts.htm

www.symbios.com/x3t10/io/t10/drafts/hap/hap-r05.pdf
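For reference, reserve and release are ordinary six-byte command descriptor blocks (CDBs) in the SCSI-2 specification, with operation codes 16h and 17h. The sketch below shows only the byte layout of the two commands; actually issuing them requires a SCSI pass-through interface, which is omitted here:

    # Six-byte CDBs for SCSI-2 RESERVE (operation code 16h) and RELEASE (17h).
    # MSCS sends RESERVE to claim exclusive use of a shared disk and RELEASE
    # to relinquish it; the remaining bytes are zero in the simple case.
    RESERVE_6 = bytes([0x16, 0x00, 0x00, 0x00, 0x00, 0x00])
    RELEASE_6 = bytes([0x17, 0x00, 0x00, 0x00, 0x00, 0x00])

    print("RESERVE CDB:", RESERVE_6.hex())
    print("RELEASE CDB:", RELEASE_6.hex())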

Other storage subsystems may be available from system vendors as an alternative to SCSI, which, in some cases, may offer additional speed or flexibility. Some of these storage types may require installation procedures other than those specified in the Microsoft Cluster Server Administrator's Guide. These storage types may also require special drivers or resource DLLs as provided by the manufacturer. If the manufacturer provides installation procedures for Microsoft Cluster Server, use those procedures instead of the generic installation directions provided in the Administrator's Guide.

Installing the Operating System

Before you install Microsoft Windows NT Server, Enterprise Edition, you must decide what role each computer will have in the domain. As the Administrator's Guide indicates, you may install MSCS as a member server or as a domain controller. The following information focuses on performance issues with each configuration:

The member server role for each cluster node is a viable solution, but it has a few drawbacks. While member servers do not incur the overhead of performing authentication for other systems in the domain, this configuration is vulnerable to loss of communication with domain controllers on the network. Node-to-node communications and various registry operations within the cluster require authentication from the domain, and the need for authentication can arise at any time during normal operations. Member servers rely on domain controllers elsewhere on the network for this authentication. Lack of connectivity with a domain controller may severely affect performance, and may also cause one or more cluster nodes to stop responding until the connection with a domain controller has been re-established. In a worst-case scenario, loss of network connectivity with domain controllers may cause complete failure of the cluster.

The primary domain controller to backup domain controller (PDC to BDC) configuration is a better alternative than the member server option, because it removes the need for the cluster node to be authenticated by an external source. If an activity requires authentication, either of the nodes can supply it. Thus, authentication is not a failure point as it is in the member server configuration. However, primary domain controllers may require special configuration in a multihomed environment. Additionally, the domain overhead may not be well distributed in this model because one node may have more domain activity than the other one.

The BDC to BDC configuration is the most favorable configuration, because it provides authentication, regardless of public network status, and the overhead associated with domain activities is balanced between the nodes. Additionally, BDCs are easier to configure in a multihomed environment.

Configuring Network Adapters

In a typical MSCS installation, each server in the cluster, referred to as a node, has at least two network adapters: one configured as the public network for client connections, the other for private communications between cluster nodes. This second interface is called the cluster interconnect. If the cluster interconnect fails, MSCS (if so configured) automatically attempts to use the public network for communication between cluster nodes. In many two-node installations, the private network uses a crossover cable or an isolated segment. It is important to restrict traffic on this interface to cluster communications only. Additionally, each server should use PCI network adapters. ISA, PCMCIA, and other bus architectures may be serviced less promptly than the faster PCI devices in the system, and the delays these adapters introduce may cause premature failover of cluster resources. Complete systems are unlikely to include these types of adapters; keep this in mind if you decide to add adapters to the configuration.

Follow standard Windows NT guidelines for network adapter configuration. In particular, each network adapter must have an IP address that is on a different network or subnet. Do not use the same IP address for both network adapters, even though they are connected to two distinctly different physical networks; each adapter must have a different address, and the two addresses cannot be on the same network. Consider the table of addresses in Table 1; a short sketch that checks these combinations follows the table.

Table 1. Table of Addresses

Adapter 1 (Public Network)   Adapter 2 (Private Network)   Valid Combination?
192.168.0.1                  192.168.0.1                   NO
192.168.0.1                  192.168.0.2                   NO
192.168.0.1                  192.168.1.1                   YES
192.168.0.1                  10.0.0.1                      YES
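The rule behind Table 1 is easy to verify with a short sketch that uses Python's standard ipaddress module. The classful masks (/24 for the 192.168.x.x addresses and /8 for 10.0.0.1) are an assumption here, because the table does not list subnet masks:

    from ipaddress import ip_interface

    # Each pair is (public adapter, private adapter) with an assumed classful mask.
    pairs = [
        ("192.168.0.1/24", "192.168.0.1/24"),
        ("192.168.0.1/24", "192.168.0.2/24"),
        ("192.168.0.1/24", "192.168.1.1/24"),
        ("192.168.0.1/24", "10.0.0.1/8"),
    ]

    for public, private in pairs:
        a, b = ip_interface(public), ip_interface(private)
        # Valid only if the addresses differ AND they fall on different networks.
        valid = a.ip != b.ip and a.network != b.network
        print(f"{str(a.ip):<12} {str(b.ip):<12} {'YES' if valid else 'NO'}")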

In fact, because of the isolation of the private network, you can use almost any matching pair of IP addresses you like for this network. If you want to, you can use addresses that the Internet Assigned Numbers Authority (IANA) designates for private use. The private-use address ranges are noted in Table 2.

Table 2. Private Use Address Ranges

Address Class   Starting Address   Ending Address
Class A         10.0.0.0           10.255.255.255
Class B         172.16.0.0         172.31.255.255
Class C         192.168.0.0        192.168.255.255

The first and last addresses in each range are designated as the network and broadcast addresses. For example, in the reserved Class C range, the actual range of host addresses is 192.168.0.1 through 192.168.255.254. Use 192.168.0.1 and 192.168.0.2 to keep it simple, because you'll have only two adapters on this isolated network. Do not declare default gateway or WINS server addresses for this network. You may need to consult your network administrator about these addresses, in case they are already in use within your enterprise.
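The boundary addresses above can be confirmed with the same ipaddress module. Treating the reserved Class C block as the single range 192.168.0.0/16 is an assumption made here purely for the arithmetic:

    from ipaddress import ip_network

    block = ip_network("192.168.0.0/16")
    print("network address:  ", block.network_address)        # 192.168.0.0
    print("broadcast address:", block.broadcast_address)      # 192.168.255.255
    print("first usable host:", block.network_address + 1)    # 192.168.0.1
    print("last usable host: ", block.broadcast_address - 1)  # 192.168.255.254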

When you've obtained the proper addresses for network adapters in each system, use the Network utility in Control Panel to set these options. Use the PING utility from the command prompt to check each network adapter for connectivity with the loopback address (127.0.0.1), the card's own IP address, and the IP address of another system. Before you attempt to install MSCS, make sure that each adapter works properly and can communicate properly on each network. You will find more information on network adapter configuration in the Windows NT online documentation, the Windows NT Server 4.0 Resource Kit, or in the Microsoft Knowledge Base.
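If you have several adapters to verify, the manual checks can be scripted. This minimal sketch shells out to the same PING utility; the -n flag and the exit-code behavior are those of the Windows version of ping, and the addresses are the example private-network addresses used earlier:

    import subprocess

    def reachable(address: str) -> bool:
        # Windows ping: -n 2 sends two echo requests; exit code 0 means replies arrived.
        result = subprocess.run(["ping", "-n", "2", address], capture_output=True)
        return result.returncode == 0

    # Check the loopback address, the adapter's own address, and the peer node.
    for addr in ("127.0.0.1", "192.168.0.1", "192.168.0.2"):
        print(addr, "OK" if reachable(addr) else "FAILED")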

Table 3 shows related Microsoft Knowledge Base articles regarding network adapter configuration, TCP/IP configuration, and related troubleshooting:

Table 3. Microsoft Knowledge Base Articles

Reference Number   Article
Q164015            Understanding TCP/IP Addressing and Subnetting Basics
Q102908            How to Troubleshoot TCP/IP Connectivity with Windows NT
Q151280            TCP/IP Does Not Function After Adding a Second Adapter
Q174812            Effects of Using Autodetect Setting on Cluster NIC
Q175767            Expected Behavior of Multiple Adapters on Same Network
Q170771            Cluster May Fail If IP Address Used from DHCP Server
Q168567            Clustering Information on IP Address Failover
Q193890            Recommended WINS Configuration for MSCS
Q217199            Static WINS Entries Cause the Network Name to Go Offline
Q201616            Network Card Detection in Microsoft Cluster Server

Configuring the Shared SCSI Bus

In a normal single-server configuration, the server has a SCSI host adapter that connects directly to one or more SCSI devices, and each end of the SCSI bus has a bus terminator. The terminators stabilize the signals on the bus, help ensure reliable high-speed data transmission, and eliminate line noise.

Configuring host adapters

The shared SCSI bus, as used in a Microsoft cluster, differs from most common SCSI implementations in one way: the shared SCSI bus uses two SCSI host adapters. Each cluster node has a separate SCSI host adapter for shared access to this bus, in addition to the other disk controllers that the server uses for local storage and the operating system. Because the SCSI specification requires each device on the bus to have a different ID number, the ID for one of these host adapters must be changed. Typically, one host adapter uses the default ID of 7, while the other adapter uses ID 6.

Note   It is important to use ID 6 and 7 for the host adapters on the shared bus so that they have priority over other connected devices on the same channel. A cluster may have more than one shared SCSI bus as needed for additional shared storage.

SCSI cables

SCSI bus failures can be the result of poor-quality cables. Inexpensive cables may be attractive because of their low price, but they may not be worth the headaches associated with them. An easy way to compare a cheap cable with an expensive one is to hold one in each hand, about 10 inches from the connector, and observe the arc of each cable. The higher quality cable bends noticeably less, because it uses better shielding and may use a different gauge of wire. If you use the less expensive cables, you may spend more supporting them than it would have cost to buy the better cables in the first place. This shouldn't be much of a concern for complete systems purchased from a hardware vendor, because these certified systems likely have matched cable sets. In the event you ever need to replace one of these cables, consult your hardware vendor.

Some configurations may use standard SCSI cables, while others may use Y cables (or adapters). The Y cables are recommended for the shared SCSI bus. These cables allow bus termination at each end, independent of the host adapters. Some adapters do not continue to provide bus termination when turned off, and also cannot maintain bus termination if they are disconnected for maintenance. Y cables avoid these points of failure and help achieve high availability.

Even with high-quality cables, it is important to consider total cable length. Transfer rate, the number of connected SCSI devices, cable quality, and termination all influence the total allowable cable length for the SCSI bus. A standard SCSI bus running at the original 5 megabyte-per-second transfer rate may have a maximum total cable length of approximately six meters, and the maximum length decreases as the transfer rate increases. Most SCSI devices on the market today achieve much higher transfer rates and therefore demand a shorter total cable length. Some manufacturers of complete systems certified for MSCS use differential SCSI, which allows a maximum total cable length of 25 meters. Consider these implications when adding devices to an existing bus or certified system; in some cases, it may be necessary to install another shared SCSI bus.

SCSI termination

Microsoft recommends active termination at each end of the shared SCSI bus, because passive terminators may not reliably maintain adequate termination under certain conditions. A SCSI bus has exactly two ends, and each must be terminated. For best results, do not rely on the automatic termination provided by host adapters or newer SCSI devices, avoid duplicate termination, and never place termination in the middle of the bus.

Drives, partitions, and file systems

Whether you use individual SCSI disk drives on the shared bus, shared hardware RAID arrays, or a combination of both, each disk or logical drive on the shared bus must be partitioned and formatted before you install MSCS. The Microsoft Cluster Server Administrator's Guide covers the steps for this procedure. In most cases, a drive contains only one partition. Some RAID controllers can present an array either as multiple logical drives or as a single large drive. Rather than one large drive, you will probably prefer a few logical drives for your data: one drive or disk for each group of resources, with one drive designated as the quorum disk.

If you partition drives at the operating system level into multiple partitions, remember that all partitions on a shared disk move together from one node to another. Physical drives are owned exclusively by one node at a time, so all partitions on a shared disk are owned by one node at a time. If you transfer ownership of a drive to another node through MSCS, the partitions move in tandem and cannot be split between nodes. Any partitions on shared drives must be formatted with the NTFS file system and must not be members of any software-based fault-tolerant sets.

CD-ROM drives and tape drives

Do not connect CD-ROM drives, tape drives, or other devices that are not physical disk drives to the shared SCSI bus. MSCS version 1.0 supports only nonremovable physical disk drives that are listed on the MSCS HCL; the cluster disk driver may or may not recognize other device types. If you attach unsupported devices to the shared bus, those devices may appear usable to the Windows NT operating system. However, because of SCSI bus arbitration between the two systems and the use of SCSI resets, these devices may experience problems on the shared SCSI bus and may also cause problems for other devices on the bus. For best results, attach noncluster devices to a separate controller not used by the cluster.

Preinstallation Checklist

Before you install MSCS, there are several items to check to help ensure proper operation and configuration. After proper configuration and testing, most installations of MSCS should complete without error. The following checklist, drawn from the preceding sections, is fairly general and may not include all possible system options that you need to evaluate before installation:

- Verify that the complete cluster configuration appears on the MSCS HCL.
- Decide the domain role for each node, and verify connectivity with domain controllers.
- Confirm that each node has two PCI network adapters with IP addresses on different networks, and test each adapter with the PING utility.
- Confirm the SCSI host adapter IDs (7 and 6), cable quality and length, and active termination on each end of the shared SCSI bus.
- Partition each shared disk and format it with NTFS before you run the MSCS installation program.
- Make sure no CD-ROM drives, tape drives, or other unsupported devices are attached to the shared SCSI bus.

Installation on Systems Using Custom Disk Hardware

If your hardware uses controllers other than standard SCSI controllers and requires special drivers and custom resource types, use the software and installation instructions provided by the manufacturer. The standard MSCS installation procedures will fail on these systems, because they require additional device drivers and DLLs supplied by the manufacturer. These systems also require special cabling.