Network Load Balancing
Although you can use Network Load Balancing to provide failover support for applications, managing the application as a resource in a server cluster is the preferred solution. If you nevertheless choose to provide failover support with Network Load Balancing, this section describes how to do so.
In this scenario, start the application on every host to which the cluster traffic can fail over.
In all scenarios, Network Load Balancing does not restart the application on failover. It assumes that an instance of the application is running on each host in the cluster.
For Network Load Balancing to provide single-server failover support for a specific application, the files that the application uses must be simultaneously accessible to all hosts that run the application. These files normally reside on a back-end file server. Some applications require that these files be held open exclusively by a single instance of the application; in a Network Load Balancing cluster, you cannot have two instances of a single file open for writing. These failover issues are addressed by server clusters, which run the Cluster service.
Other applications open files only on client request. For these applications, providing single-server failover support in a Network Load Balancing cluster works well. Again, the files must be visible to all cluster hosts. You can accomplish this by placing the files on a back-end file server or by replicating them across the Network Load Balancing cluster.
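Because every host must see the same files, it can help to confirm that the shared location is reachable from each host before that host is expected to serve traffic. The following is a minimal Python sketch of such a check, run separately on each host; the UNC paths are hypothetical placeholders, and the check is independent of Network Load Balancing itself.

```python
# Minimal sketch: verify that the application's files on a back-end file
# server are readable from this host. The share and file names below are
# hypothetical placeholders; substitute your own paths.
import os
import sys

SHARED_FILES = [
    r"\\fileserver\appdata\config.dat",
    r"\\fileserver\appdata\content\index.html",
]

def unreachable_files(paths):
    """Return the subset of paths that this host cannot read."""
    return [p for p in paths
            if not os.path.isfile(p) or not os.access(p, os.R_OK)]

if __name__ == "__main__":
    missing = unreachable_files(SHARED_FILES)
    if missing:
        print("Not all shared application files are accessible from this host:")
        for path in missing:
            print("  " + path)
        sys.exit(1)
    print("All shared application files are accessible from this host.")
```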
There are two alternatives for configuring the port rules for single-server failover support:
- Use the default host priorities. All the traffic goes to the host with the highest priority (the Host Priority ID with the lowest value). If that host fails, all the traffic switches to the host with the next-highest priority.
- Use Single Host filtering mode with handling priorities. This option overrides the Host Priority IDs with a handling priority for each application's port range. With this configuration, you can run two single-server applications on separate hosts and have them fail over in opposite directions.
For example, if applications Red and Blue are assigned Handling Priority IDs as indicated in Table 19.1, the applications will run on different hosts and fail over to different secondary hosts.
Table 19.1 Hypothetical Assignment of Handling Priority IDs
| Host | Application Red's Port Rule | Application Blue's Port Rule |
| --- | --- | --- |
| Host A | Handling Priority 1 | Handling Priority 2 |
| Host B | Handling Priority 2 | Handling Priority 1 |
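To make the assignments in Table 19.1 concrete, the following Python sketch models the selection rule only (it does not interact with Network Load Balancing): for each application's port rule, the host with the lowest handling priority value among the hosts still in the cluster handles all of that rule's traffic.

```python
# Model (not NLB code) of Single Host filtering with handling priorities,
# using the assignments from Table 19.1. The live host with the lowest
# handling priority value handles all traffic for that port rule.

HANDLING_PRIORITIES = {
    "Red":  {"Host A": 1, "Host B": 2},   # Application Red's port rule
    "Blue": {"Host A": 2, "Host B": 1},   # Application Blue's port rule
}

def handling_host(app, live_hosts):
    """Return the host that handles traffic for the given application's port rule."""
    candidates = {h: p for h, p in HANDLING_PRIORITIES[app].items() if h in live_hosts}
    if not candidates:
        return None  # no surviving host covers this port rule
    return min(candidates, key=candidates.get)

# Normal operation: the two applications run on different hosts.
assert handling_host("Red",  {"Host A", "Host B"}) == "Host A"
assert handling_host("Blue", {"Host A", "Host B"}) == "Host B"

# Each application fails over in the opposite direction.
assert handling_host("Red",  {"Host B"}) == "Host B"   # Host A fails
assert handling_host("Blue", {"Host A"}) == "Host A"   # Host B fails
print("Table 19.1 failover behavior reproduced.")
```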
Filtering Mode: Single host.
Affinity: Not available when filtering mode is single host.
Load Weight/Equal load distribution: Not available when filtering mode is single host.
Handling Priority: See the application issues discussion for this scenario.
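As a summary of these settings, the following Python sketch represents the port rule as a plain data structure and checks the constraints listed above. The class, field names, and the example rule for application Red are illustrative assumptions, not an actual Network Load Balancing interface.

```python
# Sketch of the port rule settings for this scenario as a data structure,
# with a check reflecting Single Host filtering: affinity and load weight
# do not apply, and each host needs a handling priority.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class PortRule:
    start_port: int
    end_port: int
    filtering_mode: str                                      # "single_host" here
    handling_priority: Dict[str, int] = field(default_factory=dict)  # host -> priority
    affinity: Optional[str] = None                           # not available in single-host mode
    load_weight: Optional[int] = None                        # not available in single-host mode

    def validate(self):
        if self.filtering_mode == "single_host":
            if self.affinity is not None or self.load_weight is not None:
                raise ValueError("Affinity and load weight are not available "
                                 "when the filtering mode is single host.")
            if not self.handling_priority:
                raise ValueError("Single-host filtering requires a handling "
                                 "priority for each host.")

# Hypothetical port rule for application Red (port 80), per Table 19.1.
red_rule = PortRule(start_port=80, end_port=80, filtering_mode="single_host",
                    handling_priority={"Host A": 1, "Host B": 2})
red_rule.validate()
print("Port rule is consistent with single-host filtering.")
```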