Network Load Balancing
Briefly, when Network Load Balancing is installed as a network driver on each of the cluster hosts, the cluster presents a virtual IP address to client requests. Client requests go to all the hosts in the cluster, but only the host to which a given request is mapped accepts and handles it; all the other hosts drop the request. The statistical mapping algorithm, which runs on every cluster host, maps each client request to a particular host for processing, depending on the configuration of port rules and affinity.
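The exact statistical mapping algorithm is internal to Network Load Balancing, but the underlying idea, that every host independently computes the same deterministic mapping and only the owning host accepts the packet, can be sketched as follows. The hash function, the owning_host helper, and the affinity handling shown here are illustrative assumptions, not NLB's implementation.

```python
# Illustrative sketch only -- not NLB's actual mapping algorithm. It shows the
# general idea: every host runs the same deterministic mapping over the packet
# headers and accepts the request only if it is the "owning" host.
import hashlib

def owning_host(client_ip: str, client_port: int, active_hosts: list[int],
                single_affinity: bool = True) -> int:
    """Return the host ID that should accept this client request."""
    # With "single" affinity only the client IP is hashed, so every connection
    # from that client maps to the same host; otherwise the port is included.
    key = client_ip if single_affinity else f"{client_ip}:{client_port}"
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return active_hosts[digest % len(active_hosts)]

# Every host evaluates the same function on the incoming packet and drops the
# request unless the result is its own host ID.
hosts = [1, 2, 3, 4]
print(owning_host("192.168.1.50", 3451, hosts))   # same answer on every host
```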
The hosts exchange heartbeat messages to maintain consistent data about the cluster's membership. If a host fails to send or does not respond to heartbeat messages, the remaining hosts perform convergence, a process in which they determine which hosts are still active members of the cluster. If a new host attempts to join the cluster, it sends heartbeat messages that trigger convergence. After all cluster hosts agree on the current cluster membership, the client load is repartitioned, and convergence completes.
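A minimal sketch of the failure-detection idea behind convergence follows; the heartbeat interval, the missed-interval limit, and the converge helper are hypothetical values chosen for illustration, not NLB's actual parameters or protocol.

```python
# Hypothetical sketch: if a host misses several consecutive heartbeat
# intervals, the remaining hosts recompute the cluster membership and the
# client load is repartitioned among them. Timeouts here are assumed values.
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeat broadcasts (assumed)
MISSED_LIMIT = 5           # missed intervals before a host is dropped (assumed)

last_seen = {}             # host ID -> time the last heartbeat was received

def on_heartbeat(host_id: int) -> None:
    last_seen[host_id] = time.monotonic()

def converge(now: float) -> list[int]:
    """Return the agreed set of active hosts after removing silent ones."""
    deadline = MISSED_LIMIT * HEARTBEAT_INTERVAL
    return sorted(h for h, t in last_seen.items() if now - t <= deadline)

on_heartbeat(1); on_heartbeat(2); on_heartbeat(3)
print(converge(time.monotonic()))   # -> [1, 2, 3] while all three keep reporting
```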
Discussion of Network Load Balancing clusters requires clarification of two kinds of client state: application data state and session state.
For application data state, means must be provided to synchronize updates to data that needs to be shared across servers. One such means is a back-end database server shared by all instances of the application; for example, an Active Server Pages (ASP) page hosted on an IIS server can access a shared back-end database server, such as SQL Server.
Session state, by contrast, is client data that persists for the duration of a session and can span multiple TCP connections. Client/server applications that embed session state within "cookies" or push it to a back-end database do not need client affinity to be maintained.
An example of an application that requires maintaining session state is an e-commerce application that keeps a shopping cart in server memory across a client's connections; such an application relies on client affinity so that successive connections from the same client are directed to the same cluster host.
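To illustrate why cookie-embedded session state removes the need for client affinity, the following is a small sketch in which session data travels in a signed cookie that any cluster host can verify and decode. The to_cookie/from_cookie helpers and the shared signing key are assumptions made for illustration; they are not part of IIS or Network Load Balancing.

```python
# Illustrative sketch (not an NLB or IIS API): when session state travels in a
# signed cookie, any cluster host can reconstruct it, so requests from the same
# client do not have to be routed to the same host (no client affinity needed).
import base64, hashlib, hmac, json

SECRET = b"shared-by-all-cluster-hosts"   # assumed signing key known to every host

def to_cookie(state: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(state).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def from_cookie(cookie: str) -> dict:
    body, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered cookie")
    return json.loads(base64.urlsafe_b64decode(body))

cookie = to_cookie({"cart": ["item-42"]})  # set by whichever host served the first request
print(from_cookie(cookie))                 # any other host can read it on the next request
```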
By setting port rules, cluster parameters, and host parameters, you gain great flexibility in configuring the cluster according to the hosts' capacities and the sources of client requests. For example, you can assign each host a percentage of the client load to handle for a given port rule.
Network Load Balancing normalizes each load percentage against the sum of the load percentages assigned to all active hosts. In other words, if one host fails, the remaining hosts take on a larger share of the client requests, in proportion to their original load percentages. For example, assume each host in a four-host cluster is assigned 25 percent of the load. If one of those hosts fails, the three remaining active hosts would each handle approximately 33 percent of the load.
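The normalization amounts to a simple calculation: each active host's effective share is its assigned percentage divided by the sum of the percentages assigned to all active hosts. A brief sketch of that arithmetic follows; the normalized_shares helper is illustrative only.

```python
# Sketch of the normalization described above: each host's effective share is
# its assigned load weight divided by the sum of weights for the active hosts.
def normalized_shares(assigned: dict[str, float], active: set[str]) -> dict[str, float]:
    total = sum(assigned[h] for h in active)
    return {h: 100 * assigned[h] / total for h in active}

weights = {"host1": 25, "host2": 25, "host3": 25, "host4": 25}
print(normalized_shares(weights, {"host1", "host2", "host3"}))
# -> each remaining host handles about 33.3 percent of the client load
```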
You can combine the preceding capabilities by setting cluster and host parameters and creating port rules for your particular scenario. For guidelines on setting parameters and port rules for various scenarios, see "Scenarios" later in this chapter.
Before specific scenarios are discussed, the following sections explore the basic concepts of Network Load Balancing:
This section includes caveats and recommendations.
This section covers basic concepts, such as the parameters and port rules, heartbeats and convergence, how Network Load Balancing maps client requests to hosts, and maintaining client connections.
This section discusses the cluster and host parameters and the port rules in more depth.