Windows Internet Name Service


Overview of the Replication Process

Replicating databases between WINS servers maintains a consistent set of WINS information throughout a network. An example of WINS database replication is shown in Figure 7.9. Two WINS servers, WINS-A and WINS-B, are both configured to fully replicate their records with each other.

Figure 7.9 WINS Replication Overview

In Figure 7.9, a WINS client, HOST-1 on Subnet 1, registers its name with its primary WINS server, WINS-A. Another WINS client, HOST-2 on Subnet 3, registers its name with its primary WINS server, WINS-B. If either of these hosts later attempts to locate the other host using WINS—for example, HOST-1 queries to find an IP address for HOST-2—replication of WINS registration information between the WINS servers makes it possible to resolve the query.



Note

WINS replication is always incremental, meaning that only changes in the database are replicated each time replication occurs, not the entire database.

For replication to work, each WINS server must be configured with at least one other WINS server as its replication partner. This ensures that a name registered with one WINS server is eventually replicated to all other WINS servers in the network. A replication partner can be added and configured as either a push partner, a pull partner, or a push/pull partner, which uses both methods of replication. The push/pull partner is the default configuration and is the type recommended for use in most cases.

Figure 7.10 Replication Partners Properties Dialog Box

When WINS servers replicate, a latency period exists before the name-to-address mapping of a client from any given server is propagated to all other WINS servers in the network. This latency is known as the convergence time for the entire WINS system. For example, a name release request by a client does not propagate as quickly as a name registration request. This is because names are commonly released and then reused with the same mapping, such as when computers are restarted or when they are turned off for the evening and restarted in the morning. Replicating each of these name releases would unnecessarily increase the network load of replication.

Also, when a WINS client computer is shut off improperly, such as during an unexpected power outage, the computer's registered names are not released normally with a request to the server. Therefore, the presence of a record in the WINS database does not necessarily mean that a client computer is still using the name or its associated IP address. It only means that a computer recently registered that name and its associated IP address.



Note

The primary and secondary WINS servers assigned to any client must have push and pull relationships with each other. You might want to keep a list of pairs of push/pull WINS servers for use when assigning servers to clients.

To replicate database entries, each WINS server in a network must be configured as either a pull partner or a push partner with at least one other WINS server.

WINS Server Push and Pull Partners

The WINS database is collectively managed by the WINS servers, each of which holds a copy of it. To keep these copies consistent, the servers replicate their records among themselves. Each WINS server is configured with a set of one or more replication partners. When a new computer is added to or substituted on the network, it registers its name and IP address with one of the WINS servers, which in turn propagates the new record to all other WINS servers in the enterprise. The result is that every server has the record pertaining to that new computer.
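
The following sketch is a simplified model written for this discussion, not the actual WINS implementation or its replication protocol; the class and method names (WinsServer, register, pull) are invented. It shows the idea behind pull replication: each server tracks the highest version ID it holds for each owning server and copies from its partners only the records that are newer than that.

    class WinsServer:
        """Toy model of a WINS server, for illustration only."""

        def __init__(self, name):
            self.name = name
            self.version_counter = 0
            self.records = {}      # client name -> (owner, version ID, address)
            self.partners = []     # replication partners this server pulls from

        def register(self, client, address):
            # A local registration is owned by this server and gets the next version ID.
            self.version_counter += 1
            self.records[client] = (self.name, self.version_counter, address)

        def highest_version(self, owner):
            ids = [vid for (own, vid, _) in self.records.values() if own == owner]
            return max(ids, default=0)

        def pull(self):
            # Ask each partner only for records newer than what is already held,
            # per owning server; replicas keep their original owner and version ID.
            for partner in self.partners:
                owners = {own for (own, _, _) in partner.records.values()}
                known = {own: self.highest_version(own) for own in owners}
                for client, (own, vid, address) in partner.records.items():
                    if vid > known[own]:
                        self.records[client] = (own, vid, address)

    # Two servers configured as each other's partners; WINS-B pulls WINS-A's new record.
    wins_a, wins_b = WinsServer("WINS-A"), WinsServer("WINS-B")
    wins_a.partners.append(wins_b)
    wins_b.partners.append(wins_a)
    wins_a.register("HOST-1", "192.168.22.101")
    wins_b.pull()    # WINS-B now holds HOST-1's record, still owned by WINS-A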

Detailed Replication Example

Figure 7.11 shows an extremely large WINS implementation, serving more than 100,000 nodes. In a configuration with so many WINS servers, it is tempting to create many push/pull relationships for redundancy. This can lead to a system that, while functional, is overly complex and difficult to understand and troubleshoot.

Figure 7.11 Large-Scale WINS Deployment Using Hub Topology

The hub structure imposes order on the sample configuration shown in Figure 7.11. Four major hubs are located in Seattle, San Francisco, Chicago, and Los Angeles. These hubs serve as secondary WINS servers for their regions while connecting the four geographic locations. All primary WINS servers are configured as push/pull partners with the hubs, and the hubs are configured as push/pull partners with other hubs.

For example, assume the primary WINS servers in Figure 7.11 replicate with the hubs every 15 minutes, and the hub-to-hub replication interval is 30 minutes. The convergence time of the WINS system is the time it takes for a node registration to be replicated to all WINS servers. In this case the longest time would be from a Seattle primary server to a Chicago primary server. The convergence time can be calculated by adding up the maximum time between replication from the Seattle primary to Seattle secondary, Seattle secondary to San Francisco secondary, San Francisco secondary to Chicago secondary, and finally Chicago secondary to Chicago primary. This yields a total convergence time of 15 + 30 + 30 + 15 minutes, or 1.5 hours.
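
The arithmetic behind that figure can be written out directly. The snippet below is illustrative only and simply sums the pull intervals assumed in this example along the longest replication path.

    # Longest path: Seattle primary -> Seattle hub -> San Francisco hub ->
    # Chicago hub -> Chicago primary (intervals assumed in this example).
    pull_intervals_minutes = [15, 30, 30, 15]
    worst_case_minutes = sum(pull_intervals_minutes)
    print(worst_case_minutes, "minutes =", worst_case_minutes / 60, "hours")   # 90 minutes = 1.5 hours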

However, the convergence time could be longer if some of these WINS servers are connected across slow links. It is probably not necessary for the servers in Paris or Berlin to replicate every 15 minutes; you might configure them to replicate every two hours, or even every 24 hours, depending on the volatility of names in the WINS system.

This example network contains some redundancy, but not much. If the link between Seattle and Los Angeles is down, replication still occurs through San Francisco, but what happens if the Seattle hub itself goes down? In this case, the Seattle area can no longer replicate with the rest of the WINS system. Network connectivity, however, is still functional—all WINS servers contain the entire WINS database, and name resolution functions normally. All that is lost are the changes to the WINS system that occurred since the Seattle hub went down. A Seattle user cannot resolve the name of a file server in Chicago that comes online after the Seattle hub goes down. Once the hub returns to service, all changes to the WINS database are replicated normally.

Small-Scale Replication Example

While the large-scale deployment shown in the four-hub diagram of Figure 7.11 is possible, it is also valuable to examine a much smaller example of replication. The simplest case involves just two servers, as shown in Figure 7.12.

Figure 7.12 Database Replication Between Two WINS Servers

Tables 7.8 and 7.9 are the database tables for WINS-A and WINS-B on January 1, 2000. All four clients are powered on in the morning between 8:00 A.M. and 8:15 A.M. Client2 has just been shut down. WINS-A and WINS-B have the following parameters (the values can be read back from the time stamps in the tables that follow):

Renewal interval: four days
Extinction interval: four days
Extinction timeout: one day
Verification interval: 24 days

Before replication, WINS-A has two entries in its database. These entries are for Client1 and Client2, as shown in Table 7.8.

Table 7.8 WINS-A Database Before Replication

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/5/00 8:05:32 AM
Client2   192.168.22.102  Unique, released, H-node, dynamic    WINS-A  4C2         1/5/00 8:23:43 AM

Before replication, WINS-B has the two entries shown in Table 7.9, one each for Client3 and Client4.

Table 7.9 WINS-B Database Before Replication

Name      Address         Flags                                Owner   Version ID  Time stamp
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/5/00 8:11:12 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/5/00 8:12:21 AM

Client1, Client3, and Client4 were time stamped with the sum of the current time and the renewal interval at the time they booted, and Client2 was time stamped with the sum of the current time and the extinction interval when it was released. The version IDs indicate the value of the registration counter at the time of registration. The registration counter is incremented by one (the IDs are shown in hexadecimal) each time the server generates a new version ID in the database, and each WINS server has its own registration counter. The version ID jumps from 4B3 for Client1 to 4C2 for Client2, which indicates that 14 registrations (or extinctions, or release-to-active transitions) took place between the registrations of Client1 and Client2.
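
As an illustration of this arithmetic (a sketch only, using Python's standard datetime module and the four-day renewal interval assumed in this example):

    from datetime import datetime, timedelta

    # Version IDs are hexadecimal; the gap between Client1 (4B3) and Client2 (4C2),
    # less Client2's own registration, gives the number of intervening registrations.
    intervening = int("4C2", 16) - int("4B3", 16) - 1
    print(intervening)                                # 14

    # A record's time stamp is the registration time plus the renewal interval
    # (four days in this example).
    renewal_interval = timedelta(days=4)
    client1_registered = datetime(2000, 1, 1, 8, 5, 32)
    print(client1_registered + renewal_interval)      # 2000-01-05 08:05:32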

Replication takes place at 8:30:45 by WINS-A's clock; WINS-B's clock reads 8:31:15 at that moment. Of course, the two replications do not actually occur in the same second, but the servers use these times to generate the time stamps. Note also that replication does not mean that both servers pull at the same time; each pulls according to its own schedule. After replication, WINS-A's database contains the entries shown in Table 7.10.

Table 7.10 WINS-A Database After Replication

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/5/00 8:05:32 AM
Client2   192.168.22.102  Unique, released, H-node, dynamic    WINS-A  4C2         1/5/00 8:23:43 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/25/00 8:30:45 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/25/00 8:30:45 AM

After replication, WINS-B's database contains the entries shown in Table 7.11.

Table 7.11 WINS-B Database After Replication

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/25/00 8:31:15 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/5/00 8:11:12 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/5/00 8:12:21 AM

Client1 has been replicated to WINS-B, and Client3 and Client4 have been replicated to WINS-A. The replicas have all kept their original owner and version ID and have been time stamped with the sum of the current time and the verification interval. Client2 has not been replicated, because it is in the released state. This is a little unusual (but possible) because Client2 shut down before its first replication. If Client2 had not been shut down until after the replication, WINS-B would have a replica of Client2 in the active state. This replica would remain in the active state even after Client2 released, because the change in state would not be replicated.
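
A minimal sketch of the replica handling just described (illustration only; the field names are invented, and the 24-day verification interval is the value used in this example): the replica keeps its owner and version ID, is stamped with the pulling server's current time plus the verification interval, and released records are skipped.

    from datetime import datetime, timedelta

    VERIFICATION_INTERVAL = timedelta(days=24)   # value used in this example

    def make_replica(record, pull_time):
        if record["state"] == "released":
            return None                          # released entries are not replicated
        replica = dict(record)                   # owner and version ID are unchanged
        replica["timestamp"] = pull_time + VERIFICATION_INTERVAL
        return replica

    client1 = {"name": "Client1", "owner": "WINS-A", "version_id": "4B3", "state": "active"}
    client2 = {"name": "Client2", "owner": "WINS-A", "version_id": "4C2", "state": "released"}
    print(make_replica(client1, datetime(2000, 1, 1, 8, 31, 15)))   # stamped 1/25/00 8:31:15 AM
    print(make_replica(client2, datetime(2000, 1, 1, 8, 31, 15)))   # None; Client2 is skipped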

Assuming Client2 remains shut down for the duration of the extinction interval, it is placed in the tombstone state. At the first scavenging after 8:23:43 AM on January 5, 2000 (assuming an extinction interval of four days), the database on WINS-A contains the entries shown in Table 7.12.

Table 7.12 WINS-A Database After Scavenging

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/9/00 6:35:26 AM
Client2   192.168.22.102  Unique, tombstone, H-node, dynamic   WINS-A  657         1/6/00 9:50:53 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/25/00 8:30:45 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/25/00 8:30:45 AM

Note that Client2 has entered the tombstone state and that both its time stamp and its version ID have changed. The time stamp is now the sum of the current time and the extinction timeout, and the new version ID means that this entry is replicated at the next replication. Note also that Client1 has a new time stamp while retaining its version ID; it has been renewed throughout the last four days. The renewal rate depends on the client stack.
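
The tombstone transition can be sketched the same way (again an illustration with invented field names; the one-day extinction timeout is the value used in this example): the entry receives a new version ID so that the state change replicates, and a time stamp of the scavenging time plus the extinction timeout.

    from datetime import datetime, timedelta

    EXTINCTION_TIMEOUT = timedelta(days=1)       # value used in this example

    def tombstone(record, next_version_id, scavenge_time):
        record["state"] = "tombstone"
        record["version_id"] = next_version_id   # new version ID forces replication
        record["timestamp"] = scavenge_time + EXTINCTION_TIMEOUT
        return record

    client2 = {"name": "Client2", "owner": "WINS-A", "version_id": "4C2", "state": "released"}
    print(tombstone(client2, "657", datetime(2000, 1, 5, 9, 50, 53)))   # stamped 1/6/00 9:50:53 AM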

After replication at 10:00:23 A.M., the database on WINS-B contains the entries shown in Table 7.13 (note that Client3 and Client4 were renewed).

Table 7.13 WINS-B Database After Replication

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/25/00 8:31:15 AM
Client2   192.168.22.102  Unique, tombstone, H-node, dynamic   WINS-A  657         1/6/00 10:00:23 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/9/00 8:11:12 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/9/00 8:12:21 AM

If Client2 remains down for one more day, exceeding the extinction timeout, it will be removed from the databases when it is next scavenged.

Once Client2 is removed, the database on WINS-A contains the entries shown in Table 7.14.

Table 7.14 WINS-A Database After Client 2 Is Removed

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/11/00 9:45:56 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/25/00 8:30:45 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/25/00 8:30:45 AM

After Client2 is removed, the database on WINS-B contains the entries shown in Table 7.15.

Table 7.15 WINS-B Database After Client 2 Is Removed

Name      Address         Flags                                Owner   Version ID  Time stamp
Client1   192.168.22.101  Unique, active, H-node, dynamic      WINS-A  4B3         1/25/00 8:31:15 AM
Client3   192.168.55.103  Unique, active, H-node, dynamic      WINS-B  78F         1/11/00 9:44:27 AM
Client4   192.168.55.104  Unique, active, H-node, dynamic      WINS-B  79C         1/11/00 9:46:44 AM

During the first scavenging after 8:30 A.M. on January 25, 2000, WINS-A verifies with WINS-B that Client3 and Client4 are still valid active names. WINS-B does the same for Client1 with WINS-A.
