Windows Server 2003 : Network Load-Balancing Clusters (part 1) - NLB Operation Styles and Modes, Port Rules

NLB in Windows Server 2003 is accomplished by a special network driver that works between the drivers for the physical network adapter and the TCP/IP stack. This driver communicates with the NLB program (called wlbs.exe, for the Windows Load Balancing Service) running at the application layer—the same layer in the OSI model as the application you are clustering. NLB can work over FDDI- or Ethernet-based networks—even wireless networks—at up to gigabit speeds.

Why would you choose NLB? For a few reasons:

  • NLB is an inexpensive way to make a TCP/IP-dependent application somewhat fault tolerant, without the expense of maintaining a true server cluster with fault-tolerant components. No special hardware is required to create an NLB cluster. It's also cheap hardware-wise because you need only two network adapters to mitigate a single point of failure.

  • The "shared nothing" approach—meaning each server owns its own resources and doesn't share them with the cluster for management purposes—is easier to administer and less expensive to implement, although there is always some data lag between servers while information is replicated among the members. (This approach also has its drawbacks: NLB can only direct clients to back-end servers or to independently replicated data.)

  • Fault tolerance is provided at the network layer, ensuring that network connections are not directed to a server that is down.

  • Performance is improved for your web or FTP resource because load is distributed automatically among all members of the NLB cluster.

NLB works in a seemingly simple way: all computers in an NLB cluster have their own IP address, just as all networked machines do these days, but they also share a single, cluster-aware IP address that allows each member to answer requests on that address. NLB takes care of the IP address conflict problem and automatically directs clients that connect to the shared IP address to one of the cluster members.
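
To make that concrete, below is a minimal Python sketch of the filtering decision each member makes for a packet addressed to the shared IP. The real algorithm is internal to the NLB driver and isn't documented at this level, so the CRC-based hashing and numeric host IDs here are illustrative assumptions, not the actual implementation:

    import zlib

    # Conceptual model only: every cluster member sees each packet sent to
    # the shared cluster IP, runs the same deterministic computation, and
    # exactly one member accepts the packet. zlib.crc32 stands in for the
    # NLB driver's internal hashing function.
    def accepting_host(client_ip, host_ids):
        bucket = zlib.crc32(client_ip.encode()) % len(host_ids)
        return sorted(host_ids)[bucket]

    def should_accept(my_id, client_ip, host_ids):
        # No per-packet coordination is needed: all hosts agree because
        # they all compute the same answer from the same inputs.
        return accepting_host(client_ip, host_ids) == my_id

    hosts = [1, 2, 3, 4]
    print(should_accept(2, "203.0.113.25", hosts))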

NLB clusters support a maximum of 32 cluster members, meaning that no more than 32 machines can participate in the load-balancing and sharing features. Applications with a load beyond what a single 32-member cluster can handle typically deploy multiple clusters and use some sort of DNS load-balancing technique or device to distribute requests among them.
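
As a rough illustration of that technique, this sketch models a name server rotating answers across the virtual IPs of several NLB clusters. The addresses and the simple rotation policy are assumptions made for the example; a real deployment would rely on round-robin DNS records or a dedicated load-balancing device:

    from itertools import cycle

    # Hypothetical virtual IPs, one per NLB cluster (illustrative values).
    cluster_vips = cycle(["203.0.113.10", "203.0.113.20", "203.0.113.30"])

    def resolve(name):
        # Stand-in for a round-robin DNS server: successive queries for
        # the same name are answered with the next cluster's virtual IP.
        return next(cluster_vips)

    for _ in range(4):
        print(resolve("www.example.com"))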

When considering an NLB cluster for your application, ask yourself the following questions: how will a failure affect the application and the other cluster members? If you are running a high-volume e-commerce site and one member of your cluster fails, are the other servers in the cluster adequately equipped to handle the extra traffic from the failed server? A lot of cluster implementations miss this important concept and later see the consequence—a cascading failure in which the load from each failed server is shifted onto the surviving servers, which then fail from overload in turn. Such a scenario is entirely likely, and it entirely defeats the true purpose of a cluster. Avoid it by ensuring that all cluster members have sufficient hardware specifications to handle additional traffic when necessary.
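
A quick back-of-the-envelope check makes the point: assuming load redistributes evenly, losing one of N equally loaded servers pushes each survivor from utilization u to u * N / (N - 1). A small Python sketch:

    def post_failure_utilization(n_servers, utilization):
        """Per-server utilization after one of n equally loaded servers
        fails, assuming its load spreads evenly across the survivors."""
        return utilization * n_servers / (n_servers - 1)

    print(post_failure_utilization(4, 0.60))  # 0.80: survivable
    print(post_failure_utilization(4, 0.80))  # ~1.07: overload, cascade risk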

Also examine the kind of application you are planning to cluster. What types of resources does it use extensively? Different types of applications stress different components of the systems participating in a cluster. Most enterprise applications include some sort of performance-testing utility; take advantage of it in a testing lab to determine where potential bottlenecks might lie.

Web applications, Terminal Services, and Microsoft's new ISA Server 2004 product can take advantage of NLB clustering.

It's important to be aware that NLB cannot detect when a service on a server has crashed while the machine itself is still up, so it can direct a user to a system that can't offer the requested service.
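
Because of this blind spot, administrators commonly pair NLB with an external health check that probes the service itself and pulls an unresponsive host out of the rotation. The following is a minimal TCP probe, assuming the clustered service is a web server on port 80; it is an add-on you would write yourself, not a feature of NLB:

    import socket

    def service_alive(host, port=80, timeout=2.0):
        """Return True if something accepts TCP connections on host:port.
        NLB won't make this check itself; it only notices dead machines."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if not service_alive("192.0.2.11"):
        print("Service down: take this host out of the cluster rotation")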


1. NLB Terminology

Before we dig deeper into our coverage of NLB, let's discuss a few terms that you will see. Some of the most common NLB technical terms are:


NLB driver

This driver resides in memory on all members of a cluster and is instrumental in choosing which cluster node will accept and process each packet. Coupled with port rules and client affinity (both defined in the following pages), the driver decides whether to send the packet up the TCP/IP stack to the application on the current machine, or to ignore the packet because another server in the cluster will handle it.


Unicast mode

In unicast mode, NLB replaces the built-in MAC address of each host's cluster network adapter with a single cluster MAC address shared by all members, so traffic sent to the cluster address reaches every host.


Multicast mode

In multicast mode, NLB adds a common multicast MAC address to each host's cluster adapter while the adapter keeps its own unique built-in MAC address, so cluster traffic still reaches every member but hosts remain individually addressable.


Port rules

Port rules define the applications on which NLB will "work its magic," so to speak. Certain applications listen for packets sent to them on specific port numbers—for example, web servers usually listen for packets addressed to TCP port 80. You use port rules to tell NLB which ports to answer requests on and how to load-balance the traffic arriving at them.


Affinity

Affinity is a setting that controls whether traffic originating from a particular client should always be returned to the same cluster node. Effectively, this controls how client sessions stick to individual cluster hosts.

2. NLB Operation Styles and Modes

An NLB cluster can operate in four different ways:

  • With a single network card in each server, using unicast mode

  • With multiple network cards in each server, using unicast mode

  • With a single network card in each server, using multicast mode

  • With multiple network cards in each server, using multicast mode

You cannot mix unicast and multicast modes among the members of your cluster. All members must be running either unicast or multicast mode, although the number of cards in each member can differ.

The following sections detail each mode of operation.

2.1. Single card in each server in unicast mode

A single network card in each server operating in unicast mode requires less hardware, so obviously it's less expensive than maintaining multiple NICs in each cluster member. However, network performance is reduced because of the overhead of using the NLB driver over only one network card—cluster traffic still has to pass through one adapter, which can be easily saturated, and is additionally run through the NLB driver for load balancing. This can create real hang-ups in network performance.

An additional drawback is that cluster hosts can't communicate with each other through the usual methods, such as pinging—this isn't supported when using just a single adapter in unicast mode. Because every member shares the same cluster MAC address, the Address Resolution Protocol (ARP) can't resolve individual members. Similarly, NetBIOS isn't supported in this mode either.

This configuration is shown in Figure 1.

Figure 1. Single card in each server in unicast mode

2.2. Multiple cards in each server in unicast mode

This is usually the preferred configuration for NLB clusters because it enables the most functionality for the price in equipment, although it is inherently more expensive because of the second network adapter in each cluster member. Having that second adapter means there are no limitations on regular communications between members of the NLB cluster. Additionally, NetBIOS is supported through the first configured network adapter for simpler name resolution. Routers of all kinds support this method, and having more than one adapter in a machine removes the bottlenecks found with only one adapter.

This configuration is shown in Figure 2.

Figure 2. Multiple cards in each server in unicast mode

2.3. Single card in each server in multicast mode

Using a single card in multicast mode allows members of the cluster to communicate with each other normally, but network performance is still reduced because you still are using only a single network card. Router support might be spotty because of the need to support multicast MAC addresses, and NetBIOS isn't supported within the cluster.

This configuration is shown in Figure 3.

Figure 3. Single card in each server in multicast mode

2.4. Multiple cards in each server in multicast mode

This mode is used when some hosts have one network card and others have more than one, and all require regular communications among themselves. In this case, all hosts need to be in multicast mode because all hosts in an NLB cluster must be running the same mode. You might run into problems with router support using this model, but with careful planning you can make it work.

This configuration is shown in Figure 4.

Figure 4. Multiple cards in each server in multicast mode

3. Port Rules

NLB clusters feature the ability to set port rules, which are simply ways to instruct Windows Server 2003 how to handle each TCP/IP port's cluster network traffic. NLB filters traffic in one of three modes:

  • Disabled, where all network traffic for the associated port or ports is blocked

  • Single host mode, where network traffic for the associated port or ports is handled by one specific machine in the cluster (still with fault-tolerance features enabled)

  • Multiple hosts mode (the default), where multiple hosts in the cluster can handle traffic for a specific port or range of ports

The rules contain the following parameters (a sketch of how such a rule might look in code follows the list):

  • The virtual IP address to which the rule should be applied

  • The port range for which this rule should be applied

  • The protocols for which this rule should apply, including TCP, UDP, or both

  • The filtering mode that specifies how the cluster handles traffic matching the port range and protocols, as described earlier
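
As a rough illustration, here is one way such a rule could be represented in Python. The field names and the matching logic are assumptions made for the example; Windows stores and evaluates port rules inside the NLB driver and its configuration tools, not in user code like this:

    from dataclasses import dataclass

    @dataclass
    class PortRule:
        virtual_ip: str       # cluster IP address the rule applies to
        port_start: int       # first port in the range
        port_end: int         # last port in the range
        protocol: str         # "TCP", "UDP", or "Both"
        filtering_mode: str   # "Disabled", "Single", or "Multiple"

        def matches(self, dest_ip, port, proto):
            """Does this rule govern a packet with these properties?"""
            return (dest_ip == self.virtual_ip
                    and self.port_start <= port <= self.port_end
                    and self.protocol in (proto, "Both"))

    web_rule = PortRule("203.0.113.10", 80, 80, "TCP", "Multiple")
    print(web_rule.matches("203.0.113.10", 80, "TCP"))  # True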

In addition, you can select one of three options for client affinity, which, simply put, controls whether repeat traffic from the same client is always directed to the same cluster host: None, Single, and Class C. Single ensures that all network traffic from a particular client IP address is directed to the same cluster host, and Class C does the same for all clients in a particular Class C (/24) network. None indicates there is no client affinity, and traffic can go to any cluster host.
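
In practice, the affinity setting changes which parts of a request NLB hashes on when it picks a host for the traffic. The exact hash inputs are an internal detail of the driver, so treat this sketch as a conceptual model only:

    import ipaddress

    def affinity_key(affinity, client_ip, client_port):
        """The value a host would hash to pick a handler, per affinity."""
        if affinity == "None":
            # Each connection can land on a different host.
            return (client_ip, client_port)
        if affinity == "Single":
            # All connections from one client IP stay on one host.
            return client_ip
        if affinity == "Class C":
            # All clients in the same /24 network stay on one host, which
            # keeps sessions together even behind a bank of proxies.
            net = ipaddress.ip_network(client_ip + "/24", strict=False)
            return str(net.network_address)
        raise ValueError("unknown affinity: " + affinity)

    print(affinity_key("Class C", "198.51.100.37", 51200))  # 198.51.100.0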

When using port rules in an NLB cluster, it's important to remember that the number and content of port rules must match exactly on all members of the cluster. When joining a node to an NLB cluster, if the number or content of port rules on the joining node doesn't match the number or content of rules on the existing member nodes, the joining member will be denied membership to the cluster. You need to synchronize these port rules manually across all members of the NLB cluster.
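
Because there is no automatic synchronization, a pre-join sanity check can save you a failed join. This hypothetical helper simply compares two rule lists for an exact match, reusing the PortRule sketch from above:

    def rules_match(node_rules, cluster_rules):
        """True only if both nodes define exactly the same port rules."""
        def key(r):
            return (r.virtual_ip, r.port_start, r.port_end,
                    r.protocol, r.filtering_mode)
        return sorted(node_rules, key=key) == sorted(cluster_rules, key=key)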
