4. Selecting a Quorum Model
Every
node in a server cluster maintains a copy of the cluster database in
its registry. The cluster database contains the properties of all the
cluster’s elements, including physical components such as servers,
network adapters, and shared storage devices, and cluster objects such
as applications and other logical resources. When a cluster node goes
offline for any reason, its cluster database is no longer updated as
the cluster’s status changes. When the node comes back online, it must
have a current copy of the database to rejoin the cluster, and it
obtains that copy from the cluster’s quorum resource.
A
cluster’s quorum contains all the configuration data needed for the
recovery of the cluster, and the quorum resource is the drive where the
quorum is stored. To create a cluster, the first node must be able to
take control of the quorum resource so that it can save the quorum data
there. Only one system can have control of the quorum resource at any
one time. Additional nodes must be able to access the quorum resource
so that they can create the cluster database in their registries.
Selecting
the location for the quorum is a crucial part of creating a cluster.
Server clusters running Windows Server 2003 support the following three
types of quorum models:
Single-node cluster
A cluster that consists of only one server. Because there is no need
for a shared storage solution, the application data store and the
quorum resource are located on the computer’s local drive. The primary
reason for creating single-node clusters is testing and development.
Single-quorum device cluster
The cluster uses a single quorum resource, which is one of the shared
storage devices accessible by all the nodes in the cluster. This is the
quorum model that most server cluster installations use.
Majority node set cluster
A separate copy of the quorum is stored in each cluster node, with the
quorum resource responsible for keeping all copies of the quorum
consistent. Majority node set clusters are particularly well suited to
geographically dispersed server clusters and clusters that do not have
shared data storage devices.
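To make the majority rule concrete, the following sketch (illustrative Python, not part of any Windows Server tooling) applies the arithmetic behind a majority node set cluster: the cluster keeps running only while more than half of its configured nodes remain online.

    def has_quorum(total_nodes: int, online_nodes: int) -> bool:
        """A majority node set cluster retains quorum only while a majority
        of its configured nodes (total_nodes // 2 + 1) remains online."""
        return online_nodes >= total_nodes // 2 + 1

    # A five-node majority node set cluster survives the loss of two nodes...
    print(has_quorum(total_nodes=5, online_nodes=3))   # True
    # ...but not the loss of three.
    print(has_quorum(total_nodes=5, online_nodes=2))   # False
    # A two-node majority node set cluster cannot tolerate any node failure.
    print(has_quorum(total_nodes=2, online_nodes=1))   # False

The same arithmetic shows why this model is most useful in clusters of three or more nodes: a two-node majority node set cluster cannot survive the loss of either node.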
5. Creating a Server Cluster
Before
you actually create the cluster, you must select, evaluate, and install
a shared storage resource and install the critical applications on the
computers running Windows Server 2003. All the computers that are to
become cluster nodes must have access to the shared storage solution
you have selected; you should know your applications’ capabilities with
regard to partitioning; and you should have decided how to deploy them.
Once you have completed these tasks, you will use the Cluster
Administrator tool to create and manage server clusters. (See Figure 6.)

To create a new cluster, you must have the following information available:
The name of the domain in which the cluster will be located
The host name to assign to the cluster
The static IP address to assign to the cluster
The name and password for a cluster service account
With this information in hand, you can proceed to deploy the cluster, taking the following basic steps:
1. Start up the computer running Windows Server 2003 that will be the first node in the cluster. At this time, the other servers you will later add to the cluster should not be running.
2. Use the Cluster Administrator application on the first server to create a new cluster. During this process, the New Server Cluster Wizard detects the storage devices and network interfaces on the computer and determines whether they are suitable for use by the cluster. You also supply the name and IP address for the cluster and the name and password for the cluster service account.
3. Verify that the cluster is operational and that you can access the cluster disks. At this point, you have created a single-node cluster.
4. Start up the computers running Windows Server 2003 that will become the other nodes in the cluster.
5. Use the Add Nodes Wizard in Cluster Administrator to make the other servers part of the cluster.
6. Test the cluster by using Cluster Administrator to stop the cluster service on each node in turn, verifying that the cluster disks are still available after each stoppage.
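As a purely hypothetical illustration (the function and example names below are invented, and the New Server Cluster Wizard performs its own validation), the following Python sketch shows the kind of pre-flight check you could run on the four pieces of information listed earlier before beginning step 2.

    import ipaddress
    import re

    def validate_cluster_inputs(domain: str, cluster_name: str,
                                cluster_ip: str, service_account: str) -> list[str]:
        """Hypothetical pre-flight check of the information the New Server
        Cluster Wizard asks for; it validates formats only."""
        problems = []
        if not domain:
            problems.append("A domain name is required.")
        # NetBIOS-style computer names are limited to 15 characters.
        if not re.fullmatch(r"[A-Za-z0-9-]{1,15}", cluster_name):
            problems.append("Cluster name must be 1-15 letters, digits, or hyphens.")
        try:
            ipaddress.ip_address(cluster_ip)    # must be a valid static IP address
        except ValueError:
            problems.append(f"{cluster_ip!r} is not a valid IP address.")
        if "\\" not in service_account:
            problems.append("Give the service account as DOMAIN\\username.")
        return problems

    print(validate_cluster_inputs("contoso.com", "CLUSTER1",
                                  "192.168.10.200", r"CONTOSO\clustersvc"))   # []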
Once
you have added all the nodes to the cluster, you can view information
about the nodes in Cluster Administrator as well as manage the nodes
and their resources from a central location. In addition, there are
many clustering features you can use to configure how the cluster
behaves under various conditions.
When
managing a cluster, you frequently work with cluster resources. A
cluster resource is any physical or logical element the cluster service
can manage by bringing it online or offline and moving it to a
different node. By default, the cluster resources supported by server
clusters running Windows Server 2003 include storage devices,
configuration parameters, scripts, and applications. When you deploy a
third-party application on a server cluster, the application developer
typically includes resource types that are specific to that application.
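Put another way, a cluster resource is anything that supports three basic operations: bring online, take offline, and move to another node. The minimal sketch below (an invented Python class for illustration, not the cluster service’s actual interface) models that contract.

    class ClusterResource:
        """Conceptual model of a cluster resource: something the cluster
        service can bring online, take offline, or move between nodes."""

        def __init__(self, name: str, owner_node: str):
            self.name = name
            self.owner_node = owner_node
            self.online = False

        def bring_online(self) -> None:
            self.online = True

        def take_offline(self) -> None:
            self.online = False

        def move_to(self, node: str) -> None:
            # Moving a resource means taking it offline on the current node
            # and bringing it back online on the destination node.
            self.take_offline()
            self.owner_node = node
            self.bring_online()

    disk = ClusterResource("Disk Q:", owner_node="NODE1")
    disk.bring_online()
    disk.move_to("NODE2")
    print(disk.owner_node, disk.online)   # NODE2 True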
Some configuration tasks you can perform in Cluster Administrator are as follows:
Create resource groups
A resource group is a collection of cluster resources that functions as
a single failover unit. When one resource in the group malfunctions,
the cluster service fails the entire group over to another node. You
use the New Group Wizard to create resource groups, after which you can
create new resources or move existing resources into the group.
Define resource dependencies
You can configure a specific cluster resource to be dependent on other
resources in the same resource group. The cluster service uses these
dependencies to determine the order in which it starts and stops the
resources on a node in the event of a failover. For example, when an
application is dependent on a particular shared disk where the
application is stored, the cluster service always brings down the
application on a node before bringing down the disk. Conversely, when
launching the application on a new node, the service always starts the
disk before the application so that the disk is available to the
application when it starts. (A dependency-ordering sketch follows this list.)
Configure the cluster network role
For each network to which a cluster is connected, you can specify
whether the cluster should use that network for client access only, for
internal cluster communications only, or for both.
Configure failover relationships
For each resource the cluster manages, you can specify a list of nodes
that are permitted to run that resource. With this capability, you can
configure a wide variety of failover policies for your applications.
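The resource-dependency rules described above amount to ordering a small dependency graph: dependencies come online before the resources that need them, and go offline in the reverse order. The sketch below (illustrative Python with invented resource names, not the cluster service’s own logic) shows that ordering for a resource group in which an application depends on a shared disk and a network name.

    from graphlib import TopologicalSorter

    # A resource group, expressed as a map from each resource to the set of
    # resources it depends on (names are purely illustrative).
    group = {
        "SQL Application": {"Disk Q:", "Network Name"},
        "Network Name": {"IP Address"},
        "Disk Q:": set(),
        "IP Address": set(),
    }

    start_order = list(TopologicalSorter(group).static_order())
    stop_order = list(reversed(start_order))

    print("Bring online:", start_order)
    # e.g. ['Disk Q:', 'IP Address', 'Network Name', 'SQL Application']
    print("Take offline:", stop_order)
    # e.g. ['SQL Application', 'Network Name', 'IP Address', 'Disk Q:']

Because the whole group is the unit of failover, the same ordering is replayed on whichever node the group moves to.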
Configuring Failover Policies
By
configuring the failover relationships of your cluster applications and
other resources, you can implement a number of different failover
policies that control which cluster nodes an application uses and when.
With small server clusters, failover is usually a simple affair because
you don’t have that many nodes to choose from. As server clusters grow
larger, however, their failover capabilities become more flexible. Some
failover policies you might consider using are as follows:
Failover pairs
In a large server cluster running several applications, each
application is running on one node and has one designated standby node.
This makes server capacity planning simple, as the servers are never
running more than one application. However, half of the cluster’s
processing capacity is not in use, and, in the event of multiple node
failures, some applications could go offline unless an administrator
intervenes.
Hot-standby server
A single node functions as the designated standby server for two or
more applications. This option uses the cluster’s processing capacity
more efficiently (fewer servers are idle), but might not handle
multiple node failures well. For capacity planning, the standby server
has to support only the most resource-intensive application it might
run, unless you want to plan for multiple node failures, in which case
the standby must be capable of running multiple applications at once.
N+I
An expanded form of the hot-standby server policy, in which you
configure a number of active nodes running different applications (N)
to fail over to any one of a number of idle servers (I). As an example,
you can create a six-node server cluster with four applications running
on four separate nodes, plus two standby nodes that are idle. When one
of the active nodes malfunctions, its application fails over to one of
the standby servers. This policy is better at handling multiple server
failures than failover pairs or hot-standby servers; a selection sketch
contrasting this policy with the failover ring appears at the end of this list.
Failover ring
Each node in a server cluster runs an application, and you configure
each application to fail over to the next node. This policy is suitable
for relatively small applications because, in the event of a failure, a
server might have to run two or more applications at once. In the event
of multiple node failures, the application load could be unbalanced
across the active nodes. For example, in a four-node cluster, if Server
1 fails, Server 2 must run its own application and that of Server 1. If
Server 2 then fails, Server 3 must take on the Server 2 and Server 1
applications in addition to its own, while Server 4 continues to run
only one application. This makes server capacity planning difficult.
Random
In some cases, the best policy is for the administrator not to define
any specific failover relationships at all, and to let the cluster
service be responsible for failing over resources to other nodes in the
cluster. This policy is usually preferable for smaller applications, so
that a single node can conceivably run multiple applications if
necessary. Random failovers also place less of a burden on the cluster
administrator.
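The practical difference between these policies lies in how the next host for a failed application is chosen. The sketch below (conceptual Python; in the real cluster service these policies are expressed through the preferred-node lists described earlier) contrasts an N+I selection with a failover ring selection, using the six-node and four-node examples above.

    def n_plus_i_target(failed_node, standby_pool, online_nodes):
        """N+I: move the failed node's application to any designated
        standby node that is still online."""
        for standby in standby_pool:
            if standby in online_nodes:
                return standby
        return None   # no standby left; an administrator must intervene

    def ring_target(failed_node, ring, online_nodes):
        """Failover ring: move the application to the next online node
        in the ring, wrapping around at the end."""
        start = ring.index(failed_node)
        for offset in range(1, len(ring)):
            candidate = ring[(start + offset) % len(ring)]
            if candidate in online_nodes:
                return candidate
        return None

    # Six-node N+I cluster: four active nodes (N1-N4), two idle standbys (N5, N6).
    online = {"N1", "N2", "N3", "N5", "N6"}               # N4 has failed
    print(n_plus_i_target("N4", ["N5", "N6"], online))    # N5

    # Four-node failover ring: Server 1 fails, then Server 2.
    ring = ["S1", "S2", "S3", "S4"]
    online = {"S2", "S3", "S4"}
    print(ring_target("S1", ring, online))                # S2 (now runs two applications)
    online = {"S3", "S4"}
    print(ring_target("S2", ring, online))                # S3 (now runs three applications)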