Guide To Creating and Configuring A Server Cluster Under Windows Server 2003 White Paper
Abstract
Instructions for creating and configuring a server cluster with servers connected to a
shared cluster storage device and running Windows Server 2003 Enterprise Edition or
Windows Server 2003 Datacenter Edition.
Information in this document, including URL and other Internet Web site references, is
subject to change without notice. Unless otherwise noted, the example companies,
organizations, products, domain names, e-mail addresses, logos, people, places, and
events depicted herein are fictitious, and no association with any real company,
organization, product, domain name, e-mail address, logo, person, place, or event is
intended or should be inferred. Complying with all applicable copyright laws is the
responsibility of the user. Without limiting the rights under copyright, no part of this
document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying,
recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft, Active Directory, Windows, Windows logo, Windows NT, and Windows Server
are either registered trademarks or trademarks of Microsoft Corporation in the United
States and/or other countries.
Contents
Introduction (Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper)
Appendix (Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper)
  Advanced Testing
  SCSI Drive Installations
    Configuring the SCSI Devices
    Terminating the Shared SCSI Bus
  Storage Area Network Considerations
    Arbitrated Loops (FC-AL)
    Switched Fabric (FC-SW)
    Using SANs with Server Clusters
      SCSI Resets
      HBAs
      Zoning and LUN Masking
    Requirements for Deploying SANs with Windows Server 2003 Clusters
    Guidelines for Deploying SANs with Windows Server 2003 Server Clusters
Related Links (Guide to Creating and Configuring a Server Cluster under Windows Server 2003 White Paper)
Server clusters allow client access to applications and resources in the event of failures
and planned outages. If one of the servers in the cluster is unavailable because of a
failure or maintenance requirements, resources and applications move to other available
cluster nodes.
For Windows Clustering solutions, the term “high availability” is used rather than “fault
tolerant.” Fault-tolerant technology offers a higher level of resilience and recovery. Fault-
tolerant servers typically use a high degree of hardware redundancy plus specialized
software to provide near-instantaneous recovery from any single hardware or software
fault. These solutions cost significantly more than a Windows Clustering solution because
organizations must pay for redundant hardware that waits in an idle state for a fault.
Server clusters do not guarantee non-stop operation, but they do provide sufficient
availability for most mission-critical applications. The cluster service can monitor
applications and resources and automatically recognize and recover from many failure
conditions. This provides flexibility in managing the workload within a cluster. It also
improves overall system availability.
• Scalability: Cluster services can grow to meet increased demand. When the
overall load for a cluster-aware application exceeds the cluster’s capabilities,
additional nodes can be added.
This document provides instructions for creating and configuring a server cluster with
servers connected to a shared cluster storage device and running Windows Server 2003
Enterprise Edition or Windows Server 2003 Datacenter Edition. Intended to guide you
through the process of installing a typical cluster, this document does not explain how to
install clustered applications. Windows Clustering solutions that implement non-traditional
quorum models, such as Majority Node Set (MNS) clusters and geographically dispersed
clusters, also are not discussed. For additional information about server cluster concepts
as well as installation and configuration procedures, see the Windows Server 2003
Online Help.
Software Requirements
• Microsoft Windows Server 2003 Enterprise Edition or Windows Server 2003
Datacenter Edition installed on all computers in the cluster.
Hardware Requirements
• Clustering hardware must be on the cluster service Hardware Compatibility
List (HCL). To find the latest version of the cluster service HCL, go to the
Windows Hardware Compatibility List at
http://www.microsoft.com/whdc/hcl/default.mspx, and then search for cluster. The
entire solution must be certified on the HCL, not just the individual components.
For additional information, see the following article in the Microsoft Knowledge
Base:
309395 The Microsoft Support Policy for Server Clusters and the Hardware Compatibility List
Note
If you are installing this cluster on a storage area network (SAN) and plan to
have multiple devices and clusters sharing the SAN with a cluster, the
solution must also be on the “Cluster/Multi-Cluster Device” Hardware
Compatibility List. For additional information, see the following article in the
Microsoft Knowledge Base: 304415 Support for Multiple Clusters Attached to
the Same SAN Device
• Storage cables to attach the shared storage device to all computers. Refer to
the manufacturer's instructions for configuring storage devices. See the appendix
that accompanies this article for additional information about specific
configuration needs when using SCSI or Fibre Channel.
• All hardware should be identical across all nodes: slot for slot, card for card,
and at the same BIOS and firmware revisions. This makes configuration easier and
eliminates compatibility problems.
Network Requirements
• A unique NetBIOS name.
Note
Server Clustering does not support the use of IP addresses assigned from
Dynamic Host Configuration Protocol (DHCP) servers.
• Each node must have at least two network adapters—one for connection to
the client public network and the other for the node-to-node private cluster
network. A dedicated private network adapter is required for HCL certification.
• All nodes must have two physically independent LANs or virtual LANs for
public and private communication.
• All shared disks, including the quorum disk, must be physically attached to a
shared bus.
Note
The requirement above does not hold true for Majority Node Set (MNS)
clusters, which are not covered in this guide.
• Shared disks must be on a different controller than the one used by the
system drive.
• Verify that disks attached to the shared bus can be seen from all nodes. This
can be checked at the host adapter setup level. Refer to the manufacturer’s
documentation for adapter-specific instructions.
237853 Dynamic Disk Configuration Unavailable for Server Cluster Disk Resources
• All shared disks must be configured as master boot record (MBR) disks on
systems running the 64-bit versions of Windows Server 2003.
Cluster Installation
Installation Overview
During the installation process, some nodes will be shut down while others are being
installed. This step helps guarantee that data on disks attached to the shared bus is not
lost or corrupted. This can happen when multiple nodes simultaneously try to write to a
disk that is not protected by the cluster software. The default behavior for mounting new
disks has changed in Windows Server 2003 from the behavior in the Microsoft®
Windows® 2000 operating system. In Windows Server 2003, logical disks that are
not on the same bus as the boot partition will not be automatically mounted and assigned
a drive letter. This helps ensure that the server will not mount drives that could possibly
belong to another server in a complex SAN environment. Although the drives will not be
mounted, it is still recommended that you follow the procedures below to be certain the
shared disks will not become corrupted.
Use the table below to determine which nodes and storage devices should be turned on
during each step.
The steps in this guide are for a two-node cluster. However, if you are installing a cluster
with more than two nodes, the Node 2 column lists the required state of all other nodes.
• Setting up networks.
• Setting up disks.
Perform these steps on each cluster node before proceeding with the installation of
cluster service on the first node.
To configure the cluster service, you must be logged on with an account that has
administrative permissions to all nodes. Each node must be a member of the same
domain. If you choose to make one of the nodes a domain controller, have another
domain controller available on the same subnet to eliminate a single point of failure and
enable maintenance on that node.
Before configuring the cluster service, you must be logged on locally with a domain
account that is a member of the local administrators group.
Note
The installation will fail if you attempt to join a node to a cluster that has a blank
password for the local administrator account. For security reasons, Windows
Server 2003 prohibits blank administrator passwords.
Setting Up Networks
Each cluster node requires at least two network adapters with two or more independent
networks, to avoid a single point of failure. One is to connect to a public network, and
one is to connect to a private network consisting of cluster nodes only. Servers with
multiple network adapters are referred to as “multi-homed.” Because multi-homed servers
can be problematic, it is critical that you follow the network configuration
recommendations outlined in this document.
Microsoft requires that you have two Peripheral Component Interconnect (PCI) network
adapters in each node to be certified on the Hardware Compatibility List (HCL) and
supported by Microsoft Product Support Services. Configure one of the network adapters
on your production network with a static IP address, and configure the other network
adapter on a separate network with another static IP address on a different subnet for
private cluster communication.
Communication between server cluster nodes is critical for smooth cluster operations.
Therefore, you must ensure that the networks you use for cluster communication are
configured optimally and follow all hardware compatibility list requirements.
The private network adapter is used for node-to-node communication, cluster status
information, and cluster management. Each node’s public network adapter connects the
cluster to the public network where clients reside and should be configured as a backup
route for internal cluster communication. To do so, configure the roles of these networks
as either "Internal Cluster Communications Only" or "All Communications" for the
Cluster service.
Additionally, each cluster network must fail independently of all other cluster networks.
This means that two cluster networks must not have a component in common that can
cause both to fail simultaneously. For example, the use of a multiport network adapter to
attach a node to two cluster networks would not satisfy this requirement in most cases
because the ports are not independent.
To eliminate possible communication issues, remove all unnecessary network traffic from
the network adapter that is set to Internal Cluster communications only (this adapter is
also known as the heartbeat or private network adapter).
To verify that all network connections are correct, private network adapters must be on a
network that is on a different logical network from the public adapters. This can be
accomplished by using a cross-over cable in a two-node configuration or a dedicated
dumb hub in a configuration of more than two nodes. Do not use a switch, smart hub, or
any other routing device for the heartbeat network.
Note
Cluster heartbeats cannot be forwarded through a routing device because their
Time to Live (TTL) is set to 1. The public network adapters must be connected
only to the public network. If you have a virtual LAN, the latency
between the nodes must be less than 500 milliseconds (ms). Also, in Windows
Server 2003, heartbeats in Server Clustering have been changed to multicast;
therefore, you may want to make a MADCAP server available to assign the
multicast addresses. For additional information, see the following article in the
Microsoft Knowledge Base: 307962 Multicast Support Enabled for the Cluster
Heartbeat.
Figure 1 below outlines a four-node cluster configuration.
3. Click Rename.
5. Repeat steps 1 through 3, and then rename the public network adapter as
Public.
6. The renamed icons should look like those in Figure 2 above. Close the
Network Connections window. The new connection names will appear in Cluster
Administrator and automatically replicate to all other cluster nodes as they are
brought online.
3. In the Connections box, make sure that your bindings are in the following
order, and then click OK:
a. Public
b. Private
2. On the General tab, make sure that only the Internet Protocol (TCP/IP)
check box is selected, as shown in Figure 3 below. Click to clear the check boxes
for all other clients, services, and protocols.
Figure 3. Click to select only the Internet Protocol check box in the Private
Properties dialog box.
Note
Microsoft does not recommend that you use any type of fault-tolerant adapter or
"teaming" for the heartbeat. If you require redundancy for your heartbeat
connection, use multiple network adapters set to Internal Communication Only and
define their network priority in the cluster configuration. Because issues have
been seen with early multi-ported network adapters, verify that your firmware and
driver are at the most current revision if you use this technology. Contact your
network adapter manufacturer for information about compatibility on a server cluster.
For more information, see the following
article in the Microsoft Knowledge Base:
254101 Network Adapter Teaming and
Server Clustering.
5. On the General tab, verify that you have selected a static IP address that is
not on the same subnet or network as any other public network adapter. It is
recommended that you put the private network adapter in one of the following
private network ranges:
Note
For more information about valid IP
addressing for a private network, see
the following article in the Microsoft
Knowledge Base: 142863 Valid IP
Addressing for a Private Network.
6. Verify that there are no values defined in the Default Gateway box or under
Use the Following DNS server addresses.
8. On the DNS tab, verify that no values are defined. Make sure that the
Register this connection's addresses in DNS and Use this connection's
DNS suffix in DNS registration check boxes are cleared.
9. On the WINS tab, verify that there are no values defined. Click Disable
NetBIOS over TCP/IP as shown in Figure 6 on the next page.
10. When you close the dialog box, you may receive the following prompt: “This
connection has an empty primary WINS address. Do you want to continue?” If
you receive this prompt, click Yes.
11. Complete steps 1 through 10 on all other nodes in the cluster with different
static IP addresses.
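If you would rather script this addressing than use the Network Connections UI, the following command-prompt sketch shows one possible approach with netsh. The connection name Private matches the renamed adapter from the earlier procedure, while the 10.10.10.x address and mask are example values only; substitute your own private range and verify the exact netsh syntax on your system.

   rem Assign an example static IP address to the private adapter (run on node 1)
   netsh interface ip set address "Private" static 10.10.10.1 255.0.0.0
   rem Clear any DNS servers from the private adapter
   netsh interface ip set dns "Private" static none
   rem Confirm the resulting configuration
   ipconfig /all

On the second node, repeat the commands with a different address in the same range, for example 10.10.10.2.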
To verify name resolution, ping each node from a client using the node’s machine name
instead of its IP address. It should only return the IP address for the public network. You
may also want to try a PING –a command to do a reverse lookup on the IP Addresses.
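For example, assuming a node named NODE1 with a public IP address of 172.26.204.11 (both placeholder values), the checks described above might look like this from a command prompt:

   rem Forward lookup: should return only the public IP address for the node
   ping NODE1
   rem Reverse lookup: should resolve the public address back to the node name
   ping -a 172.26.204.11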
There are instances where the nodes may be deployed in an environment where there
are no pre-existing Microsoft® Windows NT® 4.0 domain controllers or Windows Server
2003 domain controllers. This scenario requires at least one of the cluster nodes to be
configured as a domain controller. However, in a two-node server cluster, if one node is a
domain controller, then the other node also must be a domain controller. In a four-node
cluster implementation, it is not necessary to configure all four nodes as domain
controllers. However, when following a “best practices” model and having at least one
backup domain controller, at least one of the remaining three nodes should be configured
as a domain controller. A cluster node must be promoted to a domain controller by using
the DCPromo tool before the cluster service is configured.
The dependence of Windows Server 2003 on DNS further requires that every node
that is a domain controller also be a DNS server if another DNS server that
supports dynamic updates and/or SRV records is not available (Active Directory-integrated
zones are recommended).
The following issues should be considered when deploying cluster nodes as domain
controllers:
• If the cluster nodes are the only domain controllers, then each must be a
DNS server as well. They should point to each other for primary DNS resolution
and to themselves for secondary resolution.
• The first domain controller in the forest/domain will take on all Operations
Master Roles. You can redistribute these roles to any node. However, if a node
fails, the Operations Master Roles assumed by that node will be unavailable.
Therefore, it is recommended that you do not run Operations Master Roles on
any cluster node. This includes Schema Master, Domain Naming Master,
Relative ID Master, PDC Emulator, and Infrastructure Master. These functions
cannot be clustered for high availability with failover.
Note
The cluster service account does not need to be a member of the Domain
Administrators group. For security reasons, granting domain administrator rights
to the cluster service account is not recommended.
The cluster service account requires the following rights to function properly on all nodes
in the cluster. The Cluster Configuration Wizard grants the following rights automatically:
For additional information, see the following article in the Microsoft Knowledge Base:
2. Click the plus sign (+) to expand the domain if it is not already expanded.
3. Right-click Users, point to New, and then click User.
4. Type the cluster name, as shown in Figure 7 below, and then click Next.
Note
If your administrative security policy
does not allow the use of passwords
that never expire, you must renew the
password and update the cluster
service configuration on each node
before password expiration. For
additional information, see the following
article in the Microsoft Knowledge Base:
305813 How to Change the Cluster
Service Account Password.
6. Right-click Cluster in the left pane of the Active Directory Users and
Computers snap-in, and then click Properties on the shortcut menu.
8. Click Administrators, and then click OK. This gives the new user account
administrative privileges on this computer.
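As a rough command-line equivalent of this procedure, hedged because your domain name, account name, and password policy will differ, you could create the Cluster service account and grant it local administrative rights as follows. EXAMPLEDOM is a placeholder domain name, and the second command must be run on each node:

   rem Create the Cluster service account in the domain (you are prompted for the password)
   net user Cluster * /add /domain
   rem On each node, add the account to the local Administrators group
   net localgroup Administrators EXAMPLEDOM\Cluster /add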
To proceed, turn off all nodes. Turn on the shared storage devices, and then turn
on node 1.
Important
A quorum disk failure could cause the
entire cluster to fail; therefore, it is
strongly recommended that you use a
volume on a hardware RAID array. Do
not use the quorum disk for anything
other than cluster management.
The quorum resource plays a crucial role in the operation of the cluster. In every
cluster, a single resource is designated as the quorum resource. A quorum resource
can be any Physical Disk resource with the following functionality:
• It replicates the cluster registry to all other nodes in the server cluster. By
default, the cluster registry is stored in the following location on each node:
%SystemRoot%\Cluster\Clusdb. The cluster registry is then replicated to the
MSCS\Chkxxx.tmp file on the quorum drive. These files are exact copies of each
other. The MSCS\Quolog.log file is a transaction log that maintains a record of all
changes to the checkpoint file. This means that nodes that were offline can have
these changes appended when they rejoin the cluster.
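Once the cluster is up and running, you can confirm where these files live from a command prompt. The paths below come directly from the description above; Q: is simply the quorum drive letter used in this example.

   rem Local copy of the cluster registry on each node
   dir %SystemRoot%\Cluster\Clusdb
   rem Checkpoint file and quorum log on the quorum disk
   dir Q:\MSCS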
During the cluster service installation, you must provide the drive letter for the quorum
disk. The letter Q is commonly used as a standard, and Q is used in the example.
Note
The wizard automatically sets the disk
to dynamic. To reset the disk to basic,
right-click Disk n (where n specifies the
disk that you are working with), and
then click Revert to Basic Disk.
9. By default, the partition size is set to the maximum available size. Click Next. (Multiple
logical disks are recommended over multiple partitions on one disk.)
10. Use the drop-down box to change the drive letter. Use a drive letter that is
farther down the alphabet than the default enumerated letters. Commonly, the
drive letter Q is used for the quorum disk, then R, S, and so on for the data disks.
For additional information, see the following article in the Microsoft Knowledge
Base:
Note
If you are planning on using volume
mount points, do not assign a drive
letter to the disk. For additional
information, see the following article in
the Microsoft Knowledge Base: 280297
How to Configure Volume Mount Points
on a Clustered Server.
11. Format the partition using NTFS. In the Volume Label box, type a name for
the disk. For example, Drive Q, as shown in Figure 8 below. It is critical to assign
drive labels for shared disks, because this can dramatically reduce
troubleshooting time in the event of a disk recovery situation.
If you are installing a 64-bit version of Windows Server 2003, verify that all disks are
formatted as MBR. GUID partition table (GPT) disks are not supported as clustered
disks. For additional information, see the following article in the Microsoft Knowledge
Base:
Verify that all shared disks are formatted as NTFS and designated as MBR Basic.
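If you prefer the command line to the Disk Management snap-in, the following sketch shows roughly equivalent diskpart and format commands. The disk number (1), the drive letter (Q), and the volume label are examples only; run the list disk command first and confirm that you have selected the correct shared disk before making any changes.

Example diskpart script (save as quorum.txt, then run diskpart /s quorum.txt):

   rem Create and letter the quorum partition on the shared disk
   select disk 1
   create partition primary
   assign letter=Q

Format the new partition as NTFS with a descriptive volume label:

   format Q: /FS:NTFS /V:DriveQ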
2. Right-click one of the shared disks (such as Drive Q:\), click New, and then
click Text Document.
3. Verify that you can successfully write to the disk and that the file was created.
4. Select the file, and then press the Del key to delete it from the clustered disk.
5. Repeat steps 1 through 4 for all clustered disks to verify they can be correctly
accessed from the first node.
6. Turn off the first node, turn on the second node, and repeat steps 1 through 4
to verify disk access and functionality. Assign drive letters to match the
corresponding drive labels. Repeat again for any additional nodes. Verify that all
nodes can read and write from the disks, turn off all nodes except the first one,
and then continue with this white paper.
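The file test in these steps can also be performed from a command prompt on each node; the file name below is arbitrary:

   rem Write a small file to the shared disk, read it back, and then delete it
   echo cluster disk test > Q:\DiskTest.txt
   type Q:\DiskTest.txt
   del Q:\DiskTest.txt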
As seen in the flow chart, the Form (Create a new cluster) and the Join (Add nodes)
operations take a couple of different paths, but they have a few of the same pages.
Namely, the Credential Login, Analyze, and Re-Analyze and Start Service pages are the
same. There are minor differences in the following pages: Welcome, Select Computer,
and Cluster Service Account. In the next two sections of this lesson, you will step
through the wizard pages presented on each of these configuration paths. In the third
section, after you follow the step-through sections, this white paper describes in detail
the Analyze, Re-Analyze and Start Service pages, and what the information provided in
these screens means.
Note
During Cluster service configuration on node 1, you must turn off all other nodes.
All shared storage devices should be turned on.
3. Verify that you have the necessary prerequisites to configure the cluster, as
shown in Figure 10 below. Click Next.
Figure 10. A list of prerequisites is part of the New Server Cluster Wizard
Welcome page.
4. Type a unique NetBIOS name for the cluster (up to 15 characters), and then
click Next. (In the example shown in Figure 11 below, the cluster is named
MyCluster.) Adherence to DNS naming rules is recommended. For additional
information, see the following articles in the Microsoft Knowledge Base:
Figure 11. Adherence to DNS naming rules is recommended when naming the
cluster.
5. If you are logged on locally with an account that is not a Domain Account
with Local Administrative privileges, the wizard will prompt you to specify an
account. This is not the account the Cluster service will use to start.
Note
If you have appropriate credentials, the
prompt mentioned in step 5 and shown
in Figure 12 below may not appear.
Figure 12. The New Server Cluster Wizard prompts you to specify an account.
Note
The Install wizard verifies that all nodes can see the shared disks identically.
In a complex storage area network, the target identifiers (TIDs) for the disks
may sometimes be different, and the Setup program may incorrectly detect that
the disk configuration is not valid for Setup. To work around this issue, you
can click the Advanced button, and then click Advanced (minimum)
configuration. For additional
information, see the following article in
the Microsoft Knowledge Base: 331801
Cluster Setup May Not Work When You
Add Nodes
7. Figure 14 below illustrates that the Setup process will now analyze the node
for possible hardware or software problems that may interfere with the
installation. Review any warnings or error messages. You can also click the
Details button to get detailed information about each one.
Figure 14. The Setup process analyzes the node for possible hardware or
software problems.
8. Type the unique cluster IP address (in this example 172.26.204.10), and then
click Next.
Figure 15. The New Server Cluster Wizard automatically associates the cluster
IP address with one of the public networks.
9. Type the user name and password of the cluster service account that was
created during pre-installation. (In the example in Figure 16 below, the user name
is “Cluster”). Select the domain name in the Domain drop-down list, and then
click Next.
At this point, the Cluster Configuration Wizard validates the user account and
password.
Figure 16. The wizard prompts you to provide the account that was created
during pre-installation.
10. Review the Summary page, shown in Figure 17 below, to verify that all the
information that is about to be used to create the cluster is correct. If desired, you
can use the Quorum button to change the quorum disk designation from the
default auto-selected disk.
The summary information displayed on this screen can be used to reconfigure the
cluster in the event of a disaster recovery situation. It is recommended that you save
and print a hard copy to keep with the change management log at the server.
Note
The Quorum button can also be used
to specify a Majority Node Set (MNS)
quorum model. This is one of the major
configuration differences when you
create an MNS cluster.
12. Click Finish to complete the installation. Figure 19 below illustrates the final
step.
Note
To view a detailed summary, click the View Log button or view the text file stored
in the following location:
%SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log
Figure 20. Cluster Administrator verifies that all resources came online
successfully.
Note
As general rules, do not put anything in the cluster group, do not take anything
out of the cluster group, and do not use anything in the cluster group for anything
other than cluster administration.
Note
For this section, leave node 1 and all shared disks turned on. Then turn on all
other nodes. The cluster service will control access to the shared disks at this
point to eliminate any chance of corrupting the volume.
5. Enter the machine name for the node you want to add to the cluster. Click
Add. Repeat this step, shown in Figure 21 below, to add all other nodes that you
want. When you have added all nodes, click Next.
6. The Setup wizard will perform an analysis of all the nodes to verify that they
are configured properly.
7. Type the password for the account used to start the cluster service.
9. Review any warnings or errors encountered during cluster creation, and then
click Next.
Post-Installation Configuration
Heartbeat Configuration
Now that the networks have been configured correctly on each node and the Cluster
service has been configured, you need to configure the network roles to define their
functionality within the cluster. Here is a list of the network configuration options in
Cluster Administrator:
• Enable for cluster use: If this check box is selected, the cluster service uses
this network. This check box is selected by default for all networks.
• Client access only (public network): Select this option if you want the
cluster service to use this network adapter only for external communication with
other clients. No node-to-node communication will take place on this network
adapter.
• Internal cluster communications only (private network): Select this option if
you want the cluster service to use this network adapter only for node-to-node
communication.
• All communications (mixed network): Select this option if you want the
cluster service to use the network adapter for node-to-node communication and
for communication with external clients. This option is selected by default for all
networks.
This white paper assumes that only two networks are in use. It explains how to configure
these networks as one mixed network and one private network. This is the most common
configuration. If you have available resources, two dedicated redundant networks for
internal-only cluster communication are recommended.
4. Click OK.
6. Click to select the Enable this network for cluster use check box.
7. Click the All communications (mixed network) option, and then click OK.
2. In the left pane, right-click the cluster name (in the upper left corner), and
then click Properties.
4. Verify that the Private network is listed at the top. Use the Move Up or Move
Down buttons to change the priority order.
5. Click OK.
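If the cluster.exe command-line tool is available, the same network roles can be reviewed and set from a command prompt. This is a hedged sketch: the role values shown (1 for internal cluster communications only, 2 for client access only, 3 for all communications) reflect the cluster network Role property as commonly documented, so confirm them with cluster network "Private" /prop before relying on them.

   rem List the cluster networks and their current state
   cluster network
   rem Set the private network to internal cluster communications only
   cluster network "Private" /prop Role=1
   rem Set the public network to all communications (mixed network)
   cluster network "Public" /prop Role=3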
Note
By default, all disks not residing on the same bus as the system disk will have
Physical Disk Resources created for them, and will be clustered. Therefore, if the
node has multiple buses, some disks may be listed that will not be used as
shared storage, for example, an internal SCSI drive. Such disks should be
removed from the cluster configuration. If you plan to implement Volume Mount
points for some disks, you may want to delete the current disk resources for
those disks, delete the drive letters, and then create a new disk resource without
a drive letter assignment.
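For example, to remove a physical disk resource that was created for an internal, non-shared drive, you can use Cluster Administrator or, assuming a resource named Disk E: (a placeholder name), cluster.exe commands along these lines:

   rem Take the unwanted disk resource offline, and then delete it
   cluster resource "Disk E:" /offline
   cluster resource "Disk E:" /delete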
2. Right-click the cluster name in the upper-left corner, and then click
Properties.
4. In the Quorum resource list box, select a different disk resource. In Figure
25 below, Disk Q is selected in the Quorum resource list box.
5. If the disk has more than one partition, click the partition where you want the
cluster-specific data to be kept, and then click OK.
For additional information, see the following article in the Microsoft Knowledge Base:
Q280353 How to Change Quorum Disk Designation
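The quorum designation can also be changed with cluster.exe. The switch and resource name below are offered as an assumption rather than verified syntax for your build, so check cluster /? before using them:

   rem Point the cluster at a different quorum resource (Disk R: is a placeholder name)
   cluster /quorumresource:"Disk R:"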
Test Installation
There are several methods for verifying a cluster service installation after the Setup
process is complete. These include:
• Services Applet: Use the services snap-in to verify that the cluster service is
listed and started.
• Event Log: Use the Event Viewer to check for ClusSvc entries in the system
log. You should see entries confirming that the cluster service successfully
formed or joined a cluster.
• Cluster service registry entries: Verify that the cluster service installation
process wrote the correct entries to the registry. You can find many of the registry
settings under HKEY_LOCAL_MACHINE\Cluster
• Click Start, click Run, and then type the Virtual Server name. Verify that you
can connect and see resources.
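Several of these checks can be run from a command prompt as well; a brief sketch:

   rem Confirm that the Cluster service is installed and running
   sc query clussvc
   rem Confirm that the cluster registry hive exists
   reg query HKLM\Cluster
   rem List the cluster nodes and their status
   cluster node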
Test Failover
2. Right-click the Disk Group 1 group, and then click Move Group. The group
and all its resources will be moved to another node. After a short period of time,
the disks (F: and G: in this example) will be brought online on the second node.
Watch the window to see this shift. Quit Cluster Administrator.
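The same test can be driven from the command line with cluster.exe; the group name matches the example above, and NODE2 is a placeholder for the target node name:

   rem Move the group to another node to exercise failover
   cluster group "Disk Group 1" /moveto:NODE2
   rem Verify which node now owns the group
   cluster group "Disk Group 1" /status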
Congratulations! You have completed the configuration of the cluster service on all
nodes. The server cluster is fully operational. You are now ready to install cluster
resources such as file shares, printer spoolers, cluster aware services like Distributed
Transaction Coordinator, DHCP, WINS, or cluster-aware programs such as Exchange
Server or SQL Server.
Advanced Testing
Now that you have configured your cluster and verified basic functionality and failover,
you may want to conduct a series of failure scenario tests that will demonstrate expected
results and ensure the cluster will respond correctly when a failure occurs. This level of
testing is not required for every implementation, but it may be insightful if you are new to
clustering technology and are unfamiliar with how the cluster will respond, or if you are
implementing a new hardware platform in your environment. The expected results listed
are for a clean configuration of the cluster with default settings; they do not take into
consideration any user customization of the failover logic. This is not a complete list of all
tests, nor should successfully completing these tests be considered “certified” or ready
for production. This is simply a sample list of some tests that can be conducted. For
additional information, see the following article in the Microsoft Knowledge Base:
Test: Start Cluster Administrator, right-click a resource, and then click “Initiate Failure”.
The resource should go into a failed state, and then it will be restarted and brought back
into an online state on that node.
Expected Result: Resources should come back online on the same node.
Test: Conduct the above “Initiate Failure” test three more times on that same resource.
On the fourth failure, the resources should all failover to another node in the cluster.
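If you want to script this test, cluster.exe can initiate the failure as well; Disk F: is a placeholder resource name:

   rem Simulate a resource failure (equivalent to Initiate Failure in Cluster Administrator)
   cluster resource "Disk F:" /fail
   rem Check the resource state afterward
   cluster resource "Disk F:" /status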
Test: Move all resources to one node. Start Computer Management, and then click
Services under Services and Applications. Stop the Cluster service. Start Cluster
Administrator on another node and verify that all resources failover and come online on
another node correctly.
Test: Move all resources to one node. On that node, click Start, and then click
Shutdown. This will turn off that node. Start Cluster Administrator on another node, and
then verify that all resources failover and come online on another node correctly.
Important
Performing the Emergency Shutdown test may cause data corruption and data
loss. Do not conduct this test on a production server.
Test: Move all resources to one node, and then pull the power cables from that server to
simulate a hard failure. Start Cluster Administrator on another node, and then verify that
all resources failover and come online on another node correctly
Important
Performing the hard failure test may cause data corruption and data loss. This is
an extreme test. Make sure you have a backup of all critical data, and then
conduct the test at your own risk. Do not conduct this test on a production server.
Test: Move all resources to one node, and then remove the public network cable from
that node. The IP Address resources should fail, and the groups will all failover to
another node in the cluster. For additional information, see the following articles in the
Microsoft Knowledge Base:
286342 Network Failure Detection and Recovery in Windows Server 2003 Clusters
Test: Remove the network cable for the private heartbeat network. The heartbeat traffic
will fail over to the public network, and no failover should occur. If failover occurs, see
the “Configuring the Private Network Adapter” section earlier in this document.
The SCSI bus listed in the hardware requirements must be configured prior to cluster
service installation. Configuration applies to:
• The SCSI controllers and the hard disks so that they work properly on a
shared SCSI bus.
• Properly terminating the bus. The shared SCSI bus must have a terminator
at each end of the bus. It is possible to have multiple shared SCSI buses
between the nodes of a cluster.
In addition to the information on the following pages, refer to documentation from the
manufacturer of your SCSI device or to the SCSI specifications, which can be ordered
from the American National Standards Institute (ANSI). The ANSI Web site includes a
catalog that can be searched for the SCSI specifications.
• SCSI controllers
SCSI controllers have internal soft termination that can be used to terminate the bus;
however, this method is not recommended with server clusters. If a node is turned
off with this configuration, the SCSI bus will be terminated improperly and will not
operate correctly.
• Storage enclosures
Storage enclosures also have internal termination, which can be used to terminate
the SCSI bus if the enclosure is at the end of the SCSI bus. This should be turned off.
• Y cables
Y cables can be connected to devices if the device is at the end of the SCSI bus. An
external active terminator can then be attached to one branch of the Y cable in order
to terminate the SCSI bus. This method of termination requires either disabling or
removing any internal terminators that the device may have.
Note
Any devices that are not at the end of the shared bus must have their internal
termination disabled. Y cables and active terminator connectors are the
recommended termination methods because they will provide termination even
when a node is not online.
Important
When evaluating both types of Fibre Channel implementation, read the vendor’s
documentation and be sure you understand the specific features and restrictions
of each.
Although the term Fibre Channel implies the use of fiber-optic technology, copper coaxial
cable is also allowed for interconnects.
FC-ALs provide a solution for two nodes and a small number of devices in relatively static
configurations. All devices on the loop share the media, and any packet traveling from
one device to another must pass through all intermediate devices.
If your high-availability needs can be met with a two-node server cluster, an FC-AL
deployment has several advantages:
When a node or device communicates with another node or device in an FC-SW, the
source and target set up a point-to-point connection (similar to a virtual circuit) and
communicate directly with each other. The fabric itself routes data from the source to the
target. In an FC-SW, the media is not shared. Any device can communicate with any
other device, and communication occurs at full bus speed. This is a fully scalable
enterprise solution and, as such, is highly recommended for deployment with server
clusters.
A SAN is a set of interconnected devices (such as disks and tapes) and servers that are
connected to a common communication and data transfer infrastructure (FC-SW, in the
case of Windows Server 2003 clusters). A SAN allows multiple server access to a pool of
storage in which any server can potentially access any storage unit.
The information in this section provides an overview of using SAN technology with your
Windows Server 2003 clusters. For additional information about deploying server clusters
on SANs, see the Windows Clustering: Storage Area Networks link on the Web
Resources page at http://www.microsoft.com/windows/reskits/webresources.
Note
Vendors that provide SAN fabric components and software management tools
have a wide range of tools for setting up, configuring, monitoring, and managing
the SAN fabric. Contact your SAN vendor for details about your particular SAN
solution.
SCSI Resets
Earlier versions of Windows server clusters presumed that all communications to the
shared disk should be treated as an isolated SCSI bus. This behavior may be somewhat
disruptive, and it does not take advantage of the more advanced features of Fibre
Channel to both improve arbitration performance and reduce disruption.
One key enhancement in Windows Server 2003 is that the Cluster service issues a
command to break a RESERVATION, and the StorPort driver can do a targeted or device
reset for disks that are on a Fibre Channel topology. In Windows 2000 server clusters, an
entire bus-wide SCSI RESET is issued. This causes all devices on the bus to be
disconnected. When a SCSI RESET is issued, a lot of time is spent resetting devices that
may not need to be reset, such as disks that the CHALLENGER node may already own.
2. Targeted SCSI ID
Note
Targeted resets require functionality in the host bus adapter (HBA) drivers. The
driver must be written for StorPort and not SCSIPort. Drivers that use SCSIPort
will use the same Challenge and Defense mechanism as in Windows 2000.
Contact the manufacturer of the HBA to determine if it supports StorPort.
SCSI Commands
The Cluster service uses the following SCSI commands:
• SCSI reserve: This command is issued by a host bus adapter to obtain and
maintain ownership of a SCSI device.
• SCSI release: This command is issued by the owning host bus adapter; it
frees a SCSI device for another host bus adapter to reserve.
• SCSI reset: This command breaks the reservation on a target device. This
command is sometimes referred to globally as a "bus reset."
The same control codes are used for Fibre Channel as well. These parameters are
defined in the following Microsoft Knowledge Base article:
309186 How the Cluster Service Takes Ownership of a Disk on the Shared Bus
The following sections provide an overview of SAN concepts that directly affect a server
cluster deployment.
HBAs
Host bus adapters (HBAs) are the interface cards that connect a cluster node to a SAN,
similar to the way that a network adapter connects a server to a typical Ethernet network.
HBAs, however, are more difficult to configure than network adapters (unless the HBAs
are preconfigured by the SAN vendor). All HBAs in all nodes should be identical and be
at the same driver and firmware revision.
Figure 30 is a logical depiction of two SAN zones (Zone A and Zone B), each containing a
storage controller (S1 and S2, respectively).
Figure 30. Zoning
In this implementation, Node A and Node B can access data from the storage controller
S1, but Node C cannot. Node C can access data from storage controller S2.
Zoning needs to be implemented at the hardware level (with the controller or switch) and
not through software. The primary reason is that zoning is also a security mechanism for
a SAN-based cluster, because unauthorized servers cannot access devices inside the
zone (access control is implemented by the switches in the fabric, so a host adapter
cannot gain access to a device for which it has not been configured). With software
zoning, the cluster would be left unsecured if the software component failed.
In addition to providing cluster security, zoning also limits the traffic flow within a given
SAN environment. Traffic between ports is routed only to segments of the fabric that are
in the same zone.
LUN Masking
A LUN is a logical disk defined within a SAN. Server clusters see LUNs and think they are
physical disks. LUN masking, performed at the controller level, allows you to define
relationships between LUNs and cluster nodes. Storage controllers usually provide the
means for creating LUN-level access controls that allow access to a given LUN to one or
more hosts. By providing this access control at the storage controller, the controller itself
can enforce access policies to the devices.
LUN masking provides more specific security than zoning, because LUNs provide a
means for zoning at the port level. For example, many SAN switches allow overlapping
zones, which enable a storage controller to reside in multiple zones. Multiple clusters in
multiple zones can share the data on those controllers. Figure 31 illustrates such a
scenario.
LUNs used by Cluster A can be masked, or hidden, from Cluster B so that only
authorized users can access data on a shared storage controller.
Requirements for Deploying SANs with Windows Server 2003
Clusters
The following list highlights the deployment requirements you need to follow when using
a SAN storage solution with your server cluster. For a white paper that provides more
complete information about using SANs with server clusters, see the Windows
Clustering: Storage Area Networks link on the Web Resources page at
http://www.microsoft.com/windows/reskits/webresources.
Each cluster on a SAN must be deployed in its own zone. The mechanism the cluster
uses to protect access to the disks can have an adverse effect on other clusters that are
in the same zone. Using zoning to separate the cluster traffic from other cluster or
noncluster traffic eliminates the chance of interference.
All HBAs in a single cluster must be the same type and have the same firmware version.
Many storage and switch vendors require that all HBAs on the same zone—and, in some
cases, the same fabric—share these characteristics.
All storage device drivers and HBA device drivers in a cluster must have the same
software version.
Never allow multiple nodes access to the same storage devices unless they are in the
same cluster.
Never put tape devices into the same zone as cluster disk storage devices. A tape device
could misinterpret a bus reset and rewind at inappropriate times, such as during a large
backup.
In a highly available storage fabric, you need to deploy clustered servers with multiple
HBAs. In these cases, always load the multipath driver software. If the I/O subsystem
sees two HBAs, it assumes they are different buses and enumerates all the devices as
though they were different devices on each bus. The host, meanwhile, is seeing multiple
paths to the same disks. Failure to load the multipath driver will disable the second
device because the operating system sees what it thinks are two independent disks with
the same signature.
Do not expose a hardware snapshot of a clustered disk back to a node in the same
cluster. Hardware snapshots must go to a server outside the server cluster. Many
controllers provide snapshots at the controller level that can be exposed to the cluster as
a completely separate LUN. Cluster performance is degraded when multiple devices
have the same signature. If the snapshot is exposed back to the node with the original
disk online, the I/O subsystem attempts to rewrite the signature. However, if the snapshot
is exposed to another node in the cluster, the Cluster service does not recognize it as a
different disk and the result could be data corruption. Although this is not specifically a
SAN issue, the controllers that provide this functionality are typically deployed in a SAN
environment.
For additional information, see the following articles in the Microsoft Knowledge Base:
304415 Support for Multiple Clusters Attached to the Same SAN Device
280743 Windows Clustering and Geographically Separate Sites
For the latest information about Windows Server 2003, see the Windows Server 2003
Web site at http://www.microsoft.com/windowsserver2003/default.mspx.