Dell-Equallogic-Hardware and Troubleshooting Guide
CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
2012 - 02
Rev. A05
Contents
1 Introduction
    Cluster Solution
    Cluster Hardware Requirements
        Cluster Nodes
        Cluster Storage
        Network Configuration Recommendations
    Supported Cluster Configurations
        iSCSI SAN-Attached Cluster
    Other Documents You May Need
1 Introduction

Cluster Solution
Your cluster supports a minimum of two nodes and a maximum of either eight nodes (with Windows Server 2003 operating
systems) or sixteen nodes (with Windows Server 2008 operating systems), and provides the features described below.
The iSCSI protocol encapsulates iSCSI frames that include commands, data, and status into Transmission Control
Protocol/Internet Protocol (TCP/IP) packets to be transported over Ethernet networks. The iSCSI frames are sent
between the Microsoft iSCSI Initiator that resides in the host and the iSCSI target, which is a storage device.
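Because iSCSI traffic is carried in ordinary TCP/IP packets, basic reachability of an array's iSCSI service can be confirmed with nothing more than a TCP connection test to port 3260, the standard iSCSI port. The following Python sketch is illustrative only; the group IP address shown is a placeholder, not a value from this guide.

import socket

ISCSI_PORT = 3260           # standard iSCSI TCP port
GROUP_IP = "192.168.10.50"  # placeholder: substitute your PS Series group IP address

def iscsi_port_reachable(address, port=ISCSI_PORT, timeout=5.0):
    """Return True if a TCP connection to the iSCSI port succeeds."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if iscsi_port_reachable(GROUP_IP) else "NOT reachable"
    print(f"iSCSI port {ISCSI_PORT} on {GROUP_IP} is {state}")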
Implementing iSCSI in a cluster provides the following advantages:
• Geographic distribution: Wider coverage of Ethernet technology allows cluster nodes and storage arrays to be located in different sites.
• Low cost for availability: Redundant connections provide multiple data paths that are available through inexpensive TCP/IP network components.
• Connectivity: A single technology for connection of storage array(s), cluster nodes, and clients.
Cluster Hardware Requirements
Your cluster requires the following hardware components:
• Cluster nodes
• Cluster storage
Cluster Nodes
The following section lists the hardware requirements for the cluster nodes.
Cluster nodes: A minimum of two identical Dell PowerEdge systems are required. The maximum number of nodes that are supported depends on the variant of the Windows Server operating system used in your cluster.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

Microsoft iSCSI Software Initiator: Helps initialize the array(s) and configure and manage host access to the array(s). The Host Integration Tools also include the Microsoft iSCSI Software Initiator.

Network Interface Cards (NICs) for iSCSI access: Two iSCSI NICs or two iSCSI NIC ports per node. Configure the NICs on separate PCI buses to improve availability and performance. TCP/IP Offload Engine (TOE) NICs are also supported for iSCSI traffic.

NICs (public and private networks): At least two NICs: one NIC for the public network and another NIC for the private network.

NOTE: If your configuration requires more than two iSCSI NIC ports, contact Dell Services.

NOTE: It is recommended that the NICs on each public network are identical and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported Redundant Array of Independent Disks (RAID) controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three hard drives are required for disk striping with parity (RAID 5).

NOTE: It is highly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.
Cluster Storage
Cluster nodes can share access to external storage array(s). However, only one of the nodes can own any volume in the
external storage array(s) at any time. Microsoft Cluster Services (MSCS) controls which node has access to each
volume.
The following section lists the configuration requirements for the storage system, cluster nodes, and stand-alone
systems connected to the storage system.
Cluster Storage Requirements

Storage system: One or more Dell EqualLogic PS Series groups. Each Dell EqualLogic PS5000/PS5500/PS6000/PS6010/PS6100/PS6110/PS6500/PS6510 group supports up to sixteen storage arrays (members), and each PS4000/PS4100/PS4110 group supports up to two storage arrays. For specific storage array requirements, see Dell EqualLogic PS Series Storage Array Requirements.

Storage interconnect: All nodes must be attached to one or more storage arrays through an iSCSI SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage arrays.
A Dell EqualLogic PS series storage array includes redundant, hot-swappable disks, fans, power supplies, and control
modules. A failure in one of these components does not cause the array to go offline. The failed component can be
replaced without bringing down the storage array.
The following section lists hardware requirements for the Dell EqualLogic PS series storage arrays.
NOTE: Ensure that the storage array(s) are running a supported firmware version. For specific firmware version
requirements, see the Dell Cluster Configuration Support Matrices at dell.com/ha.
Network Configuration Recommendations
It is recommended that you follow the guidelines in this section. In addition to these guidelines, all the usual rules for
proper network configuration apply to group members.
Network connections between array(s) and hosts: Connect array(s) and hosts to a switched network and ensure that all network connections between hosts and array(s) are Gigabit or 10 Gigabit Ethernet.

A reliable and adequately sized network link for replication: For effective and predictable replication, ensure that the network link between the primary and secondary groups is reliable and provides sufficient bandwidth for copying data.

No Spanning-Tree Protocol (STP) functionality on switch ports that connect end nodes: Do not use STP on switch ports that connect end nodes (iSCSI initiators or storage array network interfaces). However, if you want to use STP or Rapid Spanning-Tree Protocol (RSTP), which is preferable to STP, enable the port settings available on some switches that let the port immediately transition into the STP forwarding state upon link up. This functionality can reduce network interruptions that occur when devices restart, and should only be enabled on switch ports that connect end nodes.

NOTE: It is recommended that you use Spanning-Tree and trunking for multi-cable connections between switches.

Connectivity between switches: The iSCSI switches must be connected together. Use stacking ports or port trunking to create a high-bandwidth link between the switches. If a single non-stacking link is used, it can become a bottleneck and negatively affect the performance of the storage system.

Enable Flow Control on switches and NICs: Enable Flow Control on each switch port and NIC that handles iSCSI traffic. Dell EqualLogic PS Series arrays correctly respond to Flow Control.

Disable Unicast storm control on switches: Disable Unicast storm control on each switch that handles iSCSI traffic, if the switch provides this feature. However, the use of broadcast and multicast storm control is encouraged on switches.

Enable Jumbo Frames on switches and NICs: Enable Jumbo Frames on each switch and NIC that handles iSCSI traffic to obtain performance benefits and ensure consistent behavior (a verification sketch follows this table).

Disable iSCSI Optimization on switches: Disable iSCSI Optimization, if the switch provides this feature, to avoid blocking internal communication between the array members.
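One way to confirm that Jumbo Frames are enabled end to end is to send a ping that is too large for a standard 1500-byte MTU with fragmentation disabled; it succeeds only if every switch port and NIC in the path accepts jumbo frames. The sketch below is an illustration, not part of the Dell tooling: it assumes a Windows host (ping -f -l), a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header gives an 8972-byte payload), and a placeholder array IP address.

import subprocess

ARRAY_IP = "192.168.10.51"  # placeholder: an iSCSI interface on the storage array
JUMBO_PAYLOAD = 8972        # 9000-byte MTU minus 20-byte IP and 8-byte ICMP headers

def jumbo_frames_ok(target, payload=JUMBO_PAYLOAD):
    """Ping with the Don't Fragment flag set; success implies jumbo frames work end to end."""
    # Windows ping syntax: -f = do not fragment, -l = payload size in bytes, -n 2 = two echo requests
    result = subprocess.run(
        ["ping", "-f", "-l", str(payload), "-n", "2", target],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if jumbo_frames_ok(ARRAY_IP):
        print("Jumbo frames appear to be enabled along the path to", ARRAY_IP)
    else:
        print("Large unfragmented ping failed; check MTU settings on NICs and switch ports")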
Supported Cluster Configurations
iSCSI SAN-Attached Cluster
In an iSCSI switch-attached cluster, all the nodes are attached to a single storage array or to multiple storage arrays
through redundant iSCSI SANs for high availability. iSCSI SAN-attached clusters provide superior configuration
flexibility, expandability, and performance.
Other Documents You May Need

• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview of initially setting up your system.
• The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides
more information on deploying your cluster with the Windows Server 2003 operating system.
• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides
more information on deploying your cluster with the Windows Server 2008 operating system.
• The Dell Cluster Configuration Support Matrices provides a list of supported operating systems, hardware
components, and driver or firmware versions for your Failover Cluster.
• Operating system documentation describes how to install (if necessary), configure, and use the operating
system software.
• Documentation for any hardware and software components you purchased separately provides information to
configure and install those options.
• The Dell PowerVault tape library documentation provides information for installing, troubleshooting, and
upgrading the tape library.
• Dell EqualLogic documentation:
– Release Notes — Provides the latest information about the Dell EqualLogic PS Series arrays and/or Host
Integration Tools.
– QuickStart — Describes how to set up the array hardware and create a Dell EqualLogic PS Series
group.
– Group Administration — Describes how to use the Group Manager graphical user interface (GUI) to
manage a Dell EqualLogic PS Series group.
– CLI Reference — Describes how to use the Group Manager command line interface (CLI) to manage a
Dell EqualLogic PS Series group and individual arrays.
– Hardware Maintenance — Provides information about maintaining the array hardware.
– Host Integration Tools Installation and User Guide — Provides information on creating and expanding
the Dell EqualLogic PS Series groups, configuring multipath I/O, performing manual transfer replication,
and backing up and restoring data.
– Host Integration Tools EqualLogic Auto-Snapshot Manager/Microsoft Edition User Guide — Provides
information for creating and managing copies of storage objects (such as volumes or databases)
located on the Dell EqualLogic PS series groups.
– SAN HeadQuarters User Guide — Provides centralized monitoring, historical performance trending, and
event reporting for multiple PS series groups.
– Online Help — In the Group Manager GUI, expand Tools in the far left panel and then click Online Help
for help on both the GUI and the CLI.
• Release notes or readme files may be included to provide last-minute updates to the system or documentation,
or advanced technical reference material intended for experienced users or technicians.
2 Cluster Hardware Cabling
This section provides information on cluster hardware cabling.
• For cluster nodes and storage arrays with multiple power supplies, plug each power supply into a separate AC
circuit.
• Use uninterruptible power supplies (UPS).
• For some environments, consider having backup generators and power from separate electrical substations.
The following figures illustrate recommended methods for power cabling a cluster solution consisting of two Dell
PowerEdge systems and one storage array. To ensure redundancy, the primary power supplies of all the components
are grouped into one or two circuits and the redundant power supplies are grouped into a different circuit.
Figure 2. Power Cabling Example With One Power Supply in PowerEdge Systems
1. cluster node 1
2. cluster node 2
3. primary power supplies on one AC power strip
4. EqualLogic PS series storage array
5. redundant power supplies on one AC power strip
Figure 3. Power Cabling Example With Two Power Supplies in the PowerEdge Systems
1. cluster node 1
2. cluster node 2
3. primary power supplies on one AC power strip
4. EqualLogic PS series storage array
5. redundant power supplies on one AC power strip
Public network:
– All connections to the client LAN.
– At least one public network must be configured for both client access and cluster communications.
Private network:
– A dedicated connection for sharing cluster health and status information only.
The following figure shows an example of cabling in which dedicated network adapters in each node are connected to
each other (for the private network) and the remaining network adapters are connected to the public network.
Figure 4. Example of Network Cabling Connection
Point-to-Point (two-node clusters only):
– Copper Gigabit network adapters with RJ-45 connectors: Connect a CAT5e or better (CAT6, CAT6a, or CAT7) Ethernet cable between the network adapters in both nodes.
– Copper 10 Gigabit Ethernet network adapters with RJ-45 connectors: Connect a CAT6 or better (CAT6a or CAT7) Ethernet cable between the network adapters in both nodes.
– Copper 10 Gigabit Ethernet network adapters with SFP+ connectors: Connect a twinax cable between the network adapters in both nodes.
– Optical Gigabit or 10 Gigabit Ethernet network adapters with LC connectors: Connect a multi-mode optical cable between the network adapters in both nodes.
NIC Teaming
NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC
teaming, but only for the public network; NIC teaming is not supported for the private network or an iSCSI network.
NOTE: Use the same brand of NICs in a team, and do not mix brands of teaming drivers.
NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in
the connections between the cluster nodes and the storage array(s). Other methods that achieve the same type of
redundant connectivity may be acceptable.
Figure 5. Two-Node iSCSI SAN-Attached Cluster
Gigabit NICs can access the 10 Gigabit iSCSI ports on the EqualLogic PS4110/PS6010/PS6110/PS6510 storage systems if
any one of the following conditions exists:
Figure 6. Sixteen-Node iSCSI SAN-Attached Cluster
1. public network
2. private network
3. cluster nodes (2-16)
4. Gigabit or 10 Gigabit Ethernet switches
5. storage system
Cabling One iSCSI SAN-Attached Cluster To The Dell EqualLogic PS Series Storage Array(s)
NOTE: You can use only one of the two 10 Gb Ethernet ports on each control module at a time. With the 10GBASE-T
port (left Ethernet 0 port), use CAT6 or better cable. With the SFP+ port (right Ethernet 0 port), use fiber optic cable
acceptable for 10GBASE-SR or twinax cable.
Figure 7. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4110 Storage Array
Figure 8. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6110 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6110 storage system
6. control module 0
7. control module 1
Figure 9. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4000 Storage Array
Figure 10. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4100 Storage Array
Figure 11. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6010 Storage Array
Figure 12. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6510 Storage Array
Figure 13. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS5000 Storage Array
Figure 14. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS5500 Storage Array
Cabling The Dell EqualLogic PS6000/PS6100/PS6500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3. Connect a network cable from the network switch 1 to Ethernet 2 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 3 on the control module 1.
5. Connect a network cable from the network switch 1 to Ethernet 0 on the control module 0.
6. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 0.
7. Connect a network cable from the network switch 0 to Ethernet 2 on the control module 0.
8. Connect a network cable from the network switch 0 to Ethernet 3 on the control module 0.
9. Repeat steps 1 to 8 to connect the additional Dell EqualLogic PS6000/PS6100/PS6500 storage array(s) to the iSCSI
switches.
NOTE: For the PS6100 storage array, connecting all eight cables in steps 1 through 8 provides the highest level of cable
redundancy. The array also operates with only four cables; in that case, you can skip either step 1 or 5, either step 2 or 6,
either step 3 or 7, and either step 4 or 8.
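The eight-cable pattern above alternates switches and Ethernet ports so that each control module reaches both iSCSI switches. A quick way to reason about (or document) the pattern is to express it as data and assert that redundancy property, as in the illustrative Python sketch below; the mapping mirrors steps 1 through 8 and is not a Dell tool.

# Each entry records (switch, control module, Ethernet port), following steps 1 through 8 above.
PS6000_CABLING = [
    ("switch 0", "cm 1", "eth 0"),
    ("switch 0", "cm 1", "eth 1"),
    ("switch 1", "cm 1", "eth 2"),
    ("switch 1", "cm 1", "eth 3"),
    ("switch 1", "cm 0", "eth 0"),
    ("switch 1", "cm 0", "eth 1"),
    ("switch 0", "cm 0", "eth 2"),
    ("switch 0", "cm 0", "eth 3"),
]

def redundant(cabling):
    """Every control module must be cabled to both switches."""
    seen = {}
    for switch, module, _eth in cabling:
        seen.setdefault(module, set()).add(switch)
    return all(switches == {"switch 0", "switch 1"} for switches in seen.values())

if __name__ == "__main__":
    print("Each control module reaches both switches:", redundant(PS6000_CABLING))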
Figure 15. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6000 Storage Array
Figure 16. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6100 Storage Array
Figure 17. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6500 Storage Array
5. Dell EqualLogic PS6500 storage system
6. control module 1
7. control module 0
Cabling Multiple iSCSI SAN-Attached Clusters To The Dell EqualLogic PS Series Storage Array(s)
To cable multiple clusters to the storage array(s), connect the cluster nodes to the appropriate iSCSI switches and then
connect the iSCSI switches to the control modules on the Dell EqualLogic PS Series storage array(s).
NOTE: The following procedure uses the figure titled Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic
PS5000 Storage Array as an example for cabling additional clusters.
Cabling Multiple iSCSI SAN-Attached Clusters For Dell EqualLogic PS4110/PS6110 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 0.
2. Connect a network cable from the network switch 1 to Ethernet 0 on the control module 1.
3. Repeat steps 1 and 2 to connect the additional Dell EqualLogic PS4110/PS6110 storage array(s) to the iSCSI
switches.
NOTE: You can use only one of the two 10 Gb Ethernet ports on each control module at a time. With the 10GBASE-T
port (left Ethernet 0 port), use CAT6 or better cable. With the SFP+ port (right Ethernet 0 port), use fiber optic cable
acceptable for 10GBASE-SR or twinax cable.
Cabling Multiple iSCSI SAN-Attached Clusters For Dell EqualLogic PS4000/PS4100/PS6010/PS6510 Storage
Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 0.
3. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 0.
5. Repeat steps 1 to 4 to connect the additional Dell EqualLogic PS4000/PS4100/PS6010/PS6510 storage array(s) to the
iSCSI switches.
NOTE: For the PS4100 storage array, connecting all four cables in steps 1 through 4 provides the highest level of cable
redundancy. The array also operates with only two cables; in that case, you can skip either step 1 or 2, and either step 3 or 4.
Cabling Multiple iSCSI SAN-Attached Clusters For Dell EqualLogic PS5000/PS5500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3. Connect a network cable from the network switch 1 to Ethernet 2 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 0 on the control module 0.
5. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 0.
6. Connect a network cable from the network switch 0 to Ethernet 2 on the control module 0.
7. Repeat steps 1 to 6 to connect the additional Dell EqualLogic PS5000/PS5500 storage array(s) to the iSCSI switches.
Cabling Multiple iSCSI SAN-Attached Clusters For Dell EqualLogic PS6000/PS6100/PS6500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3. Connect a network cable from the network switch 1 to Ethernet 2 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 3 on the control module 1.
5. Connect a network cable from the network switch 1 to Ethernet 0 on the control module 0.
6. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 0.
7. Connect a network cable from the network switch 0 to Ethernet 2 on the control module 0.
8. Connect a network cable from the network switch 0 to Ethernet 3 on the control module 0.
9. Repeat steps 1 to 8 to connect the additional Dell EqualLogic PS6000/PS6100/PS6500 storage array(s) to the iSCSI
switches.
NOTE: For the PS6100 storage array, connecting all eight cables in steps 1 through 8 provides the highest level of cable
redundancy. The array also operates with only four cables; in that case, you can skip either step 1 or 5, either step 2 or 6,
either step 3 or 7, and either step 4 or 8.
3 Preparing Your Systems For Clustering
CAUTION: Many repairs may only be done by a certified service technician. You should only perform
troubleshooting and simple repairs as authorized in your product documentation, or as directed by the online or
telephone service and support team. Damage due to servicing that is not authorized by Dell is not covered by your
warranty. Read and follow the safety instructions that came with the product.
Configuring A Cluster
The following instructions give you an overview on how to configure a cluster.
1. Ensure that your site can handle the power requirements of the cluster. Contact your sales representative for
information about your region's power requirements.
2. Install the systems, the shared storage array(s), and the interconnect switches (for example, in an equipment rack)
and ensure that all the components are turned on.
3. Deploy the operating system (including any relevant service packs and hotfixes) and network adapter drivers on
each cluster node. Depending on the deployment method that is used, it may be necessary to provide a network
connection to successfully complete this step.
NOTE: To help in planning and deployment of your cluster, record the relevant cluster configuration information in
the Cluster Data Form and the iSCSI configuration information in the iSCSI Configuration Worksheet.
4. Establish the physical network topology and the TCP/IP settings for network adapters on each cluster node to
provide access to the cluster public and private networks.
5. Configure each cluster node as a member in the same Microsoft Active Directory Domain.
NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see Selecting a Domain
Model in the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or
Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide at
support.dell.com/manuals.
6. Establish the physical storage topology and any required storage array(s) settings to provide connectivity between
the storage array(s) and the systems that you are configuring as cluster nodes. Configure the storage array(s) as
described in your storage array documentation.
7. Use storage array management tools to create at least one volume and then assign the volume(s) to all cluster
nodes. The volume is used as a cluster Quorum disk for Microsoft Windows Server 2003 Failover Cluster or as a
Witness disk for Windows Server 2008 Failover Cluster.
8. Select one of the systems and form a new Failover Cluster by configuring the cluster name, cluster management IP,
and quorum resource. For more information, see Preparing Your Systems For Clustering.
NOTE: For Failover Clusters configured with Windows Server 2008, run the Cluster Validation Wizard to ensure that
your system is ready to form the cluster.
9. Join the remaining node(s) to the Failover Cluster. For more information, see Preparing Your Systems For Clustering.
10. Configure roles for cluster networks. Take any network interfaces that are used for iSCSI storage (or for other
purposes outside the cluster) out of the control of the cluster.
11. Test the failover capabilities of your new cluster.
NOTE: For Failover Clusters configured with Windows Server 2008, you can also use the Cluster Validation Wizard.
12. Configure highly-available applications and services on your Failover Cluster. Depending on your configuration, this
may also require providing additional volumes to the cluster or creating new cluster resource groups. Test the
failover capabilities of the new resources.
13. Configure client systems to access the highly-available applications and services that are hosted on your Failover
Cluster.
Installation Overview
Each cluster node in the Failover Cluster must have the same release, edition, service pack, and processor architecture
of the Windows Server operating system installed. For example, all nodes in your cluster may be configured with the
Windows Server 2003 R2, Enterprise x64 Edition operating system. If the operating system varies among nodes, it may
not be possible to configure a Failover Cluster successfully. It is recommended that you establish server roles prior to
configuring a Failover Cluster, depending on the operating system configured on your cluster.
For a list of Dell PowerEdge systems, iSCSI NICs, supported list of operating system variants, and specific driver and
firmware revisions, see the Dell Cluster Configuration Support Matrices at dell.com/ha.
The following sub-sections describe steps that enable you to establish communication between the cluster nodes and
your shared Dell EqualLogic PS series storage array(s), and to present disks from the storage array(s) to the cluster:
Remote Setup Wizard: Enables you to initialize an EqualLogic PS Series storage array and to set up and configure access to a Dell EqualLogic PS Series group. The wizard also enables you to configure multipath I/O on the Windows Server 2003 and Windows Server 2008 operating systems.

Multipath I/O Device Specific Module (DSM): Enables you to configure multiple redundant network paths between a system running the Windows operating system and EqualLogic PS Series group volumes for high availability and high performance.

Auto-Snapshot Manager/Microsoft Edition (ASM/ME): Enables you to implement the Microsoft Volume Shadow Copy Service (VSS) to create snapshots, clones, and replicas to provide point-in-time protection of critical data for supported applications, including Microsoft SQL Server, Exchange Server, Hyper-V, and NTFS file shares. The Auto-Snapshot Manager is a VSS Requestor and includes a VSS Provider.
NOTE: For more information on using ASM in the cluster, see the Host Integration Tools EqualLogic Auto-Snapshot
Manager/Microsoft Edition User Guide at support.dell.com/manuals.
NOTE: ASM is supported in the cluster environment with Host Integration Tools version 3.2 or later.
• VDS Provider — Enables you to use the Microsoft Virtual Disk Service (VDS) and Microsoft Storage
Manager for SANs to create and manage volumes in an EqualLogic PS Series group.
• Microsoft iSCSI Software Initiator — Includes the iSCSI port driver, Initiator Service, and Software Initiator
to help connect to iSCSI devices via the Windows TCP/IP stack using NICs. For information about using the
Microsoft iSCSI Software Initiator, see the documentation at microsoft.com.
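The Microsoft iSCSI Software Initiator also ships a command-line front end, iscsicli.exe, which can script the discovery step that the initiator GUI performs. The sketch below simply wraps two common iscsicli commands from Python; it assumes a Windows host with the initiator installed and uses a placeholder group IP address. Confirm the exact iscsicli syntax against the Microsoft documentation for your initiator version.

import subprocess

GROUP_IP = "192.168.10.50"  # placeholder: PS Series group IP address (iSCSI discovery address)

def run(cmd):
    """Run a command and return its output, raising if it exits with an error."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Register the group IP address as an iSCSI target portal (discovery address) on port 3260.
    run(["iscsicli", "AddTargetPortal", GROUP_IP, "3260"])
    # List the iSCSI target names (one per volume) discovered through that portal.
    print(run(["iscsicli", "ListTargets"]))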
1. Obtain the Host Integration Tools kit from the Technical Support website or the Host Integration Tools media
shipped with an EqualLogic PS Series array.
2. Start the installation by downloading the kit from the website or inserting the EqualLogic Host Integration Tools for
Microsoft Windows media in the system.
3. Click View Documentation in the installer screen to display the Host Integration Tools User Guide. It is highly
recommended that you read the User Guide and the Host Integration Tools Release Notes before continuing with
the installation.
4. For Host Integration Tools version 3.1.1, run setup.exe. For Host Integration Tools version 3.2 or later, run setup.exe
-cluster.
5. Specify information about the installation when prompted. You can choose the Typical installation, which installs all
the tools supported by the operating system, or you can choose the Custom installation, which allows you to select
the tools you want to install.
Array Configuration

Prompt: Description

Member name: Unique name (up to 63 numbers, letters, or hyphens) used to identify the array in the group. The first character must be a letter or number.

Netmask: Combines with the IP address to identify the subnet on which the Ethernet 0 network interface resides.

Default gateway: Network address for the device used to connect subnets and forward network traffic beyond the local network. A default gateway is used to allow the Ethernet 0 network interface to communicate outside the local network (for example, to allow access to volumes from computers outside the local network).
NOTE: The default gateway must be on the same subnet as the Ethernet 0 network interface.

RAID policy: RAID policy configured on the first member of the group:
– RAID 10 — Striping on top of multiple RAID 1 (mirrored) sets, with one or two spare disks. RAID 10 provides good performance for random writes, in addition to the highest availability.
– RAID 50 — Striping on top of multiple RAID 5 (distributed-parity) sets, with one or two spare disks. RAID 50 provides a good balance of performance (especially for sequential writes), availability, and capacity.
– RAID 5 — One RAID 5 set, with one spare disk. RAID 5 is similar to RAID 50, with more capacity (two additional disks) but lower availability and performance.
– RAID 6 — Of the total number of disks installed in the array, two disks are used for parity and one disk is a spare. The remainder are data disks.
Group Configuration

Prompt: Description

Group name: Unique name (up to 63 letters, numbers, or hyphens) used to identify the group. The first character must be a letter or number.

Group IP address: Network address for the group. The group IP address is used for group administration and for discovery of the resources (that is, volumes). Access to the resources is through a physical port IP address.

Password for managing group membership: Password required when adding members to the group. The password must have 3 to 16 alphanumeric characters and is case-sensitive.

Password for the default group administration account: Password that overrides the factory-set password (grpadmin) for the default grpadmin account. The password must have 3 to 16 alphanumeric characters and is case-sensitive.

Microsoft service user name and password: CHAP user name and password used to enable Microsoft service (VSS or VDS) access to the group. The user name must have between 3 and 54 alphanumeric characters. The password must have 12 to 16 alphanumeric characters and is case-sensitive. Microsoft services running on a computer must be allowed access to the group in order to create VSS snapshots in the group or use VDS.
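The name and password rules in the two tables above are easy to get wrong when preparing values in advance, so a small validator can catch mistakes before you type them into the wizard. The following Python sketch encodes only the constraints stated above (names of up to 63 letters, numbers, or hyphens starting with a letter or number; 3 to 16 character group passwords; 12 to 16 character CHAP passwords); it is illustrative, not a Dell utility.

import re

# Up to 63 characters; letters, numbers, or hyphens; first character must be a letter or number.
NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]{0,62}$")

def valid_name(name):
    """Member or group name rule from the tables above."""
    return bool(NAME_RE.match(name))

def valid_password(password, low, high):
    """Alphanumeric password whose length falls within the stated range (case is preserved)."""
    return password.isalnum() and low <= len(password) <= high

if __name__ == "__main__":
    print(valid_name("cluster-grp01"))              # True
    print(valid_name("-badname"))                   # False: must start with a letter or number
    print(valid_password("grpadmin123", 3, 16))     # group membership / grpadmin password rule
    print(valid_password("chapSecret1234", 12, 16)) # Microsoft service (VSS/VDS) CHAP password rule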
5. Enter the array configuration in the Initialize an Array dialog box. For more information, see Array Configuration.
6. Click a field name link to display help on the field. In addition, choose the option to create a new group. Then, click
Next.
7. Enter the group configuration in the Creating a Group dialog box and then, click Next. For more information, see
Group Configuration.
A message is displayed when the array has been initialized.
8. Click OK.
9. When the Finish dialog box is displayed, you can do one of the following:
NOTE: To view the VSS/VDS access control record, click Group Configuration → VSS/VDS.
NOTE: To display the local CHAP account in the group, click Group Configuration → iSCSI tab.
After you create a group, use the Group Manager GUI or CLI to create and manage volumes.
Initializing An Array And Expanding A Group
Using the Remote Setup Wizard, you can initialize a Dell EqualLogic PS Series array and add the array to an existing
group. In addition, the wizard configures the group IP address as an iSCSI discovery address on the computer.
Before you initialize an array and expand a group, make sure you have the following information:
Follow the steps below to initialize an array and expand an existing group:
– Click Finish to complete the configuration and exit the wizard.
When you exit the wizard, it configures the group IP address as an iSCSI discovery address on the computer, if not
already present.
After you join a group, you can use the Group Manager GUI or CLI to create and manage volumes.
When you use the Remote Setup Wizard to enable computer access to a group, the wizard performs the following:
1. Configures the group IP address as an iSCSI target discovery address. This enables the computer to discover
volumes and snapshots (iSCSI targets) in the group.
2. Stores the CHAP user name and password that allow Microsoft services (VDS or VSS) access to the group.
NOTE: To use the Group Manager GUI to display the VSS/VDS access control record, click Group Configuration →
VSS/VDS .
NOTE: To display the local CHAP account in the group, click Group Configuration → iSCSI.
2. Launch the Remote Setup Wizard.
3. In the Remote Setup Wizard - Welcome window, select Configure this computer to access a PS Series SAN, and
click Next.
4. In the Configuring Group Access dialog box, you can:
– Click Add Group to add a group that the computer can access.
– Select a group and click Modify Group to modify existing group access.
5. In the Add or Modify Group Information dialog box:
a) Specify or modify the group name and IP address, as needed.
b) If the group is configured to allow Microsoft service (VDS or VSS) access to the group through CHAP, specify
the CHAP user name and password that matches the VSS/VDS access control record and local CHAP account
already configured in the group.
c) If the PS Series group is configured to restrict discovery based on CHAP credentials, click the check box next
to Use CHAP credentials for iSCSI discovery.
d) Click Save.
6. In the Configuring Group Access dialog box, click Finish.
1. Start the Remote Setup Wizard.
2. In the Welcome dialog box, select Configure MPIO settings for this computer, and click Next.
The Configure MPIO Settings dialog box is displayed.
3. By default, all host adapters on all subnets that are accessible by the PS Series group are configured for multipath
I/O. If you want to exclude a subnet, move it from the left panel to the right panel. Also, select whether you want to
enable balancing the I/O load across the adapters.
NOTE: To exclude a specific IP address on a subnet, manually edit the registry variables. For more information, see
the Host Integration Tools Release Notes.
4. Click Finish to complete the multipath I/O configuration. Click Back to make changes, if required.
Changes to the list of included or excluded subnets are effective immediately for new connections, while changes to
existing connections may take several minutes.
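The include/exclude decision in the MPIO settings dialog is essentially a subnet membership test: a local adapter participates in multipath I/O when its address falls in a subnet that can reach the PS Series group and that subnet has not been excluded. The Python sketch below reproduces that reasoning for planning purposes only; the addresses and subnets are placeholders, and the actual behavior is controlled by the Remote Setup Wizard and the registry variables described in the Host Integration Tools Release Notes.

import ipaddress

# Placeholder values: local adapter addresses and the SAN subnets reachable from this host.
ADAPTER_ADDRESSES = ["192.168.10.21", "192.168.10.22", "10.0.5.15"]
SAN_SUBNETS = [ipaddress.ip_network("192.168.10.0/24")]
EXCLUDED_SUBNETS = [ipaddress.ip_network("10.0.5.0/24")]

def mpio_adapters(addresses, san_subnets, excluded):
    """Return the adapter addresses that would participate in multipath I/O."""
    selected = []
    for addr in map(ipaddress.ip_address, addresses):
        in_san = any(addr in net for net in san_subnets)
        is_excluded = any(addr in net for net in excluded)
        if in_san and not is_excluded:
            selected.append(str(addr))
    return selected

if __name__ == "__main__":
    print("Adapters used for MPIO:", mpio_adapters(ADAPTER_ADDRESSES, SAN_SUBNETS, EXCLUDED_SUBNETS))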
Configuring Firewall To Allow ICMP Echo Requests
If you are using Windows firewall on your system, configure your firewall to allow Internet Control Message Protocol
(ICMP) echo requests for ICMPv4.
To configure your firewall:
NOTE: This procedure is applicable for Windows Server 2008 only. For other versions of Windows operating
systems, see the documentation that is shipped with your system.
– Domain
– Private
– Public
12. Click Next.
13. Enter a name for this rule and an optional description.
14. Click Finish.
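On Windows Server 2008, an equivalent ICMPv4 echo-request rule can also be created from the command line with netsh advfirewall, which is convenient when several cluster nodes need the same change. The Python sketch below simply invokes netsh on the node where it runs; it is one possible way to script the GUI steps above, the rule name is arbitrary, and the command must be run from an elevated (administrator) session.

import subprocess

RULE_NAME = "Allow ICMPv4 echo request"  # arbitrary rule name

def allow_icmp_echo():
    """Add an inbound Windows Firewall rule permitting ICMPv4 echo requests (type 8)."""
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         f"name={RULE_NAME}", "protocol=icmpv4:8,any", "dir=in", "action=allow"],
        check=True,  # requires an elevated prompt; raises if netsh reports an error
    )

if __name__ == "__main__":
    allow_icmp_echo()
    print("Inbound ICMPv4 echo requests are now allowed")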
1. Use a web browser and go to the Microsoft Download Center website at microsoft.com/downloads.
2. Search for iscsi initiator.
3. Select and download the latest supported initiator software and related documentation for your operating system.
4. Double-click the executable file. The installation wizard launches.
5. In the Welcome screen, click Next.
6. In the following screens, select the Initiator Service, Software Initiator, and Microsoft MPIO Multipathing Support
for iSCSI options.
7. Click Next to continue with the installation.
8. Read and accept the license agreement and click Next to install the software.
9. In the completion screen, click Finish to complete the installation.
10. Select the Do not restart now option to reboot the system after modifying the registry settings in the section
Modifying the Registry Settings.
Modifying The Registry Settings

To configure the registry values, perform the following steps for each cluster host:
1. Go to the EqualLogic\bin directory. The default location is c:\Program Files\EqualLogic\bin.
2. For a Windows Server 2003 host, run EqlSetupUtil.exe -PRKey <PR key>. For a Windows Server 2008 host, run EqlSetupUtil.exe.
3. Reboot the system.
You can run EqlSetupUtil.exe to configure the following registry values:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:0000003c
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters]
Additionally, each Windows Server 2003 cluster host is required to have the following registry values:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\msiscdsm\PersistentReservation]
"UsePersistentReservation"=dword:00000001
"PersistentReservationKey"=hex:<PR key>
NOTE: <PR Key> is a unique 8-byte binary value that is composed of a 6-byte part that is specific to the cluster and
a 2-byte part that is specific to the node. For example, if you have a three-node cluster, you can assign
0xaabbccccbbaa as the cluster-specific part. The nodes can then have the following PR keys:
– Node 1: 0xaabbccccbbaa0001
– Node 2: 0xaabbccccbbaa0002
– Node 3: 0xaabbccccbbaa0003
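Because every node uses the same 6-byte cluster-specific prefix followed by its own 2-byte node number, the full set of PR keys can be generated programmatically and then passed to EqlSetupUtil.exe or placed in the registry value shown above. The Python sketch below uses the example cluster prefix from the note and is illustrative only.

CLUSTER_PART = 0xAABBCCCCBBAA  # 6-byte cluster-specific value from the example above
NODE_COUNT = 3

def pr_key(cluster_part, node_number):
    """Build the 8-byte persistent reservation key: 6 cluster bytes followed by 2 node bytes."""
    if not (0 < node_number <= 0xFFFF):
        raise ValueError("node number must fit in 2 bytes")
    return (cluster_part << 16) | node_number

if __name__ == "__main__":
    for node in range(1, NODE_COUNT + 1):
        key = pr_key(CLUSTER_PART, node)
        # 16 hex digits = 8 bytes, for example aabbccccbbaa0001 for node 1
        print(f"Node {node}: 0x{key:016x}")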
• Creating Access Control Records.
• Connecting Hosts to Volumes.
• Advanced Storage Features.
• Use a Web browser through a standard web connection using HTTP (port 80).
• Install the GUI on a local system and run it as a standalone application.
• Group Configuration — Modifies the group configuration and sets up accounts, event notification, network
services, authentication, and SNMP.
• Group Monitoring — Monitors iSCSI connections to the group, snapshot and replication schedules, volume
replication configurations and activity, administrative sessions and login history, and in-progress member and
volume move operations.
• Events — Displays events in the group.
• Storage Pools — Creates and manages pools in the group.
• Members — Monitors and manages group members, including configuring network interfaces.
• Volumes — Monitors and manages volumes, snapshots, replicas, and schedules.
• Volume Collections — Creates and manages collections of volumes. Organizing multiple, related volumes into a
collection enables you to create snapshots or replicas of the volumes in a single operation or schedule.
• Replication Partners — Monitors and manages replication partners.
Creating Volumes
To access storage in a PS Series group, you must create one or more storage pools and then allocate portions of the
storage pool(s) to volumes. Each volume is assigned a size, storage pool, and access controls. Volumes are seen on the
network as iSCSI targets. Only hosts with an iSCSI initiator and the correct access credentials can access a volume.
The group automatically generates an iSCSI target name for each volume. This is the name that iSCSI initiators use to
access the volume.
To create a volume:
– Volume name — Unique name, up to 64 alphanumeric characters (including periods, hyphens, and colons),
used to identify the volume for administrative purposes. The volume name is displayed at the end of the
iSCSI target name that is generated for the volume. Host access to the volume is always through the iSCSI
target name, not the volume name.
– Description — The volume description is optional.
– Storage pool — All volume data is restricted to the members that make up the pool. By default, the volume
is assigned to the default pool. If multiple pools exist, you can assign the volume to a different pool.
2. Click Next.
The Create Volume – Space Reserve dialog box is displayed. Specify information in the following fields:
– Volume size — This is the reported size of the volume as seen by iSCSI initiators. The minimum volume size
is 15 MB. In addition, volume sizes are rounded up to the next multiple of 15 MB.
– Thin provisioned volume — Select this check box to enable thin provisioning on the volume. When
selected, slider bars appear in the Reported volume size panel. Use them to modify the following default
thin provisioning values:
* Minimum volume reserve — This is the minimum amount of space to allocate to the volume. As the
volume is used, more space is allocated to the volume, and the volume reserve is increased. The
default is 10% of the volume size.
* In-use space warning value — When in-use space reaches this value, as a percentage of the
volume size, a warning event message is generated. The warning informs you that volume space is
being used, enabling you to make adjustments, as needed. The default is the group-wide volume
setting.
* Maximum in-use space value — When the in-use space reaches this value, as a percentage of the
volume size, the volume is set offline. The default is the group-wide volume setting.
– Snapshot reserve — If you want to create snapshots of the volume, specify the amount of pool space to
reserve for snapshots, as a percentage of the volume reserve.
3. Click Next.
The Create Volume – iSCSI Access Policy dialog box is displayed, which enables you to create an access control
record for the volume.
1. In the Create Volume – iSCSI Access Policy dialog box, click Restricted access and one or more of the following:
– Authenticate using CHAP user name — Restricts access to hosts that supply the specified CHAP user
name and the associated password. The user name must match a local CHAP account or an account on an
external RADIUS server.
– Limit access by IP address — Restricts access to hosts with the specified initiator IP address (for example,
12.16.22.123). Use asterisks for “wildcards,” if desired (for example, 12.16.*.*). An asterisk can replace an
entire octet, but not a digit within an octet (a matching sketch follows this procedure).
– Limit access to iSCSI initiator name — Restricts access to hosts with the specified iSCSI initiator.
NOTE: When using IP addresses or iSCSI initiator names to restrict access, ensure that you create an access
control record for each IP address or iSCSI initiator name presented by an authorized host. Additional records
can be added later. If you do not want to create an access control record at this time, select No access. No host
is allowed access to the volume until you create a record.
2. Select the Enable shared access to the iSCSI target from multiple initiators check box.
3. After specifying the access information, click Next to display the Create Volume - Summary dialog box.
4. Click Finish to create the volume.
5. Verify that the volume is enabled for shared access by multiple initiators:
– Right click on the volume that has just been created, and select Modify Volume Settings.
– Select the Advanced tab.
– Ensure that the Enable shared access to the iSCSI target from multiple initiators check box is selected.
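The asterisk wildcard in an IP-based access control record stands for a whole octet, never part of one, so a pattern such as 12.16.*.* matches 12.16.22.123, while a pattern like 12.16.2*.* is not valid. The short Python sketch below mirrors that matching rule for planning purposes; it is not how the array itself evaluates access control records.

def valid_pattern(pattern):
    """Each of the four fields must be either '*' or a plain number (no partial-octet wildcards)."""
    parts = pattern.split(".")
    return len(parts) == 4 and all(p == "*" or p.isdigit() for p in parts)

def matches(pattern, address):
    """Return True if the address matches the access-control pattern octet by octet."""
    if not valid_pattern(pattern):
        raise ValueError(f"invalid access control pattern: {pattern}")
    return all(p == "*" or p == a for p, a in zip(pattern.split("."), address.split(".")))

if __name__ == "__main__":
    print(matches("12.16.*.*", "12.16.22.123"))  # True: wildcards cover the last two octets
    print(matches("12.16.*.*", "12.17.22.123"))  # False: second octet differs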
This section discusses features provided by the Group Manager GUI. You can also use ASM for backing up and
restoring data in the Windows cluster environment. ASM uses Microsoft VSS to provide a framework for creating fast,
coordinated copies of application database volumes on your PS Series group, ensuring that the backed-up data is easy
to restore and use for recovery.
CAUTION: If you want to mount the volume of a snapshot, clone, or replica using the Group Manager GUI, mount it
to a standalone node or a cluster node in a different cluster. Do not mount the snapshot, clone, or replica of a
clustered disk to a node in the same cluster because it has the same disk signature as the original clustered disk.
Windows detects two disks of the same disk signature and changes the disk signature on one of them. Most of the
time, Windows tries to change the disk signature of the snapshot, clone, or replica. If its access type is Read-Only,
Windows is unable to change the signature and thus the volume is not mounted. If its access type is Read-Write,
Windows is able to change the disk signature. When you try to restore the disk later, the cluster’s physical
resource fails due to a different disk signature. Although it is rare, under some conditions, Windows can change
the disk signature on the original disk because it misidentifies the snapshot, clone, or replica as the cluster disk.
That situation may result in data loss or an inaccessible snapshot, clone, or replica.
NOTE: For more information on using ASM in the cluster, see the Host Integration Tools EqualLogic Auto-Snapshot
Manager/Microsoft Edition User Guide at equallogic.com.
NOTE: You can use ASM to mount a snapshot, clone, or replica to the same node or another node in the same
cluster. VSS changes the disk signature of the snapshot, clone, or replica before mounting it.
Snapshots
A snapshot is a point-in-time copy of volume data that can protect against mistakes, viruses, or database corruption.
Snapshot creation does not disrupt access to the volume. Snapshots appear on the network as iSCSI targets and can be
set online and accessed by hosts with iSCSI initiators. You can recover volume data by restoring a volume from a
snapshot or by cloning a snapshot, which creates a new volume.
Creating Snapshots
To create a snapshot at the current time:
Restoring Snapshots
To restore a volume from a snapshot:
1. Use Cluster Administrator or Failover Cluster Management to bring the cluster resource group containing the
volume offline.
2. Use Group Administration to:
– Bring the volume and the snapshot offline.
– Select the volume, perform a restore, and select the snapshot to restore the volume from.
– Bring the volume online.
3. Use Microsoft iSCSI Initiator Service GUI to log back into the volume from each cluster node.
4. Use Cluster Administrator or Failover Cluster Management to bring the cluster group online.
NOTE: Do not use the Group Manager GUI to mount the snapshot of a clustered disk to a node in the same cluster.
Volumes
Cloning a volume creates a new volume with a new name and iSCSI target, having the same size, contents, and Thin
Provisioning setting as the original volume. The new volume is located in the same pool as the original volume and is
available immediately. Cloning a volume does not affect the original volume, which continues to exist after the cloning
operation. A cloned volume consumes 100% of the original volume size from free space in the pool in which the original
volume resides. If you want to create snapshots or replicas of the new volume, you require additional pool space.
Cloning Volumes
To clone a volume:
Restoring Volumes
To restore a volume from a clone:
1. Use Cluster Administrator or Failover Cluster Management to bring the cluster resource group containing the
volume offline.
2. Use Microsoft iSCSI Initiator Service GUI from each cluster node to:
– Log off the volume.
– Delete the volume from the list of Persistent Targets.
3. Use Group Administration to:
– Bring the volume offline.
– Ensure that the clone has Read-Write access and its access control list contains all cluster nodes.
– Bring the clone online.
4. Use the Microsoft iSCSI Initiator Service GUI to log into the clone from each cluster node.
5. Use the Cluster Administrator or Failover Cluster Management to bring the cluster group online.
NOTE: Do not use the Group Manager GUI to mount the clone of a clustered disk to a node in the same cluster.
Replication
Replication enables you to copy volume data across groups, physically located in the same building or separated by
some distance. Replication protects the data from failures ranging from destruction of a volume to a complete site
disaster, with no impact on data availability or performance. Similar to a snapshot, a replica represents the contents of a
volume at a specific point in time. There must be adequate network bandwidth and full IP routing between the groups.
The volume is located in the primary group and the volume replicas are stored in the secondary group, in the space that
is delegated to the primary group. Mutual authentication provides security between the groups. The first replica of a
volume is a complete transfer of the volume data from the primary group to the secondary group over the network. For
subsequent replicas, only the data that changed since the previous replica is transferred to the secondary group. If you
need to transfer a large amount of volume data for the first replication, you can use manual transfer replication.
This enables you to copy the volume data to external media and then load the data from the media to the replica set on
the secondary group. After the first data transfer is complete, replication continues, as usual, over the network.
NOTE: To copy the data remotely when you are setting up replication, use the manual transfer utility.
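Whether a network link is adequate for replication comes down to simple arithmetic: the first replica moves the full volume, and each later replica moves only the changed data, so both must fit within your replication window at the link's usable throughput. The Python sketch below is a back-of-the-envelope estimator with made-up example numbers (volume size, change rate, link speed, and efficiency are all assumptions), useful only for deciding whether manual transfer replication is worth considering for the first copy.

def transfer_hours(data_gib, link_mbps, efficiency=0.7):
    """Hours to move data_gib over a link of link_mbps, at an assumed usable efficiency."""
    bits = data_gib * 1024**3 * 8
    usable_bps = link_mbps * 1_000_000 * efficiency
    return bits / usable_bps / 3600

if __name__ == "__main__":
    # Example numbers only: a 500 GiB volume, 2% daily change rate, 100 Mbps WAN link.
    volume_gib, daily_change, link_mbps = 500, 0.02, 100
    print(f"First full replica: ~{transfer_hours(volume_gib, link_mbps):.1f} hours")
    print(f"Daily delta replica: ~{transfer_hours(volume_gib * daily_change, link_mbps):.1f} hours")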
Replicating Volumes
To replicate a volume from one group to another:
Volume Collections
A volume collection consists of one or more volumes from any pool and simplifies the creation of snapshots and
replicas. Volume collections are useful when you have multiple, related volumes. In a single operation, you can create
snapshots of the volumes (a snapshot collection) or replicas of the volumes (a replica collection).
3. Click Next.
The Create Volume Collection – Components window is displayed.
4. Select up to eight volumes for a collection. A volume collection can contain volumes from different pools.
5. Click Next.
The Create Volume Collection – Summary window is displayed.
Thin Provisioning
You can use thin provisioning technology to provision the storage more efficiently, while still meeting application and
user storage needs. A thin-provisioned volume is initially allocated only a portion of the volume size. As data is written to
the volume, more space is automatically allocated (if available) from the free pool, and the volume reserve increases up
to the user-defined limit. If space is not available, the auto-grow operation fails. If in-use space, typically the volume
size, consumes all the volume reserve, the volume is set offline. Thin provisioning is not always appropriate or desirable
in an IT environment. It is most effectively used when you know how a volume grows over time, the growth is
predictable, and users do not need immediate, guaranteed access to the full volume size. Regular event messages are
generated as space is used, giving the administrator the opportunity to make adjustments, as needed. With thin-
provisioned volumes, you can utilize storage resources more efficiently, while eliminating the need to perform difficult
resize operations on the host.
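The thin-provisioning behavior described above reduces to a few percentages of the reported volume size: the initial volume reserve (10% by default), the in-use warning threshold, and the maximum in-use value at which the volume is set offline, with the reported size itself rounded up to a multiple of 15 MB. The Python sketch below works through that arithmetic for an example volume; the warning and maximum percentages are placeholders standing in for your group-wide settings.

import math

def reported_size_mb(requested_mb):
    """Volume sizes are rounded up to the next multiple of 15 MB (minimum 15 MB)."""
    return max(15, math.ceil(requested_mb / 15) * 15)

def thin_provision_plan(requested_mb, reserve_pct=10, warn_pct=60, max_in_use_pct=85):
    """Key thin-provisioning values; warn/max percentages are assumed stand-ins for group-wide defaults."""
    size = reported_size_mb(requested_mb)
    return {
        "reported size (MB)": size,
        "initial volume reserve (MB)": size * reserve_pct // 100,
        "in-use warning threshold (MB)": size * warn_pct // 100,
        "offline at in-use (MB)": size * max_in_use_pct // 100,
    }

if __name__ == "__main__":
    for name, value in thin_provision_plan(1000).items():
        print(f"{name}: {value}")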
4 Troubleshooting
The following section describes general cluster problems you may encounter and the probable causes and solutions for
each problem.
Probable cause: Volumes are not assigned to the hosts.
Corrective action: Verify that all volumes are assigned to the hosts.

Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable cause: The node-to-node network has failed due to a cabling or hardware failure.
Corrective action:
• Check the network cabling and verify that the multi-initiator check box is selected.
• Ensure that the node-to-node interconnection and the public network are connected to the correct NICs.

Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable cause:
• The Cluster Service has not been started.
• A cluster has not been formed on the system.
• The system has just been booted and services are still starting.
Corrective action:
• Verify that the Cluster Service is running and that a cluster has been formed.
• Use the Event Viewer and look for the following events logged by the Cluster Service:
  Microsoft Cluster Service successfully formed a cluster on this node.
  Or
  Microsoft Cluster Service successfully joined the cluster.
• If these events do not appear in Event Viewer, see the Microsoft Cluster Service Administrator’s Guide for instructions on setting up the cluster on your system and starting the Cluster Service.

Probable cause: The private (point-to-point) network is disconnected.
Corrective action: Ensure that all systems are powered on so that the NICs in the private network are available.

Problem: Using Microsoft Windows NT 4.0 to remotely administer a Windows Server 2003 cluster generates error messages.
Probable cause: This is a known issue. Some resources in Windows Server 2003 are not supported in Windows NT 4.0.
Corrective action: It is strongly recommended that you use Windows XP Professional or Windows Server 2003 for remote administration of a cluster running Windows Server 2003.

Problem: Unable to add a node to the cluster.
Probable cause: The new node cannot access the shared disks. The shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective action: Using Windows Disk Administration, ensure that the new cluster node can enumerate the cluster disks. If the disks do not appear in Disk Administration:

Probable cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective action: Configure the Internet Connection Firewall to allow communications that are required by MSCS and the clustered applications or services. For more information, see the article KB883398 at support.microsoft.com.

Problem: Unable to use Microsoft iSCSI Initiator to connect to the PS Series array(s) from the second node, and the following error message is displayed: Authorization Failure.
Probable cause: The Enable shared access to the iSCSI target from multiple initiators check box is not selected.
Corrective action:
1. In the Group Manager GUI, right-click the volume having the connection problem.
2. Select Modify Volume Settings.
3. Click the Advanced tab and ensure that the volume is enabled for shared access from multiple initiators.

Problem: The disks on the shared cluster storage are unreadable or uninitialized in Windows Disk Administration.
Probable cause:
• This issue occurs if you stop the Cluster Service.
• On systems running Windows Server 2003, this issue occurs if the cluster node does not own the cluster disk.
Corrective action: No action required.

Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Firewall enabled.
Probable cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective action: Perform the following steps:
1. On the Windows desktop, right-click My Computer and click Manage.
2. In the Computer Management window, double-click Services.
3. In the Services window, double-click Cluster Services.
4. In the Cluster Services window, click the Recovery tab.
5. Click the First Failure dropdown arrow and select Restart the Service.
6. Click the Second Failure dropdown arrow and select Restart the Service.
7. Click OK.

Problem: The storage array firmware upgrade process using Telnet exits without allowing you to enter y to the following message: Do you want to proceed (y/n)[n]:
Probable cause: The Telnet program sends an extra line after you press <Enter>.
Corrective action: Use a serial connection for the array firmware upgrade. To clear the extra linefeed in Windows Telnet:
1. Enter ^] (Control plus right bracket).
2. At the Microsoft Telnet prompt, type unset crlf.
3. Press <Enter> to return to Telnet.

Problem: While running the Cluster Validation Wizard, the Validate IP Configuration test detects that two iSCSI NICs are on the same subnet and a warning is displayed.
Probable cause: The two iSCSI NICs are configured in the same subnet by design.
Corrective action: No action required.
5 Cluster Data Form
You can attach the following form in a convenient location near each cluster node or rack to record information about
the cluster. Use the form when you call for technical support.
Table 1. Cluster Configuration Information
Server type
Installer
Date installed
Applications
Location
Notes
Additional Networks
For each array, record the Array, Array Service Tag, Group IP Address, Group Name, Member Name, and Volume Names.
6 iSCSI Configuration Worksheet