PowerScale Leaf-Spine Installation Guide
Cluster Installation
November 2021
• Leaf-Spine topology
• Workflow for installing a new Leaf-Spine cluster
• Workflow for Leaf-Spine cluster growth
• Upgrade OneFS
• Upgrade the switch operating system
• Install rails for the Dell Z9100 switches
• Install rails for the Dell Z9264 switches
• Install Dell Z9264 switches in the rack
• Racking guidelines for Leaf-Spine clusters
• Install Dell switches in the rack
• Cable management for Leaf-Spine clusters
• Where to get help
• Additional options for getting help
Leaf-Spine topology
OneFS 9.0.0.0 and later releases support a Leaf-Spine network topology for the internal networks that connect the nodes in clusters of up to 252 nodes. For large clusters that are intended to grow significantly over time, the Leaf-Spine topology is recommended.
NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS
8.2.2 rollup patch. For more information, see the Current OneFS patches guide.
Architecture
In a Leaf-Spine topology, Dell Z9264 and Z9100-ON switches are arranged in a two-level hierarchy. The bottom-level switches that the nodes connect to are called Leaf switches. The Leaf switches connect to the top-level switches, called Spine switches; each Leaf switch is connected to every Spine switch so that the entire cluster of nodes is networked. OneFS requires two independent Leaf-Spine networks for intracluster communication. These networks are known as Int-A and Int-B respectively.
NOTE: Using both Z9100 and Z9264 switches in the same Leaf-Spine configuration is not currently supported. Both networks must be built using only Z9100 switches or only Z9264 switches.
The following table lists the main Leaf-Spine components in a cluster.
Table 1. Leaf-Spine network components (continued)

Component | Description | Connection considerations
Uplink | Leaf to Spine connection | There must be the same number of uplinks on every Leaf switch. That number should be the number of uplinks that are required by the Leaf switch with the most downlinks.
Downlink | Leaf to node connection |

* If equipped with 100 GbE NIC
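To make the uplink rule concrete, the following Python sketch is purely illustrative; the helper and its inputs are hypothetical and not part of OneFS or the switch firmware. It assumes that a Leaf switch "requires" enough uplink bandwidth to carry the downlink bandwidth of the busiest Leaf, and rounds the result up so the uplinks divide evenly across the Spine switches.

import math

def uplinks_per_leaf(downlink_counts, downlink_gbps=40, uplink_gbps=100, num_spines=2):
    """Uniform uplink count for every Leaf switch, sized for the busiest Leaf.

    Assumes 'required uplinks' means enough uplink bandwidth to carry the
    downlink bandwidth of the Leaf with the most downlinks, rounded up so the
    uplinks divide evenly across the Spine switches.
    """
    busiest = max(downlink_counts)                              # Leaf with the most downlinks
    needed = math.ceil(busiest * downlink_gbps / uplink_gbps)   # bandwidth-based estimate
    if needed % num_spines:                                     # round up to a multiple of the Spine count
        needed += num_spines - needed % num_spines
    return needed

# Example: three Leaf switches with 22, 18, and 10 nodes attached, two Spine switches.
print(uplinks_per_leaf([22, 18, 10]))   # 10 uplinks on every Leaf (5 to each Spine)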
Z9100-ON
NOTE: The Dell Z9100-ON switches arrive autoconfigured for installation. Do not make any configuration changes to the
switches or to the firmware before cabling them to the back-end network.
The Dell Z9100-ON is:
● A 1U, 32-port (QSFP28) switch supporting up to 100 GbE connections
● The method through which PowerScale nodes communicate with one another for intracluster traffic
Dell Z9264 switches
The Dell Z9264 switch is supported for Leaf-Spine network topology and is compatible with all Generation 6 and PowerScale
nodes.
Z9264
NOTE: The Dell Z9264 switches arrive autoconfigured for installation. Do not make any configuration changes to the
switches or to the firmware before cabling them to the back-end network.
The Dell Z9264 is:
● A 2U, 64-port (QSFP28) switch supporting up to 100 GbE connections
● The method through which PowerScale nodes communicate with one another for intracluster traffic
Figure 3.
1. MicroUSB-B console port 2. RJ45 console port
3. Sixty-four QSFP28 ports 4. Two SFP+ ports
5. USB Type A port 6. RJ45 management port
7. Luggage tag
Figure 4.
1. Fan modules
2. Power supply units
Table 5. Z9264 switch requirements for Leaf-Spine clusters

Maximum number of nodes | Number of Spine switches | Number of Leaf switches | Number of cables between each pair of Leaf and Spine switches

All 40 GbE ports
88 | 1 | 2 | 18
132 | 1 | 3 | 18
160 | 1 | 4 | 16
176 | 2 | 4 | 9
220 | 2 | 5 | 9
252 | 3 | 6 | 6

All 100 GbE ports
64 | 1 | 2 | 32
128 | 2 | 4 | 16
150 | 3 | 5 | 10
180 | 3 | 6 | 10
252 | 4 | 8 | 8
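For planning, the table above can be captured as data and queried. The sketch below is a hypothetical Python helper, not part of any Dell or OneFS tooling; it returns the smallest configuration in Table 5 that accommodates a given node count.

# Table 5 rows as (max_nodes, spine_switches, leaf_switches, cables_per_leaf_spine_pair).
Z9264_40GBE = [(88, 1, 2, 18), (132, 1, 3, 18), (160, 1, 4, 16),
               (176, 2, 4, 9), (220, 2, 5, 9), (252, 3, 6, 6)]
Z9264_100GBE = [(64, 1, 2, 32), (128, 2, 4, 16), (150, 3, 5, 10),
                (180, 3, 6, 10), (252, 4, 8, 8)]

def required_switches(node_count, table):
    """Return the first row of Table 5 whose node maximum covers node_count."""
    for max_nodes, spines, leaves, cables in table:
        if node_count <= max_nodes:
            return {"spines": spines, "leaves": leaves,
                    "cables_per_leaf_spine_pair": cables}
    raise ValueError("More than 252 nodes is not supported")

print(required_switches(150, Z9264_40GBE))
# {'spines': 1, 'leaves': 4, 'cables_per_leaf_spine_pair': 16}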
NOTE: The reported events can relate to links introduced between Leaf switches and nodes (downlinks) or between Leaf and Spine switches (uplinks). Incorrect cabling is also reported through events.
The PowerScale OneFS Event Reference Guide provides instructions on how to view events.
10. If you have only cabled the Int-A network, repeat these steps for the Int-B network.
The Leaf-Spine cluster is installed.
Best practices and examples for Leaf-Spine clusters are available in the white paper Best Practices For a Dell EMC Isilon
Leaf-Spine Network.
Workflow for Leaf-Spine cluster growth

CAUTION: If you are reusing switches that were previously used in another cluster, reimage them before adding them to a new fabric. Reimage them with the same version that is deployed in the existing fabric.
NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS
8.2.2 rollup patch. For more information, see the Current OneFS patches guide.
Best practices and examples for Leaf-Spine clusters are available in the white paper Best Practices For a Dell EMC Isilon Leaf-Spine Network.
1. Determine the number of Leaf or Spine switches required for adding the new nodes to the cluster. The table, Z9100-ON and
Z9264 switch requirements for Leaf-Spine clusters, lists the switch requirements.
2. If additional Leaf or Spine switches are not required, connect the new nodes to free ports on the Leaf switches. Ensure that
you connect the new nodes by using ports 11 through 32 as recommended.
3. If additional Leaf or Spine switches are required:
a. Install the new switches in the rack.
b. Verify that all switches have the same operating system version. The Dell switch operating system upgrade guide for OneFS 8.2 and later provides instructions.
c. Determine the number of cables required to connect between each pair of Leaf and Spine switches by using the tables.
NOTE: The number of cables that are needed between each pair of Leaf and Spine switches is reduced when a new Spine switch is added. As a result, some of the existing cables can be moved to connect to the new Spine switch (the sketch after these steps illustrates this).
d. If a new Spine switch is being added, connect the existing Leaf switches to the new Spine switch. Ensure that you connect any new Leaf switches to each Spine switch as recommended.
4. Check the personality on the newly installed switches to ensure that they are the same, by using the following command:
show smartfabric personality
Example output:
5. Power on the new nodes and join them to the cluster by using the Configuration Wizard. The Run the Configuration Wizard section of the PowerScale F200 and F600 node installation guide provides detailed instructions.
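The cable redistribution mentioned in the note for step 3.c can be seen with a small calculation. The following Python sketch is illustrative only (the helper is hypothetical): with a fixed number of uplinks on each Leaf switch, adding a Spine switch lowers the per-pair cable count, so existing cables are moved to the new Spine rather than added.

def cables_per_spine(uplinks_per_leaf, num_spines):
    """Cables from one Leaf switch to each Spine switch when uplinks are spread evenly."""
    if uplinks_per_leaf % num_spines:
        raise ValueError("Uplinks must divide evenly across the Spine switches")
    return uplinks_per_leaf // num_spines

# Growing from 2 to 3 Spine switches with 18 uplinks per Leaf:
print(cables_per_spine(18, 2))   # 9 cables from each Leaf to each Spine
print(cables_per_spine(18, 3))   # 6 cables from each Leaf to each Spine; 6 cables per Leaf move to the new Spine (3 from each existing Spine)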
Upgrade OneFS
You can use the command-line interface to upgrade OneFS on the cluster. OneFS 8.2 or later is required for Leaf-Spine cluster
configurations.
Follow the pre-upgrade steps in the OneFS Upgrade Planning and Process Guide to confirm cluster health, and resolve any
compatibility issues before upgrading OneFS.
NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS 8.2.2 rollup patch. For more information, see the Current OneFS patches guide.
Follow these steps to upgrade OneFS from the command-line interface. Download the OneFS installation image from the Dell
EMC Product Support site. The OneFS Upgrade Planning and Process Guide provides complete details.
1. Open a secure shell (SSH) connection to the lowest-numbered node in the cluster, and log in with the root account.
2. Verify the version of OneFS that is currently installed on the cluster.
isi version
If OneFS 8.2 or later is installed on the cluster, skip steps 3 and 4 because the cluster already supports Leaf-Spine configurations.
3. To perform the upgrade, run the following command, where <install-image-path> is the file path of the upgrade install
image. The file path must be accessible in an /ifs directory.
NOTE: The –simultaneous option takes all nodes in the cluster out of service at the same time. The cluster is
unavailable until the upgrade completes. The upgrade completes one node at a time and is nondisruptive if you omit the
--simultaneous option from the command.
isi upgrade cluster start <install-image-path> --simultaneous
The isi upgrade cluster command runs asynchronously, sets up the upgrade process, and returns quickly. To view
the progress of the upgrade, run the following command:
isi upgrade view
4. Commit the upgrade by running the following command:
isi upgrade cluster commit
The progress of the upgrade can be monitored by running the following command:
isi upgrade view
NOTE: If there is an issue, contact Support.
Upgrade the switch operating system

1. Check the operating system version that is installed on the switch. The end of the output is similar to the following:
Architecture: x86_64
Up Time: 1 day 00:02:03
The OS Version row displays the version. The version should be 10.5.0.6.C2. If the version is 10.5.0.6.685 or later, skip steps 2, 3, and 4.
2. Save the license file and the configuration.
See the Dell switch operating system upgrade guide for OneFS 8.2 and later for instructions.
3. Access ONIE.
See the Dell switch operating system upgrade guide for OneFS 8.2 and later for instructions.
4. Install the upgrade.
See the Dell switch operating system upgrade guide for OneFS 8.2 and later, section Install configuration - DNOS 10.5.0.6.C2, for instructions.
5. To install the configuration file on a switch, run the following commands, depending on the role for which you are configuring the switch:
● For all flat TOR setups of S4112, S4148, Z9100, and Z9264 switches, run the following commands to configure all switches with the Leaf role:
configure terminal
smartfabric l3fabric enable role LEAF
● For Z9100 and Z9264 Leaf-Spine setups, run the following commands to configure the Spine switches:
configure terminal
smartfabric l3fabric enable role SPINE
● For Leaf-Spine setups, run the following commands to configure the Leaf switches:
configure terminal
smartfabric l3fabric enable role LEAF
a. Align the support screws with the correct location on the rack.
b. Press down on the label that reads PUSH to open the rail clip.
c. Guide the support screws at the end of the rail into the rack holes until the clip snaps to the rack and holds the rail in
place.
3. Extend the rail until the back end of the rail clips to the back of the rack.
4. Repeat these steps to install the second rail on the other side of the rack.
5. Repeat until all switch rails are installed.
Figure 5. Outer rail installation
a. After clipping the rails in at the rear NEMA, pull them forward to meet the front NEMA.
b. Insert a screw in the second and fourth hole positions from the top on the front of the rail to secure it in place during the installation process.
3. Repeat these steps to install the second rail on the other side of the rack.
4. Repeat until all switch rails are installed.
NOTE: The rails have a distinct left and right, which corresponds to their orientation when viewed from the front of the rack.
There are Left and Right labels on the front of the inner rails. Slide the switch into the rack from the front so that the labels
are at the front of the rack.
Figure 6. Installation step sequence 1
3. From the front of the rack, slide the switch with inner rails onto the outer rails until the front ears of the inner rails meet the
front NEMAs.
4. If you are installing a bezel, install any clips or pods that are used to mount the bezel at this step. After the clips or pods are installed, check that all remaining exposed mounting holes on the front of the switch rails are secured to the rack with screws.
● Try to place the nodes in the same rack as the Leaf switches to which they are connecting. If that is not possible, place the
nodes in an adjacent rack.
If you plan to expand the cluster, additional Spine switches and extra cabling are required. When expanding from one Spine switch to two Spine switches, half of the existing connections to Spine 1 are moved to Spine 2. To simplify the expansion, it is best practice to add switches within the same rack and to leave space for planned growth. For example, to add three Gen6 nodes (4U each) to the rack, 12U of space is required.
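As a rough aid for this kind of space planning, the following Python sketch (a hypothetical helper, not a Dell tool) adds up rack units using the unit heights given in this guide: Gen6 nodes are 4U, Z9100-ON switches are 1U, and Z9264 switches are 2U.

def rack_units(gen6_nodes=0, z9100_switches=0, z9264_switches=0):
    """Total rack units needed, using the unit heights given in this guide."""
    return gen6_nodes * 4 + z9100_switches * 1 + z9264_switches * 2

# Example from the text above: adding 3 Gen6 nodes needs 12U of free space.
print(rack_units(gen6_nodes=3))                     # 12
# Leaving room in the same rack for three 1U Z9100-ON switches as well:
print(rack_units(gen6_nodes=3, z9100_switches=3))   # 15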
2. Slide the switch into the rails that are mounted in the rack.
3. Slide the switch into the rack until you can secure the back ears of the rail to the rack with the two black screws on the rail. Then secure the rail to the rack by tightening the black thumbscrew on each rail.
Sequence of tasks
Complete the tasks for adding a node to a cluster in the following sequence to ensure a more orderly installation.
If you are only adding a new node into an existing cluster, you can skip all steps related to installing the rails and chassis.
1. If you are going to be racking nodes with all drives installed, confirm that a mechanical lift is available.
The mechanical lift must be rated for at least 300 lbs (136 kg). If a mechanical lift is not available, label the drive sleds when you remove them so that you can return them to their original slots. The Isilon Generation 6 Installation Guide provides more details.
NOTE: Node enclosures with all drives installed can weigh up to 300 lbs (60 x 4 lbs) or 136 kg.
Cabinet
● 44 inches minimum rack depth without a rear door.
● 52 inches minimum rack depth with a rear door.
● The cable management arms attached to the nodes extend approximately 7 inches out of the back of a 44 inch deep rack.
● The 2U PDU extends approximately 3 inches out of the back of a 44 inch deep rack.
● The standard server rack depth is 37 inches. The server rack depth with the 2U PDU is 44 inches.
● A 24 inch wide cabinet is recommended to provide room for cable routing on the sides of the cabinet.
● If you are installing nodes in a 24 inch wide rack, you must install 2U horizontal PDUs.
● If you are installing nodes in a 30 inch wide rack, you can install vertical PDUs, but the PDUs must be rear-facing.
● Sufficient contiguous space anywhere in the rack is required to install the components in the required relative order.
● If you install a front door, it must maintain a minimum of 1.2 inches of clearance to the bezels, be perforated with 50% or more evenly distributed air openings, enable easy access for service personnel, and allow the LEDs to be visible through it.
● The cable management arms may extend out of the back of the cabinet and do not allow for rear doors if the cabinet is not 52 inches deep.
● If you install a rear door, it must be perforated with 50% or more evenly distributed air openings.
● Use blanking panels as required to prevent air recirculation inside the cabinet.
● A minimum of 42 inches of clearance in the front and 36 inches of clearance in the rear of the cabinet is recommended to allow for service area and proper airflow. A minimum of 60 inches of clearance is required in the front of the cabinet, and a minimum of 36 inches of clearance is recommended in the rear of the cabinet to allow for service area and proper airflow.
NOTE: Make note of the significant clearance that is required at the front of a node. The node slides out roughly two floor tiles away from the rack when drives are serviced.

NEMA rails
● NEMA round and square hole rails are supported. NEMA threaded hole rails are NOT supported.
● NEMA round holes must accept M5 size screws.
● The optimal front-to-rear NEMA rail spacing is 29 inches, with a minimum of 27 inches and a maximum of 34 inches.
● The optimal front-to-rear NEMA rail spacing is 29 inches, with a minimum of 23.25 inches and a maximum of 34 inches.

Power
● The customer rack must have redundant power zones, one on each side of the rack, with separate PDU power strips. Each redundant power zone should have capacity for the maximum power load.
● Use the power calculator to refine the power requirements based on the hardware configuration and the customer-provided PDU.
● For customer-provided PDUs, the Dell EMC power cords on the servers and switches expect C13/C14 connections.
NOTE: Dell EMC is not responsible for any failures, issues, or outages resulting from failure of the customer-provided PDUs.

Cabling
● Cables for the product must be routed so that they mimic the standard offering coming from the factory. This includes dressing cables to the sides to prevent drooping and interfering with service of field replaceable units (FRUs).
● Optical cables should be dressed to maintain a 1.5 inch bend radius.
● Cables for third-party components in the rack cannot cross or interfere with components in such a way that they block front-to-back airflow or individual FRU service activity.

Weight
● The customer rack and data center floor must be capable of supporting the weight of the equipment.
● Use the power and weight calculator to refine the weight requirements based on the hardware configuration and the customer-provided cabinet and PDU.
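A pre-installation check can encode several of the limits above. The following Python sketch is a hypothetical validation helper, not a Dell tool; the depth, width, and NEMA spacing figures are taken from the requirements listed here, using 27 inches as the more conservative of the two minimum spacings given.

def check_rack(depth_in, has_rear_door, nema_spacing_in, width_in):
    """Flag rack parameters that fall outside the requirements listed above."""
    problems = []
    min_depth = 52 if has_rear_door else 44            # minimum rack depth in inches
    if depth_in < min_depth:
        problems.append(f"Rack depth {depth_in} in. is under the {min_depth} in. minimum")
    if not 27 <= nema_spacing_in <= 34:                # front-to-rear NEMA spacing (29 in. optimal)
        problems.append(f"NEMA rail spacing {nema_spacing_in} in. is outside 27-34 in.")
    if width_in < 24:
        problems.append(f"Rack width {width_in} in. is under the recommended 24 in.")
    return problems or ["Rack meets the basic depth, spacing, and width guidelines"]

print(check_rack(depth_in=44, has_rear_door=False, nema_spacing_in=29, width_in=24))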
Cable management for Leaf-Spine clusters

● 40 GbE cable options for downlinks:
○ Copper - 1, 3, and 5 meters
○ Optical - 1, 3, 5, 10, 30, 50, 100, and 150 meters
● Breakout cables 4x10GbE and 4x25GbE:
○ Copper - 1, 3, and 5 meters
○ Optical - breakout cables do not require optics.
NOTE: Dell EMC does not recommend using four Spine switches because doing so requires using just 8 uplinks. The Leaf switches are limited to 22 or fewer ports for downlinks; any more than 22 downlink ports oversubscribes the back-end networks. This requires re-cabling some of the nodes to different Leaf switches to grow the cluster. For example, to grow from three Spine switches to four Spine switches, you must re-cable the nodes to different Leaf switches (unless the nodes never connected to more than 22 ports).
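One way to see where the 22-port figure comes from is to compare downlink and uplink bandwidth on a single Leaf switch. The Python sketch below is a rough, hypothetical calculation that assumes a 32-port Leaf (such as the Z9100-ON) with 40 GbE downlinks and 100 GbE uplinks, where every port not used as a downlink is available as an uplink.

def leaf_oversubscribed(downlinks, total_ports=32, downlink_gbps=40, uplink_gbps=100):
    """True if node-facing bandwidth exceeds Spine-facing bandwidth on one Leaf switch.

    Assumes a 32-port Leaf switch where every port that is not a downlink is an uplink.
    """
    uplinks = total_ports - downlinks
    return downlinks * downlink_gbps > uplinks * uplink_gbps

print(leaf_oversubscribed(22))   # False: 22 x 40 GbE fits within 10 x 100 GbE of uplink bandwidth
print(leaf_oversubscribed(23))   # True: one more downlink oversubscribes the back-end network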
40-node configuration
It is assumed that this cluster will not grow beyond 44 performance nodes using 40 GbE, or 176 archive nodes using 10 GbE breakout cables. Although this configuration does not initially require a Leaf-Spine architecture, the target growth of the cluster exceeds what a single Z9100-ON switch supports.
The 40-node configuration (20 performance nodes and 20 archive nodes) includes:
● Six Dell Z9100-ON switches (3 per side)
○ 2 Spine switches
○ 4 Leaf switches
● 36 QSFP28 100 GbE uplink cables (9 uplink cables per Leaf)
● 40 QSFP+ twin-ax or MPO backend cables
● 80 Optics (if 40 MPO cables are used, one optic for each end of the cable)
● 10 QSFP to SFP+ breakout cables
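The quantities in this list follow from simple counting. The Python sketch below reproduces that arithmetic; it is purely illustrative and assumes 20 performance nodes on 40 GbE, 20 archive nodes on 10 GbE via 4x10GbE breakouts, 4 Leaf switches with 9 uplinks each, and both back-end networks (Int-A and Int-B).

performance_nodes, archive_nodes = 20, 20
leaf_switches, uplinks_per_leaf = 4, 9
networks = 2                                        # Int-A and Int-B

uplink_cables = leaf_switches * uplinks_per_leaf    # 36 QSFP28 100 GbE uplink cables
backend_cables = performance_nodes * networks       # 40 QSFP+ twin-ax or MPO cables
optics = backend_cables * 2                         # 80 optics if all 40 cables are MPO (one per end)
breakout_cables = archive_nodes * networks // 4     # 10 QSFP-to-SFP+ 4x10GbE breakout cables

print(uplink_cables, backend_cables, optics, breakout_cables)   # 36 40 80 10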
Figure 8. 40-node configuration
Where to get help
The Dell Technologies Support site (https://www.dell.com/support) contains important information about products and
services including drivers, installation packages, product documentation, knowledge base articles, and advisories.
A valid support contract and account might be required to access all the available information about a specific Dell Technologies
product or service.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the
problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2018 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.