
Dell Technologies Reference Architecture for

Red Hat OpenStack Platform


Version 16.1
H18914

Dell Technologies

October 2021
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2017 – 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Overview...................................................................................................................... 6
Dell EMC Reference Architecture Guide 16.1............................................................................................................... 6
New features.................................................................................................................................................................. 6
Key benefits.................................................................................................................................................................... 6
Hardware options.................................................................................................................................................................7
Networking and network services................................................................................................................................... 7
JetPack automation toolkit............................................................................................................................................... 8
OpenStack architecture.....................................................................................................................................................8

Chapter 2: BIOS and firmware compatibility..................................................................................9


Tested BIOS and firmware................................................................................................................................................ 9

Chapter 3: Server options............................................................................................................ 13


Dell EMC PowerEdge R650 server................................................................................................................................13
Dell EMC PowerEdge R750 server................................................................................................................................ 13
Dell EMC PowerEdge XR11 server................................................................................................................................. 13
Dell EMC PowerEdge XR12 server................................................................................................................................ 14
Dell EMC PowerEdge R640 server................................................................................................................................14
Dell EMC PowerEdge R740xd.........................................................................................................................................14
Dell EMC PowerEdge R6515 server.............................................................................................................................. 14
Dell EMC PowerEdge R7515 server.............................................................................................................................. 14
Dell EMC VxFlex R740xd servers...................................................................................................................................14
Dell EMC PowerEdge XE2420 server......................................................................................15

Chapter 4: Configuring PowerEdge R-Series hardware................................................................ 16


Configuring the SAH node............................................................................................................................................... 16
iDRAC settings..............................................................................................................................................................16
SAH BIOS specification.............................................................................................................................................. 16
Configuring Overcloud nodes..........................................................................................................................................17
Configuring server network settings....................................................................................................................... 17

Chapter 5: Storage options.......................................................................................................... 18


Storage options overview................................................................................................................................................ 18
Local storage.......................................................................................................................................................................19
Dell EMC PowerFlex storage.......................................................................................................................................... 19
Red Hat Ceph storage..................................................................................................................................................... 20
Dell EMC Unity storage................................................................................................................................................... 20
Dell EMC SC series storage............................................................................................................................................20
Dell EMC PowerMax storage.......................................................................................................................................... 21
Dell EMC PowerStore....................................................................................................................................................... 21

Chapter 6: Network architecture.................................................................................................22


Network architecture overview..................................................................................................................................... 22
Infrastructure layouts....................................................................................................................................................... 22

Network components....................................................................................................................................................... 22
Server nodes.................................................................................................................................................................22
Leaf switches............................................................................................................................................................... 23
Spine switches............................................................................................................................................................. 23
Layer-2 and Layer-3 switching................................................................................................................................ 23
VLANs.............................................................................................................................................................................23
Management network services................................................................................................................................ 24
Dell EMC OpenSwitch solution................................................................................................................................ 24

Chapter 7: Network Function Virtualization (NFV) support..........................................................26


NUMA Optimization and CPU pinning......................................................................................................................... 26
Hugepages.......................................................................................................................................................................... 26
OVS-DPDK.......................................................................................................................................................................... 27
SR-IOV................................................................................................................................................................................. 27
SR-IOV OVS-hardware-offload with VF-LAG............................................................................................................ 28
Distributed Virtual Router (DVR).................................................................................................................................. 29

Chapter 8: Additional Red Hat/OpenStack features.....................................................................30


Barbican...............................................................................................................................................................................30
Octavia.................................................................................................................................................................................30
Satellite................................................................................................................................................................................ 30

Chapter 9: Distributed Compute Nodes (DCN).............................................................................31


Overview.............................................................................................................................................................................. 31
DCN topology example.................................................................................................................................................... 32
Red Hat OpenStack Platform and DCN further reading......................................................................................... 33
Hardware options.............................................................................................................................................................. 34
Service layout.....................................................................................................................................................................34
Deployment overview.......................................................................................................................................................34
DCN solution architecture...............................................................................................................................................35
Solution expansion............................................................................................................................................................ 35

Chapter 10: Operational notes..................................................................................................... 36


High availability (HA)........................................................................................................................................................36
Service layout.....................................................................................................................................................................36
Deployment overview....................................................................................................................................................... 37

Chapter 11: Solution architecture................................................................................................ 39


Solution common settings...............................................................................................................................................39
Solution Admin Host (SAH) networking................................................................................................................ 42
Node type 802.1q tagging information................................................................................................................... 42
Solution Red Hat Ceph storage configuration......................................................................................................44
Solution with 25GbE/100GbE networking overview................................................................................................ 44
Solution 25GbE/100GbE with Ceph rack layout..................................................................................................44
Solution 25GbE with PowerFlex rack layout........................................................................................................ 46
Solution 25GbE/100GbE network configuration................................................................................................. 48

Chapter 12: Bill of materials (BOM)............................................................................................. 53


Nodes overview................................................................................................................................................................. 53

Bill of Materials for Dell EMC PowerEdge R-Series solution .................................................................................53
Bill of Materials for Dell EMC PowerEdge R-Series — DCN................................................................................. 55
Edge compute node configuration with 25GbE networking.............................................................................55
Subscriptions and network switches in the solution................................................................................................ 56
Default network switch - Dell EMC Networking S3048-ON switch...............................................................56
Dell EMC Networking S4048-ON optional switch.............................................................................................. 56
Dell EMC Networking S5232F-ON switch............................................................................................................ 56
Dell EMC Networking S5224F-ON switch............................................................................................................ 57

Chapter 13: Bill of materials - legacy Dell EMC servers................................................................ 58


Bill of Materials for Dell EMC PowerEdge R-Series - Mellanox ............................................................................58
Base configuration - Broadcom, Mellanox, or Intel NICs in Dell EMC PowerEdge R7515 compute
nodes...........................................................................................................................61
Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel® NICs......................................................62
Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel NICs...................................................63
Bill of Materials for Dell EMC PowerEdge R740xd — PowerFlex........................................................................ 65
Base configuration - PowerFlex...............................................................................................................................65
Bill of Materials for Dell EMC PowerEdge R-Series solution — Hyper-Converged Infrastructure.............. 66
Base configuration — Intel® NICs.......................................................................................................................... 67

Chapter 14: References............................................................................................................... 69

Chapter 15: Glossary................................................................................................................... 70

Chapter 1: Overview
Topics:
• Dell EMC Reference Architecture Guide 16.1
• Hardware options
• Networking and network services
• JetPack automation toolkit
• OpenStack architecture

Dell EMC Reference Architecture Guide 16.1


An OpenStack based cloud is now a common need by many organizations. Dell Technologies and Red Hat have worked together
to build a jointly engineered and validated architecture that details software, hardware, and integration points of all solution
components. This Reference Architecture Guide provides prescriptive guidance and recommendations for:
● Hardware design
○ Infrastructure nodes
○ Compute nodes
○ Storage nodes
○ Hyper-Converged Infrastructure (HCI) nodes (compute node and storage node combined)
○ Distributed Compute Nodes (DCN)
● Network design
● Software layout
● Suggestions for other system configurations

New features
● Added support for the Dell EMC PowerEdge R650, a dual-socket, 1U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/poweredge-r650-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge R750, a dual-socket, 2U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/poweredge-R750-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge XR11, a single-socket, 1U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/xr11-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge XR12, a single-socket, 2U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/xr12-spec-sheet.pdf for more information.
● Added support for Dell EMC PowerStore.
Reference https://www.delltechnologies.com/asset/en-us/products/storage/technical-support/h18143-dell-emc-powerstore-family-spec-sheet.pdf for more information.
● Support for the latest release of Red Hat OpenStack Platform 16.1, including the latest updates.
● Support for Red Hat Enterprise Linux (RHEL) 8.2, including the latest updates.

Key benefits
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform offers several benefits to help service providers
and high-end enterprises rapidly implement Dell EMC hardware and software:

● Ready-to-use solution: The Reference Architecture Guide has been fully engineered, validated, and tested in Dell Technologies
laboratories, and documented by Dell Technologies. This decreases your investment and deployment risk, and it enables
faster deployment time.
● Long lifecycle deployment: Dell EMC PowerEdge R-Series, VxFlex R740xd, Dell EMC PowerEdge XR11, Dell EMC PowerEdge
XR12, and Dell EMC PowerEdge XE2420 servers, recommended in the Reference Architecture Guide, include long-life Intel
Xeon processors which reduce your investment risk and protect your investment for the long-term.
● World-class professional services: The Reference Architecture Guide includes Dell Technologies professional services that
span consulting, deployment, and design support to guide your deployment needs.
● Customizable solution: The Reference Architecture Guide is prescriptive, but it can be customized to address each
customer’s unique virtual network function (vNF) or other workload requirements.
● Co-engineered and integrated: OpenStack depends upon Linux for performance, security, hardware enablement, networking,
storage, and other primary services. Red Hat OpenStack Platform delivers an OpenStack distribution with the proven performance, stability, and
scalability of RHEL 8.2, enabling you to focus on delivering the services your customers want instead of focusing on the
underlying operating platform.
● Deploy with confidence: Red Hat OpenStack Platform provides hardened and stable branch releases of OpenStack and Linux. It is a long-life release
product supported by Red Hat for a four-year “production phase” life cycle, well beyond the six-month release cycle of
unsupported, community OpenStack. Life cycle support policies can be found at https://access.redhat.com/support/policy/updates/openstack/platform.
● Take advantage of broad application support: Red Hat Enterprise Linux, running as guest Virtual Machines (VM), provides
a stable application development platform with a broad set of ISV certifications. You can therefore rapidly build and deploy
your cloud applications.
● Avoid vendor lock-in: Move to open technologies while maintaining your existing infrastructure investments.
● Benefit from the world’s largest partner ecosystem: Red Hat has assembled the world’s largest ecosystem of certified
partners for OpenStack compute, storage, networking, ISV software, and services for deployments. This ensures the same
level of broad support and compatibility that customers enjoy today in the Red Hat Enterprise Linux ecosystem.
● Upgrades of Red Hat OpenStack Director-based installations.
● Bring security to the cloud: Rely upon the SELinux military-grade security and container technologies of Red Hat Enterprise
Linux to prevent intrusions and protect your data, when running in public or private clouds. For more information regarding
security in the 16.1 release, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/security_and_hardening_guide/index.
Additional information pertaining to Red Hat vulnerabilities can be found at https://access.redhat.com/security/.

Hardware options
To reduce time spent on specifying hardware for an initial system, this Architecture Guide offers a full solution using validated
Dell EMC PowerEdge and Dell EMC VxFlex R740xd server hardware designed to allow a wide range of configuration options,
including optimized configurations for:
● Compute nodes (both Core and DCN)
● Hyper-Converged Infrastructure nodes
● Infrastructure nodes
● Storage nodes
Dell Technologies recommends starting with OpenStack software using components from this Reference Architecture Guide
- Version 16.1. These hardware and operations processes comprise a flexible foundation upon which to expand as your cloud
deployment grows, so your investment is protected.
As noted throughout this Architecture Guide - Version 16.1, Dell Technologies constantly adds capabilities to expand this
offering, and other hardware may be available.

Networking and network services


Network configuration is based upon using the Neutron-based options supported by the code base, and does not rely upon
third-party drivers. This reference configuration is based upon the Neutron networking services using the ML2 drivers for Open
vSwitch with the VLAN option.
Networking includes:
● Core and layered networking capabilities
● Network Function Virtualization (NFV)
● 25GbE and 100GbE networking

● NIC bonding
● Redundant trunking top-of-rack (ToR) switches into core routers
This enables the Dell Technologies Reference Architecture for Red Hat OpenStack Platform to operate in a full production environment.
See Network architecture for guidelines. Detailed designs are available through Dell Technologies consulting services.

JetPack automation toolkit


Dell Technologies has updated its open-source JetPack automation toolkit 1, which is an innovative automation package that
is used to configure the infrastructure hardware and OpenStack software in a fully automated fashion. The toolkit includes
programs from Dell Technologies that work with Red Hat OpenStack Director to automate the deployment process, not only
saving time, but also ensuring the process is reliable and repeatable.
The JetPack automation toolkit was used to deploy Red Hat OpenStack Platform 16.1 on the documented hardware in this Architecture Guide. The JetPack
automation toolkit is used by Dell Technologies, Red Hat professional service teams, and system integrator partners, and is
available on GitHub to customers who prefer to do self-deployment.
This release uses the concept of profiles. There are two different validated profiles, CSP or xSP, that can be used
for a deployment. The CSP profile is designed for Telecommunications Service Providers, Cable TV Operators, Satellite
TV, Internet Service Providers, etc. whereas the xSP profile is designed for Business and IT Services Providers such
as Hosting Service Providers, Cloud Service Providers, Software-as-a-Service/Platform-as-a-Service Providers, Application
Hosting Service Providers and Private Managed Cloud Service Providers.

OpenStack architecture
While OpenStack has many configurations and capabilities, the primary components of the Dell Technologies Reference Architecture for Red Hat
OpenStack Platform 16.1 are deployed as containerized services.

NOTE: For a complete overview of the OpenStack software, visit Red Hat OpenStack Platform and the OpenStack Project.

1 The toolkit can be found at https://github.com/dsp-jetpack/JetPack/tree/JS-16.1.21-10

Chapter 2: BIOS and firmware compatibility
This chapter documents the versions of BIOS and firmware that are used to create this Dell Technologies Reference
Architecture for Red Hat OpenStack Platform.
Topics:
• Tested BIOS and firmware

Tested BIOS and firmware


The following tables list the BIOS and firmware versions that are tested for Dell Technologies Reference Architecture for Red
Hat OpenStack Platform on Dell EMC PowerEdge R650, Dell EMC PowerEdge R750, Dell EMC PowerEdge XR11, Dell EMC
PowerEdge XR12, Dell EMC PowerEdge R640/Dell EMC PowerEdge R740xd, Dell EMC PowerEdge R6515/Dell EMC PowerEdge
R7515, VxFlex R740xd, and Dell EMC PowerEdge XE2420.
The leaf switches tested in this release are the S5232F-ON and the S5224F-ON.
CAUTION: The versions that are listed below were used during the development of this Reference Architecture
Guide. Newer versions may have become available since the time of this release. Ensure that the firmware on all
servers, storage devices, and switches is, at a minimum, up to date with the versions listed below. Otherwise,
unexpected results may occur.

Table 1. Dell EMC PowerEdge R650 tested BIOS and firmware versions
Product Version
BIOS 1.2.4
iDRAC with Lifecycle controller 4.40.29.00
Mellanox ConnectX-5 16.28.45.12
PERC H755 Front 52.14.0-3901

Table 2. Dell EMC PowerEdge R750 tested BIOS and firmware versions
Product Version
BIOS 1.2.4
iDRAC with Lifecycle controller 4.40.29.00
Mellanox ConnectX-6 - Used for NFV functions for Compute Role 22.27.61.06
Mellanox ConnectX-5 16.26.60.00
PERC H755 Front - Used for Compute Role 52.14.0-3901
HBA 335i Fnt - Used for Ceph Storage Role 15.15.15.00
BOSS S2 - Used for Ceph Storage Role 2.5.13.4008

Table 3. Dell EMC PowerEdge XR11 tested BIOS and firmware versions
Product Version
BIOS 1.3.8
iDRAC with Lifecycle controller 4.40.35.00

Mellanox ConnectX-5 16.28.45.12
PERC H755 Front - Used for Compute Role 52.14.0-3901

Table 4. Dell EMC PowerEdge XR12 tested BIOS and firmware versions
Product Version
BIOS 1.3.8
iDRAC with Lifecycle controller 4.40.35.00
Mellanox ConnectX-5 16.26.60.00
PERC H755 Front - Used for Compute Role 52.14.0-3901
HBA 335i Fnt - Used for Ceph Storage Role 15.15.15.00
BOSS S2 - Used for Ceph Storage Role 2.5.13.4008

Table 5. Dell EMC PowerEdge R640/Dell EMC PowerEdge R740xd tested BIOS and firmware versions
Product Version
BIOS 2.11.2
iDRAC with Lifecycle controller 5.00.00.00
Intel XXV710 NIC 20.0.17
Mellanox ConnectX-4 14.26.60.00
Mellanox ConnectX-5 16.26.60.00
PERC H730P Mini controller (Dell EMC PowerEdge R640) 25.5.8.001
PERC H740P Mini controller (Dell EMC PowerEdge R640) 51.14.0-3900
HBA330 Mini (Dell EMC PowerEdge R740xd) 16.17.01.00
BOSS-S1 (Dell EMC PowerEdge R740xd) 2.5.13.3024

Table 6. Dell EMC PowerEdge R6515/Dell EMC PowerEdge R7515 tested BIOS and firmware versions
Product Version
BIOS 2.0.3
iDRAC with Lifecycle controller 4.40.00.00
Intel XXV710 NIC 19.5.12
Mellanox ConnectX-5 16.27.61.20
Broadcom Adv. Dual 25Gb Ethernet 21.60.22.11
PERC H740P Mini controller (Dell EMC PowerEdge R640) 51.13.2-3714

Table 7. VxFlex R740xd tested BIOS and firmware versions
Product Version
Hardware ISO VxFlex-Ready-Node-Hardware-Update-for-Dell_14G_2020_December_A01.iso
PowerFlex Core_version: 3.5-1100.107 (sds, sdc, mdm, lia & gateway)
PowerFlex Mgmt_version: 3.5-1100.105

NOTE: Dell EMC VxFlex Ready Nodes deployments require specific versions of drivers, BIOS, and firmware qualified by Dell
EMC. If the servers do not have the correct versions, you must update them.

Various factors can influence a mismatch between the required versions and the versions that are installed on the servers,
such as firmware updates after the server shipment, or a FRU replacement with a different firmware version than in the
warehouse. For example, if you have replaced the server system board, the FRU's BIOS and iDRAC firmware versions will be
different.

You are required to verify that all server drivers, BIOS, and firmware meet the required versions, as published in the
Dell EMC VxFlex Ready Nodes driver and firmware matrix, which is located at https://www.dell.com/support/home/en-us/product-support/product/scaleio-ready-node--poweredge-14g/docs,
before deploying a server in the Dell EMC VxFlex Ready Nodes environment.

To perform any updates needed to meet Dell EMC VxFlex Ready Nodes requirements, use the Dell EMC VxFlex Ready
Nodes hardware Update Bootable ISO ("Hardware ISO"). The Hardware ISO is based on the Dell EMC OpenManage
Deployment Toolkit (DTK). The DTK provides a framework of tools necessary for the configuration of Dell EMC VxFlex
Ready Nodes servers. For PowerFlex, a custom script has been injected, along with specific qualified BIOS/firmware update
packages.

The Hardware ISO has been designed to make the firmware update process consistent and simple. You can use it to update
firmware, BIOS, and configuration settings in two different ways, depending on how many Dell EMC VxFlex Ready Nodes
servers you are updating:

● To update the firmware, BIOS, and configuration settings on a single server, use the iDRAC Virtual KVM.
● If several servers in the Dell EMC VxFlex Ready Nodes deployment require updates, it is recommended that you use remote
RACADM. RACADM allows for the simultaneous update of the versions on multiple Dell EMC VxFlex Ready Nodes servers, with
minimal steps.
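As a simplified illustration of the remote RACADM approach (not a substitute for the documented Hardware ISO workflow), the hedged sketch below pushes one qualified update package to several iDRACs. The package name, addresses, and credentials are placeholders, and the exact racadm update options for your iDRAC release should be confirmed in the iDRAC RACADM CLI guide.

```python
import subprocess

# Hedged sketch: push one Dell Update Package to several VxFlex Ready Node iDRACs
# with remote RACADM. All addresses, credentials, and the file name are placeholders.
IDRACS = ["192.168.110.41", "192.168.110.42", "192.168.110.43"]
PACKAGE = "BIOS_UPDATE.EXE"   # a qualified package from the VxFlex firmware matrix

for host in IDRACS:
    # "racadm update -f <file>" stages a firmware update on the target iDRAC;
    # verify the option set against your iDRAC release before relying on it.
    subprocess.run(
        ["racadm", "-r", host, "-u", "root", "-p", "calvin", "update", "-f", PACKAGE],
        check=True,
    )
```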

Table 8. Dell EMC PowerEdge XE2420 tested BIOS and firmware versions
Product Version
BIOS 1.4.1
iDRAC with Lifecycle controller 4.40.10.00
Intel XXV710 NIC 19.5.12
PERC H740P Adapter 50.9.4-3025

Dell EMC SC, Unity, PowerMax, and PowerStore tested software and firmware versions lists the Dell EMC Storage Center software, SC series storage
firmware, and the Dell EMC Unity, PowerMax, and PowerStore versions that were tested for the Dell Technologies Reference Architecture for Red Hat
OpenStack Platform.

Table 9. Dell EMC SC, Unity, PowerMax, and PowerStore tested software and firmware versions
Product Version
Dell EMC SC storage center software 2016 R2 Build 16.2.1.228
SC series storage firmware 6.6.11.9
Dell EMC Unity 380F firmware 5.0.6.0.5.008
Dell EMC PowerMax 2000 firmware 5978.221.221
Dell EMC PowerStore 1000T 1.0.3.0.5.007

Dell EMC Network Switches tested firmware versions lists the firmware versions that were tested for the default S3048-ON management switch, the
optional S4048-ON management switch, and the S5232F-ON and S5224F-ON leaf switches in the Dell Technologies Reference Architecture for Red Hat OpenStack Platform.

Table 10. Dell EMC Network Switches tested firmware versions


Product Version
S3048-ON firmware Dell EMC Networking OS10 Enterprise, OS Version: 10.5.2.7, Build Version: 10.5.2.7.374
S4048-ON firmware (optional) Dell EMC Networking OS10 Enterprise, OS Version: 10.5.2.7, Build Version: 10.5.2.7.374
S5232F-ON firmware Dell EMC Networking OS10 Enterprise, OS Version: 10.5.2.7, Build Version: 10.5.2.7.374
S5224F-ON firmware Dell EMC Networking OS10 Enterprise, OS Version: 10.5.2.7, Build Version: 10.5.2.7.374

Chapter 3: Server options
The solution supports the Dell EMC PowerEdge R650, Dell EMC PowerEdge R750, Dell EMC PowerEdge XR11, Dell EMC
PowerEdge XR12, Dell EMC PowerEdge R640, Dell EMC PowerEdge R740xd, Dell EMC PowerEdge R6515, Dell EMC
PowerEdge R7515, Dell EMC VxFlex R740xd, and Dell EMC PowerEdge XE2420 server lines.

NOTE: Contact your Dell EMC sales representative for detailed parts lists.

Topics:
• Dell EMC PowerEdge R650 server
• Dell EMC PowerEdge R750 server
• Dell EMC PowerEdge XR11 server
• Dell EMC PowerEdge XR12 server
• Dell EMC PowerEdge R640 server
• Dell EMC PowerEdge R740xd
• Dell EMC PowerEdge R6515 server
• Dell EMC PowerEdge R7515 server
• Dell EMC VxFlex R740xd servers
• Dell EMC PowerEdge XE2420 server

Dell EMC PowerEdge R650 server


The Dell EMC PowerEdge R650 powered by 3rd Generation Intel® Xeon® scalable processors is designed to optimize
workloads performance and data center density.
The dual-socket, 1U Dell EMC PowerEdge R650 is the ideal rack server to address performance, high scalability, and density.
It supports eight memory channels per CPU and up to 32 DDR4 DIMMs at 3200 MT/s, and addresses substantial throughput
improvements with PCIe Gen 4 and up to 10 NVMe drives. It is ideal for traditional corporate IT, database and analytics, VDI,
and AI/ML and inferencing, with optional Direct Liquid Cooling support to address high-wattage processors.

Dell EMC PowerEdge R750 server


The Dell EMC PowerEdge R750 is Dell Technologies' latest dual-socket, 2U rack server, designed to run complex
workloads using highly scalable memory, I/O, and network options. The system features the 3rd Generation Intel® Xeon®
Scalable Processor, up to 16 DIMMs, PCI Express® (PCIe) 4.0 enabled expansion slots, and a choice of network interface
technologies to cover NIC needs.
The Dell EMC PowerEdge R750 is a general-purpose platform capable of handling demanding workloads and applications, such
as data warehouses, ecommerce, databases, and high-performance computing (HPC).

Dell EMC PowerEdge XR11 server


Dell EMC PowerEdge XR11 server, which is powered by 3rd Generation Intel® Xeon® Scalable processors, is a high-
performance, high-capacity server for demanding workloads at the edge. This is one of Dell Technologies' latest 1U, single-
socket servers designed to run complex workloads using highly scalable memory, I/O, and network options, with up to eight
DDR4 DIMMs, up to three PCIe Gen4 enabled expansion slots, up to four storage bays, and four integrated 25 GbE LAN on
Motherboard (LoM) ports. It has a reduced form factor and is NEBS3 compliant, which makes it ideal for more challenging
deployment models at the far edge, where space and environmental conditions become more demanding.

Dell EMC PowerEdge XR12 server
Dell EMC PowerEdge XR12 server, which is powered by 3rd Generation Intel® Xeon® Scalable processors, is a high-
performance, high-capacity server for demanding workloads at the edge. This is one of Dell Technologies' latest 2U single-
socket servers designed to run complex workloads using highly scalable memory, I/O, and network options, with up to eight
DDR4 DIMMs, up to three PCIe Gen4 enabled expansion slots, up to four storage bays, and four integrated 25 GbE LAN on
Motherboard (LoM) ports. It has a reduced form factor and is NEBS3 compliant, which makes it ideal for more challenging
deployment models at the far edge, where space and environmental conditions become more demanding.

Dell EMC PowerEdge R640 server


The Dell EMC PowerEdge R640 is the ideal dual-socket, 1U platform for dense scale-out cloud computing. The scalable business
architecture of the Dell EMC PowerEdge R640 is designed to maximize application performance and provide the flexibility to
optimize configurations based on the application and use case.
With the Dell EMC PowerEdge R640 you can create an NVMe cache pool and use either 2.5” or 3.5” drives for data storage.
Combined with up to 24 DIMM’s, 12 of which can be NVDIMM’s, you have the resources to create the optimum configuration to
maximize application performance in only a 1U chassis. This can simplify and speed up deployments of the Red Hat OpenStack
Platform.

Dell EMC PowerEdge R740xd


The Dell EMC PowerEdge R740xd delivers a perfect balance between storage scalability and performance. The 2U dual-socket
platform is ideal for software defined storage. The R740xd versatility is highlighted with the ability to mix any drive type to
create the optimum configuration of SSD and HDD for either performance, capacity or both.
The Dell EMC PowerEdge R740xd is the platform of choice for Red Hat Ceph storage for this Architecture Guide - Version 16.1.
This platform has also been validated for compute and HCI roles.

Dell EMC PowerEdge R6515 server


The Dell EMC PowerEdge R6515 (1U rack system) is a single-socket, 1U server designed to run complex workloads, using highly
scalable memory, I/O, and network. The system is based on the second-generation AMD EPYC processor, supports up to 64
Zen2 x86 cores and 16 DIMMs, offers PCI Express (PCIe) 4.0 enabled expansion slots, and includes a choice of LOM Riser technologies.
The Dell EMC PowerEdge R6515 is a general-purpose platform capable of handling demanding workloads and applications, such
as data warehouses, ecommerce, databases, and high-performance computing (HPC). Also, the server provides extraordinary
storage capacity options, making it well-suited for data-intensive applications without sacrificing I/O performance.

Dell EMC PowerEdge R7515 server


The Dell EMC PowerEdge R7515 (2U rack system) is a single-socket, 2U server that is designed to run complex workloads
using highly scalable memory, I/O, and network. The system is based on the second-generation AMD EPYC processor, which
supports up to 64 Zen2 x86 cores, up to 16 DIMMs, PCI Express (PCIe) 4.0 enabled expansion slots, and a choice of LOM Riser
technologies.
The Dell EMC PowerEdge R7515 is a general-purpose platform capable of handling demanding workloads and applications, such
as data warehouses, ecommerce, databases, and high-performance computing (HPC). Also, the server provides extraordinary
storage capacity options, making it well-suited for data-intensive applications without sacrificing I/O performance.

Dell EMC VxFlex R740xd servers


The VxFlex R740xd delivers optimal balance between storage scalability and performance. The dual-socket, 2U platform is ideal
for software defined storage. The VxFlex R740xd versatility is highlighted with the ability to mix any drive type to create the
optimum configuration of SSD and HDD for performance, capacity, or both.

The VxFlex R740xd is the platform of choice for Red Hat Ceph storage for this Architecture Guide - Version 16.1. This platform
has also been validated for compute and HCI roles.

Dell EMC PowerEdge XE2420 server


The Dell EMC PowerEdge XE2420 is a configurable, dual-socket, 2U rack server that delivers powerful 2S performance in a
short-depth form-factor. With a scalable rack option, it is ideal for low-latency, large storage edge applications. Its performance
can be further boosted by its support of up to four accelerators. A wide and flexible range of storage options (SSD, NVMe) and
large capacity up to 132TB gives it the flexibility to tackle a diverse set of demanding workloads at the edge.
The Dell EMC PowerEdge XE2420 is a specialty edge server that is specifically engineered to deliver powerful performance for
harsh environments. It is a dual-socket, 2U, short-depth, front-accessible server that is designed to support demanding edge
applications such as streaming analytics, manufacturing logistics, and 5G cell processing.

Chapter 4: Configuring PowerEdge R-Series hardware
This section describes manually configuring PowerEdge R-Series server hardware for the Dell Technologies Reference
Architecture for Red Hat OpenStack Platform:
● IPMI configuration
● BIOS configuration
● RAID configuration
Topics:
• Configuring the SAH node
• Configuring Overcloud nodes

Configuring the SAH node


The Solution Admin Host (SAH) is a physical server that hosts one VM, the Red Hat OpenStack Director, as the Undercloud.
Configure the SAH BIOS settings before deploying OpenStack. Refer to SAH BIOS specification.

iDRAC settings
NOTE: Duplicate these settings in the SAH node's BIOS configuration before deployment.

iDRAC Specification for SAH Nodes lists and describes iDRAC settings that need to be set in BIOS.

Table 11. iDRAC specification for SAH nodes


Menu choice iDRAC setting
iDRAC.IPMILan.Enable Enabled
iDRAC.IPMILan.PrivLimit 4
iDRAC.IPv4.Enable Enabled
iDRAC.Users.2.Enable Enabled
iDRAC.Users.2.IpmiLanPrivilege 4
iDRAC.Users.2.Privilege 0x1ff
iDRAC.WebServer.Enable Enabled
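For illustration only, the hedged sketch below applies the attributes in the table above with remote RACADM from an administration host; this is an optional convenience rather than a documented step of this guide. The iDRAC address and credentials are placeholders.

```python
import subprocess

# Hedged sketch: apply the SAH iDRAC attributes from the table above with remote racadm.
# The iDRAC address and credentials below are placeholders for your environment.
IDRAC = {"host": "192.168.110.20", "user": "root", "password": "calvin"}

SETTINGS = {
    "iDRAC.IPMILan.Enable": "Enabled",
    "iDRAC.IPMILan.PrivLimit": "4",
    "iDRAC.IPv4.Enable": "Enabled",
    "iDRAC.Users.2.Enable": "Enabled",
    "iDRAC.Users.2.IpmiLanPrivilege": "4",
    "iDRAC.Users.2.Privilege": "0x1ff",
    "iDRAC.WebServer.Enable": "Enabled",
}

for attribute, value in SETTINGS.items():
    # "racadm set <attribute> <value>" writes one iDRAC attribute at a time.
    subprocess.run(
        ["racadm", "-r", IDRAC["host"], "-u", IDRAC["user"], "-p", IDRAC["password"],
         "set", attribute, value],
        check=True,
    )
```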

SAH BIOS specification


SAH BIOS Specification lists and describes the default BIOS settings for the SAH that will be set by the OS-HCTK.

Table 12. SAH BIOS specification


Display name Attribute Settings
Boot mode BootMode UEFI
Boot sequence retry BootSeqRetry Enabled
DCU IP prefetcher DcuIpPrefetcher Enabled
DCU streamer prefetcher DcuStreamerPrefetcher Enabled

Logical processor idling DynamicCoreAllocation Disabled
Integrated RAID controller IntegratedRaid Enabled
Internal SD card InternalSdCard Off
I/OAT DMA engine IoatEngine Enabled
Logical processor LogicalProc Enabled
Memory operating mode MemOpMode OptimizerMode
System memory testing MemTest Disabled
Node interleaving NodeInterleave Disabled
OS watchdog timer OsWatchdogTimer Disabled
Adjacent cache line prefetch ProcAdjCacheLine Enabled
Number of cores per processor ProcCores all
Hardware prefetcher ProcHwPrefetcher Enabled
CPU power management ProcPwrPerf MaxPerf
Turbo mode ProcTurboMode Enabled
Virtualization technology ProcVirtualization Enabled
CPU interconnect bus speed CpuInterconnectBusSpeed MaxDataRate
SR-IOV global enable SriovGlobalEnable Enabled
System profile SysProfile PerfOptimized
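These settings are normally applied by the OS-HCTK, but a subset can also be staged manually with remote RACADM. The hedged sketch below is an illustration only: the fully qualified attribute names (for example the BIOS.ProcSettings prefix), the iDRAC address, and the credentials are assumptions and should be confirmed with "racadm get BIOS" on your system.

```python
import subprocess

# Hedged sketch: stage a few SAH BIOS attributes from the table above via remote racadm.
# The FQDD prefixes and iDRAC credentials are assumptions; verify them before use.
IDRAC = ["-r", "192.168.110.20", "-u", "root", "-p", "calvin"]

BIOS_SETTINGS = {
    "BIOS.BiosBootSettings.BootMode": "Uefi",          # assumed FQDD for "Boot mode"
    "BIOS.ProcSettings.LogicalProc": "Enabled",        # assumed FQDD for "Logical processor"
    "BIOS.SysProfileSettings.SysProfile": "PerfOptimized",
}

for attribute, value in BIOS_SETTINGS.items():
    subprocess.run(["racadm", *IDRAC, "set", attribute, value], check=True)

# BIOS changes are only staged; create a configuration job so they apply at the next reboot.
subprocess.run(["racadm", *IDRAC, "jobqueue", "create", "BIOS.Setup.1-1"], check=True)
```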

Configuring Overcloud nodes


The Overcloud hardware configurations and the settings for the Intelligent Platform Management Interface (IPMI) are done
through Ironic, which is included in the JetPack automation toolkit.
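As a hedged illustration of how Ironic receives this information, the sketch below generates a node-definition file in the general shape that Red Hat OpenStack Director and JetPack consume (often named instackenv.json). All addresses, credentials, and MAC addresses are placeholders; consult the JetPack deployment guide for the exact schema it expects.

```python
import json

# Hedged illustration: an instackenv.json-style node definition that Ironic/Director
# can use for IPMI/iDRAC power management. Every value below is a placeholder.
nodes = {
    "nodes": [
        {
            "name": "compute-0",
            "pm_type": "ipmi",                 # or the iDRAC driver, per deployment choice
            "pm_addr": "192.168.110.31",       # iDRAC IP address of the Overcloud node
            "pm_user": "root",
            "pm_password": "calvin",
            "mac": ["aa:bb:cc:dd:ee:01"],      # provisioning NIC MAC address
        }
    ]
}

with open("instackenv.json", "w") as f:
    json.dump(nodes, f, indent=2)
```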

Configuring server network settings


1. Set the iDRAC IP address source:
a. If the Overcloud nodes were ordered with the iDRACs configured for DHCP, or are currently configured for DHCP, then
no further configuration is necessary.
b. If you wish to use static IP addresses, then configure the Overcloud nodes' iDRAC IP address, subnet mask, default
gateway IP, and default VLAN (ID = 110, if required) using the iDRAC GUI.
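If you prefer to script step 1.b rather than use the iDRAC GUI, a hedged remote RACADM sketch follows. The addresses are placeholders, and the default VLAN (ID = 110, if required) can still be configured through the iDRAC network settings.

```python
import subprocess

# Hedged sketch of step 1.b: assign a static iDRAC IP, netmask, and gateway with
# remote racadm instead of the iDRAC GUI. All addresses and credentials are placeholders.
node_idrac = {"current": "192.168.110.21", "user": "root", "password": "calvin"}
static_ip, netmask, gateway = "192.168.110.31", "255.255.255.0", "192.168.110.1"

# "racadm setniccfg -s <ip> <netmask> <gateway>" sets a static iDRAC address.
subprocess.run(
    ["racadm", "-r", node_idrac["current"], "-u", node_idrac["user"],
     "-p", node_idrac["password"], "setniccfg", "-s", static_ip, netmask, gateway],
    check=True,
)
```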

Chapter 5: Storage options
OpenStack has several storage services, including:
● Cinder
● Glance
● Manila
● Ephemeral
● Swift 2
Together these services provide virtual machines (VMs) with block, image, file share, and object storage. In turn, the services
employ block, file share, and object storage subsystems. Since the service design has a mechanism to replace some or all of the
implementation of these services, this solution can provide alternate implementations of these services that better serve your
needs.
Topics:
• Storage options overview
• Local storage
• Dell EMC PowerFlex storage
• Red Hat Ceph storage
• Dell EMC Unity storage
• Dell EMC SC series storage
• Dell EMC PowerMax storage
• Dell EMC PowerStore

Storage options overview


Cinder virtualizes storage, enabling VMs to use persistent block storage through Nova. OpenStack consumers should
write data that must exist beyond the lifecycle of the guest to volumes. The volume can be accessed afterwards by a different
guest.
Glance provides images to VMs. Generally, the images are block devices containing DVDs or virtual machines. VMs can be
booted from these images or have the images attached to them. Glance storage can now use Red Hat Ceph storage, Dell EMC
Unity storage, or Dell EMC SC series storage.

NOTE: Dell EMC Unity or SC series storage provides the Glance support through Cinder.

Dell EMC Manila driver framework (EMCShareDriver) delivers a shared filesystem in OpenStack. The plugin-based Dell EMC
Manila driver design is compatible with various plugins to control Dell EMC Unity storage products.
The Dell EMC Unity plugin manages the Dell EMC Unity storage system for shared filesystems.
The Dell EMC Unity driver communicates with the storage system through its REST API. Manila backends map one-to-one to managed Dell EMC Unity
storage systems: each Manila backend configures a single Unity storage system. See https://docs.openstack.org/manila/train/admin/
emc_unity_driver.html for more information.
Swift provides an object storage interface to VMs and other OpenStack consumers. Unlike block storage where the guest is
provided a block device of a given format and is accessible within the cluster, object storage is not provided through the guest.
Object storage is generally implemented as a HTTP/HTTPS-based service through a web server. Client implementations within
the guest or external OpenStack clients would interact with Swift without any configuration required of the guest other than
providing the requisite network access. For example, a VM within OpenStack can put data into Swift, and later external clients
could pull that data for additional processing.
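As a hedged illustration of that client interaction, the sketch below uses python-swiftclient to create a container and store and retrieve an object. The Keystone endpoint, project, and credentials are placeholders.

```python
from swiftclient import client as swift

# Hedged sketch: a client interacting with the Swift object storage interface.
# Endpoint, project, and credential values below are placeholders.
conn = swift.Connection(
    authurl="https://openstack.example.com:5000/v3",
    user="demo",
    key="password",
    auth_version="3",
    os_options={"project_name": "demo",
                "user_domain_name": "Default",
                "project_domain_name": "Default"},
)

conn.put_container("backups")                       # create (or reuse) a container
conn.put_object("backups", "report.txt",            # upload an object
                contents=b"object storage via the Swift API")
headers, body = conn.get_object("backups", "report.txt")   # retrieve it later
print(body)
```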

NOTE: Swift in this document refers to the Swift interfaces, not the Swift implementation, of the protocol.

2 Available through a Ceph cluster with the Swift API enabled or through a custom Professional Services engagement.

As with other OpenStack services, there are client and server components for each storage service. The server component
can be modified to use a particular type of storage rather than the default. For example, Cinder uses local disks as the storage
back-end by default. The Dell Technologies Reference Architecture for Red Hat OpenStack Platform modifies the default
configuration for these services.
All virtual machines will need a virtual drive that is used for the OS. Two options are available:
● Ephemeral disks
● Boot from volume or snapshot, hosted on Red Hat Ceph storage, Dell EMC Unity storage or SC series storage arrays.
Ephemeral disks are virtual drives that are created when a VM is created, and destroyed when the VM is removed. The virtual
drives can be stored on the local drives of the Nova host or on a shared file system, such as Ceph RADOS Block Device (RBD).
During the planning process, decide whether to place ephemeral storage on local or shared storage; a shared back-end is required for live migration.
Boot from volume/snapshot will use one of the Cinder backends.
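The hedged openstacksdk sketch below illustrates the boot-from-volume pattern: a bootable volume is created from a Glance image and the instance boots from it. The cloud name, image, flavor, and network names are placeholders, and the exact parameter names should be checked against your SDK release.

```python
import openstack

# Hedged sketch: boot an instance from a volume created from a Glance image.
# "overcloud", "cirros", "m1.small", and "tenant-net" are placeholder names.
conn = openstack.connect(cloud="overcloud")          # reads credentials from clouds.yaml

image = conn.image.find_image("cirros")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="bfv-demo",
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    # Ask Nova/Cinder to build a 10 GB bootable volume from the image and boot from it.
    block_device_mapping=[{
        "boot_index": "0",
        "uuid": image.id,
        "source_type": "image",
        "destination_type": "volume",
        "volume_size": 10,
        "delete_on_termination": False,
    }],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```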
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform includes alternate implementations of Cinder
that enable the cluster to fit many needs. Cinder has been validated using each of the back-ends independently. Multiple back-ends
have also been validated, consisting of two or all of the following (a conceptual configuration sketch follows this list):
● Local storage
● Red Hat Ceph storage
● Dell EMC Unity storage
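The conceptual sketch below shows the general shape of a multi-back-end Cinder configuration (Ceph RBD plus Dell EMC Unity). In this solution the configuration is generated by JetPack and Red Hat OpenStack Director rather than written by hand, so treat the option values as illustrative assumptions to be checked against the Red Hat OpenStack Platform 16.1 documentation.

```python
from configparser import ConfigParser

# Hedged sketch: render a multi-back-end Cinder configuration fragment.
# Driver paths and options are illustrative; Director/JetPack generate the real files.
cfg = ConfigParser()
cfg["DEFAULT"] = {"enabled_backends": "rbd,unity"}
cfg["rbd"] = {
    "volume_backend_name": "rbd",
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",      # Red Hat Ceph storage
    "rbd_pool": "volumes",
}
cfg["unity"] = {
    "volume_backend_name": "unity",
    "volume_driver": "cinder.volume.drivers.dell_emc.unity.Driver",  # Dell EMC Unity
    "storage_protocol": "iSCSI",
}

with open("cinder-backends.conf", "w") as f:
    cfg.write(f)
```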

Local storage
When using local storage, each compute node hosts the ephemeral volumes associated with each virtual machine. Cinder
utilizes LVM volumes that are shared by NFS for independent volumes, utilizing the local storage subsystem. With the hardware
configuration for the compute nodes (see Compute node Dell EMC PowerEdge R740) using the eight 600 GB disks in a RAID 10
(mirroring halves the raw capacity: 8 x 600 GB / 2 = 2.4 TB), there will be approximately 2 TB of storage available.

Dell EMC PowerFlex storage


PowerFlex software-defined storage empowers organizations to harness the power of software, so they can embrace change
while achieving consistent, predictable outcomes.
PowerFlex is designed to deliver flexibility, elasticity, and simplicity with predictable performance and resiliency at scale, by
combining compute and high-performance storage resources in a managed unified fabric.
In addition to delivering high-performance block storage with rich data services, PowerFlex offers a simple yet comprehensive
tool-set for IT operations and lifecycle management of the entire infrastructure, helping automate infrastructure workflows.
PowerFlex software components are installed on the application servers and communicate via a standard LAN to handle the
application I/O requests sent to PowerFlex block volumes. An extremely efficient decentralized block I/O flow, combined with a
distributed, sliced volume layout, results in a massively parallel I/O system that can scale up to thousands of nodes.
PowerFlex software components are:
● Meta Data Manager (MDM) - configures and monitors the system. The MDM can be configured in redundant cluster mode,
with three members. MDMs are installed on the controller nodes providing high availability and redundancy.
● Storage Data Server (SDS) - manages the capacity of a single server and acts as a back-end for data access. The SDSs are
installed on the Dell EMC VxFlex Ready Nodes.
● Storage Data Client (SDC) - A lightweight device driver that exposes PowerFlex volumes as block devices to the application
that resides on the same server on which the SDC is installed. SDCs are installed on the compute nodes for providing volume
access to the instances and also on the controllers to enable Glance to access the volumes.
● Storage Data Replication (SDR) - SDRs are not supported or tested in Dell Technologies Reference Architecture for Red Hat
OpenStack Platform.
The PowerFlex Gateway is critical for PowerFlex users in an OpenStack environment. The gateway is
utilized for configuration updates, deployment, and various lifecycle management activities, and also acts as an endpoint for API
calls (passed to the MDM) made through the Cinder driver over HTTPS.
A graphical user interface, also known as the PowerFlex dashboard, is installed as a VM on the Solution Admin Host (SAH). Do
not use the PowerFlex dashboard for any storage configuration, to avoid bypassing Cinder control. Instead, use it to monitor and analyze
key performance indicators (KPIs). The Dell EMC PowerFlex Cinder driver gateway is deployed as a VM on the Solution Admin
Host (SAH), and PowerFlex interfaces with Cinder through API calls to that gateway.
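The hedged sketch below illustrates the style of HTTPS interaction a client such as the Cinder driver has with the PowerFlex Gateway: authenticate for a token, then query the MDM-backed API. The endpoint paths follow the ScaleIO/PowerFlex REST convention and should be verified against your gateway version; the hostname and credentials are placeholders.

```python
import requests

# Hedged sketch: talk to the PowerFlex Gateway over HTTPS the way a driver would.
# Hostname and credentials are placeholders; paths follow the assumed REST convention.
GATEWAY = "https://powerflex-gw.example.com"
USER, PASSWORD = "admin", "password"

# Login returns a session token; verify=False is only for an illustrative lab setup.
token = requests.get(f"{GATEWAY}/api/login", auth=(USER, PASSWORD), verify=False).json()

# Subsequent calls authenticate with the username and the returned token.
pools = requests.get(f"{GATEWAY}/api/types/StoragePool/instances",
                     auth=(USER, token), verify=False).json()
print([pool.get("name") for pool in pools])
```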

Red Hat Ceph storage
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform includes Red Hat Ceph storage, which is a
scale-out, distributed, software-defined storage system. Red Hat Ceph storage is used as backend storage for Nova, Cinder,
and Glance. Storage nodes run the Red Hat Ceph storage software, and compute nodes and controller nodes run the Red Hat
Ceph storage block client.
Red Hat Ceph storage also provides object storage for OpenStack VMs and other clients external to OpenStack. The object
storage interface is an implementation of:
● The OpenStack Swift RESTful API (basic data access model)
● The Amazon S3 RESTful API
The object storage interface is provided by Red Hat Ceph storage RADOS Gateway software running on the controller nodes.
Client access to the object storage is distributed across all controller nodes in order to provide High availability (HA) and I/O
load balancing.
Red Hat Ceph storage is containerized in RHOSP 16.1 (Mon, Mgr, Object Gateway, and Object Storage Daemon (OSD)). Each OSD has
an associated physical drive where the data is stored, and a journal where write operations are staged prior to being committed.
● When a client reads data from the Red Hat Ceph storage cluster the OSDs fetch the data directly from the drives.
● When a client writes data to the storage cluster, the OSDs write the data to their journals prior to committing the data.
OSD journals can be located in a separate partition on the same physical drive where the data is stored, or they can be located
on a separate high-performance drive, such as an SSD optimized for write-intensive workloads. For the Architecture Guide, a
recommended ratio of one SSD to four hard disks is used to achieve optimal performance. It should be noted that as of this
writing, using a greater ratio will result in server performance degradation.
In a cost-optimized solution, the greatest storage density is achieved by forgoing separate SSD journal drives and populating every
available physical drive bay with a high-capacity HDD.
In a throughput-optimized solution, a few drive bays can be populated with high performance SSDs that will host the journals for
the OSD HDDs. For example, in an Dell EMC PowerEdge R740xd system with 20 drive bays available for Red Hat Ceph storage,
four bays are used for SSD journal drives and 16 bays for HDD data drives. This is based upon the current industry guideline of a
4:1 HDD to SSD journal ratio.

Dell EMC Unity storage


Dell EMC Unity storage, a supported and validated storage option, provides Glance (Image), Cinder (Block) and Manila (Shared
FileSystems) storage support.

NOTE: Other storage options are available upon request from Professional Services.

The Dell EMC Unity platforms accelerate application infrastructure with All-Flash unified storage platforms that simplify
operations while reducing cost and datacenter footprint.
The following completely refreshed systems deliver the next-generation Dell EMC Unity XT family: the no-compromise
midrange storage built from the ground up to deliver unified storage speed and efficiency, and built for a multi-cloud world.
● Dell EMC Unity 350F, 450F, 550F, 650F (This RA has validated the Unity 350F and the 550F platforms)
● Dell EMC Unity XT 380F, 480F, 680F, 880F (This RA has validated the Unity XT 380F platform)
Reference https://www.delltechnologies.com/en-us/storage/unity.htm for more information about Dell EMC Unity storage.
Reference https://access.redhat.com/ecosystem/software/3172821 for more information about certifications for Unity.
Reference https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/dell-emc-unity-driver.html for more
information about Dell EMC Unity Cinder driver.
Reference https://docs.openstack.org/manila/train/admin/emc_unity_driver.html for more information about the Dell EMC
Unity Manila driver.

Dell EMC SC series storage


Dell EMC SC series storage, another supported and validated storage option, now supports Glance (Image) storage via Cinder in
addition to Cinder (Block) storage.

NOTE: Other storage options are available upon request from Professional Services.

Reference https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/dell_storage_center_back_end_guide/index for more information.
Reference https://access.redhat.com/ecosystem/software/1610473 for more information about certification for Dell EMC SC
series storage.

Dell EMC PowerMax storage


One of the supported and validated Dell EMC storage options is Dell EMC PowerMax NVMe data storage, which can
provide Glance (Image) and Cinder (Block) storage support.

NOTE: Other storage options are available upon request from Professional Services.

With end-to-end NVMe, storage class memory (SCM) for persistent storage, real-time machine learning, and up to 350 GB per
second data transfer, PowerMax delivers the speed to power your most critical workloads.
● VMAX2 Series: VMAX 40K, 20K, 10K
● VMAX3 Series: VMAX 400K, 200K, 100K
● VMAX All Flash Series: VMAX 250F, 450F and 850F
● PowerMax Series 2000 and 8000 (This RA has validated the PowerMax Series 2000 platforms)
Reference https://www.delltechnologies.com/en-us/storage/powermax.htm for more information about Dell EMC PowerMax
storage NVMe Data Storage.
Reference https://access.redhat.com/ecosystem/software/1572113 for more information about certifications for Dell EMC
VMAX and PowerMax.
Reference https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/dell-emc-powermax-driver.html for
more information about the Dell EMC PowerMax Cinder driver.

Dell EMC PowerStore


The ground-breaking Dell EMC PowerStore achieves new levels of operational simplicity and agility, utilizing a container-based
architecture, advanced storage technologies, and intelligent automation to unlock the power of your data.
Based on a scale-out architecture and hardware-accelerated advanced data reduction, PowerStore is designed to deliver
enhanced resource utilization and performance that keeps pace with application and system growth.
Utilizing the proven capabilities of VMware ESXi, PowerStore X models with AppsON provide the unique ability to host data-
intensive and storage applications directly on the PowerStore system with a storage-based virtualization environment, with the
flexibility of seamless movement of applications between the storage system and external VMware servers.
PowerStore T models provide organizations with all the benefits of an enterprise unified storage platform for block, file and vVol
data, while enabling flexible growth with the intelligent scale-up AND scale-out capability of appliance clusters.

6
Network architecture
This Reference Architecture Guide supports consistency in rapid deployments through minimal network configuration.
Topics:
• Network architecture overview
• Infrastructure layouts
• Network components

Network architecture overview


The Dell Technologies Reference Architecture for Red Hat OpenStack Platform uses two Dell EMC Networking S5232F-ON switches as Leaf switches and the S3048-ON (S4048-ON optional) as the management switch.

Infrastructure layouts
● Core network infrastructure - The connectivity of aggregation switches to the core for external connectivity.
● Data network infrastructure - The server NICs, Leaf switches, and the aggregation switches.
● Management network infrastructure - The BMC management network, consisting of iDRAC ports and the out-of-band
management ports of the switches, is aggregated into a 1-rack unit (RU) S3048-ON switch in one of the three racks in the
cluster. This 1-RU switch in turn can connect to one of the aggregation or core switches to create a separate network with a
separate VLAN.

Network components
The data network is primarily composed of the ToRToR and the aggregation switches. The following component blocks make up
this network:
● Server nodes
● Leaf Switches
● Spine switches
● Layer-2 and Layer-3 Switching
● VLANs
● Management network services
● Dell EMC OpenSwitch solution

Server nodes
In order to create a highly-available solution, the network must be resilient to loss of a single network switch, NIC, or bad cable.
To achieve this, the network configuration uses bonding across the servers and switches.
There are several types (or modes) of bonding, but only one is recommended for the solution. The controller nodes, compute nodes, Red Hat
Ceph storage nodes, and Solution Admin Host (SAH) can use 802.3ad or LACP (mode = 4).
NOTE: Other modes, such as balance-rr (mode=0), balance-xor (mode=2), broadcast (mode=3), balance-tlb
(mode=5), and balance-alb (mode=6), are not supported. Please check with your technician for current support status
of active-backup (mode = 1).

All nodes' endpoints are terminated to switch ports that have been configured for LACP bonding mode across two S5232F-ON
ToR switches for 25GbE/100GbE configured with a Virtual Link Trunking interconnect (VLTi) across them.
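As a quick post-deployment check (the bond name below is an example; actual interface and bond names are defined by the deployment templates), the negotiated bonding mode can be confirmed on a node:

$ cat /proc/net/bonding/bond1 | grep -i "bonding mode"   # Linux bonds report: IEEE 802.3ad Dynamic link aggregation
$ sudo ovs-appctl bond/show                              # OVS bonds report the bond_mode and LACP status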

Please contact your Dell EMC sales representative for other viable options.

Table 13. Bonding nodes supported


Node type Bonding type
Solution Admin Host (SAH) 802.3ad (LACP mode 4)
OpenStack controller nodes Yes (solution default)
OpenStack compute nodes Yes (solution default)
Red Hat Ceph storage nodes Yes (solution default)
OpenStack Hyper-Converged Infrastructure nodes Yes (solution default)

A single port is an option when bonding is not required. However, it is neither used nor validated in the Dell Technologies
Reference Architecture for Red Hat OpenStack Platform. The need to eliminate single points of failure is taken into
consideration as part of the design, and this option has been eliminated wherever possible.
Please contact your Dell EMC sales representative for other configurations.

Leaf switches
Dell EMC’s recommended architecture uses VLT for HA between the two Leaf switches, which enables the servers to terminate
their Link Aggregation Group (LAG) interfaces (or bonds) into two different switches instead of one. This configuration enables
active-active bandwidth utilization and provides redundancy within the rack if one Leaf switch fails or requires maintenance. The Dell
EMC recommended Leaf switch for 10/25/100GbE connectivity is the S5232F-ON.
The Leaf switches are responsible for providing the different network connections such as tenant networks and storage
networks between the compute, controller, and storage nodes of the OpenStack deployment.

Spine switches
NOTE: Please contact your Dell EMC sales representative for aggregation switch recommendations.

Layer-2 and Layer-3 switching


The Reference Architecture Guide uses layer-2 as the reference up to the Leaf layer, which is why VLT is used on the Leaf
switches.
The network links - Provisioning, Storage Clustering, Storage, and Management - can have uplinks to a gateway device. The
Provisioning network can use the Red Hat OpenStack Director as a proxy for pulling packages from a subscription server, or a
gateway can be added. The Red Hat Ceph storage, Dell EMC Unity, SC series storage, or Dell EMC PowerMax storage arrays
may need access from metrics and monitoring tools to enable management and updates.
NOTE: For Dell EMC Unity shared file systems storage (Manila), the Dell EMC Unity storage array requires additional
access to networks. Options are external network VLAN (floating network) or internal network VLAN for tenants (tenant
network) based on the use case. To support this functionality, administrators need to configure the Dell EMC Networking
S5232F-ON or Dell EMC Networking S5224F-ON ethernet ports to allow access for the storage (untagged) and floating/
tenant (tagged) VLANs.
After you add the gateway to the network and update the iDRAC, you can use many tools for OOB management.
These networks are connected to a gateway device, usually a router or firewall, which handles routing for all networks external to
the cluster. The required networks are:
● The floating IP range used by virtual machines.
● A network for all external public API and access.

VLANs
This Reference Architecture Guide implements at a minimum nine (9) separate Layer 2 VLANs:

● External network VLAN for tenants—Sets up a network that will support the floating IPs and default external gateway
for tenants and virtual machines. This connection is through a router external to the cluster (see the example following this list).
● Internal networks VLAN for tenants—Sets up the backend networks for OpenStack Nova Compute and the VMs to use.
● Management/Out of Band (OOB) network—iDRAC connections can be routed to an external network. All OpenStack HA
controller nodes need direct access to this network for IPMI operations.
● Private API network VLAN—Used for communication between controller nodes, the Red Hat OpenStack Director, and compute nodes for
Private API and cluster communications.
● Provisioning network VLAN—Connects a NIC from all nodes into the fabric, used for setup and provisioning of the OpenStack
servers and access to the Red Hat Ceph storage dashboard.
● Public API network VLAN—Sets up the network connection to a router that is external to the cluster. The network is
used by the front-end network for routable traffic to individual VMs, access to the OpenStack API, RADOS Gateway, and
the Horizon dashboard. Depending upon the network configuration these networks may be either shared or routed, as needed. The Red
Hat OpenStack Director requires access to the Public API network.
● Storage Clustering network VLAN—Used by all storage nodes for replication and data checks (Red Hat Ceph storage
clustering).
● Storage network VLAN—Used by all nodes for the data plane reads/writes to communicate to OpenStack Storage; setup,
and provisioning of the Red Hat Ceph storage cluster; and when included, the Dell EMC Unity storage or SC series storage
arrays.
● Tenant tunnel network VLAN—Used by tenants for encapsulated networks, such as VXLAN tunnels, in place of the Internal
networks VLAN for tenants.
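For illustration only (the physical network name, VLAN ID, and subnet below are placeholders, not values prescribed by this guide), the External network VLAN for tenants is typically exposed to OpenStack as an external provider network that backs the floating IP pool:

$ openstack network create --external --provider-network-type vlan \
  --provider-physical-network datacentre --provider-segment 191 external-net
$ openstack subnet create --network external-net --subnet-range 203.0.113.0/24 \
  --allocation-pool start=203.0.113.100,end=203.0.113.200 --no-dhcp external-subnet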

Management network services


The management network and the provisioning network for all the servers and switches aggregate into a Dell EMC Networking
S3048-ON switch.
NOTE: Both the default management switch, S3048-ON, and the optional S4048-ON switch are valid for use in the Dell
Technologies Reference Architecture for Red Hat OpenStack Platform Reference Architecture Guide Version 16.1.
The management network services are used for several functions:
● The highly available software uses it to reboot and partition servers.
● An uplink to a router and an iDRAC-configured gateway allow the servers to be monitored and metrics to be gathered.
NOTE: Discussion of this topic is beyond the scope of this document.

Dell EMC OpenSwitch solution


In addition to the Dell EMC switch-based architecture, Dell EMC provides an open standard that enables you to choose other
brands and configurations of switches for your OpenStack environment.
The following list of requirements will enable other brands of switches to properly operate with Dell EMC's required tools and
configurations:
NOTE: You are expected to ensure that the switches conform to these requirements, and that they are configured
according to this Reference Architecture Guide’s guidelines. These requirements are the minimum needed for a successful
deployment and should be treated as such. Performance of the solution is directly affected by the performance of the
Network.
● Support for IEEE 802.1Q VLAN traffic and port tagging
● Support for using one untagged, and multiple tagged VLANs, on the same port
● Support for using bonded interfaces as a single interface for TFTP/DHCP booting.
● Support for the ability to provide a minimum of 96 x 25GbE Ethernet ports in a non-blocking configuration within the
provisioning VLAN
○ Configuration can be a single switch or a combination of stacked switches to meet the additional requirements
● Support for the ability to create LAGs with a minimum of two physical links in each LAG
● Support for the following when multiple switches are stacked:
○ The ability to create a LAG across stacked switches
○ Full-bisection bandwidth
○ Support for VLANs to be available across all switches in the stack
● Support for 250,000 packets-per-second capability per switch

● Support for a managed switch that supports SSH and serial line configuration
● Support for SNMP v3

7
Network Function Virtualization (NFV) support
This Reference Architecture Guide supports the following Network Function Virtualization (NFV) features:
● NUMA
● Hugepages
● OVS-DPDK
● SR-IOV
● SR-IOV with Open vSwitch (OVS) offload
● DVR
NOTE: All the NFV features on AMD compute nodes are supported with processor mode NPS=1 (NUMA nodes per socket). Before
deployment, make sure AMD servers are set to NPS mode 1 in the BIOS.

NOTE: All of the above NFV features can be enabled by using the JetPack version 16.1 automation toolkit.

NOTE: Red Hat OpenStack Platform version 16.1 has the Open Virtual Network (OVN) controller as the default software-defined
networking solution for supplying network services to OpenStack. However, OVN has limited NFV feature
support at this time. The JetPack version 16.1 automation toolkit installs Red Hat OpenStack Platform version 16.1 with the OVS
controller as the default, to fully support all the NFV features.

Topics:
• NUMA Optimization and CPU pinning
• Hugepages
• OVS-DPDK
• SR-IOV
• SR-IOV OVS-hardware-offload with VF-LAG
• Distributed Virtual Router (DVR)

NUMA Optimization and CPU pinning


The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable NUMA
optimization and CPU pinning support on all compute nodes at the core or the edge site(s) in the solution.
Non-Uniform Memory Access (NUMA) allows multiple CPUs to share L1, L2, and L3 caches, and main memory. When running
workloads on NUMA hosts, it is important that the vCPUs executing processes are on the same NUMA node as the memory
used by these processes. This ensures that all memory accesses are local to the node and do not consume the limited
cross-node memory bandwidth, which would add latency to memory accesses. Similarly, large pages are assigned from node-local
memory and benefit from the same locality-related performance improvements as memory allocated using standard pages.
For real-time workloads, it is beneficial to control which host CPUs are bound to an instance's vCPUs. This process is known as
CPU pinning. No instance with pinned CPUs can use the CPUs of another pinned instance, thus preventing resource contention
between instances.
NOTE: The NUMA feature can be enabled by the JetPack version 16.1 automation toolkit and enables both NUMA
optimization and vCPU pinning.
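As a minimal sketch (the flavor name and sizing are examples only), an instance requests pinned vCPUs through standard Nova flavor extra specs once the feature is enabled:

$ openstack flavor create --vcpus 4 --ram 8192 --disk 40 pinned.medium
$ openstack flavor set --property hw:cpu_policy=dedicated pinned.medium

Instances launched with this flavor are scheduled onto dedicated host CPUs on the NUMA-optimized compute nodes.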

Hugepages
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable
hugepages support on the compute nodes at the core or the edge site(s) in the solution.



If a program or a VM instance uses large data structures, then the use of small (4 KB) pages leads to the fragmentation
of those data structures and a loss of data locality. With hugepages of 2 MB and 1 GB sizes, programs and VM instances can
improve data locality, leading to a higher-performing and lower-latency architecture.

NOTE: Hugepages can be enabled by the JetPack version 16.1 automation toolkit.
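As a sketch (the flavor name and sizing are examples only), a guest requests hugepage-backed memory through a flavor extra spec; the page size must match one configured on the compute nodes:

$ openstack flavor create --vcpus 4 --ram 8192 --disk 40 hugepage.medium
$ openstack flavor set --property hw:mem_page_size=1GB hugepage.medium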

OVS-DPDK
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable
OVS-DPDK support based on two ports or four ports on the compute nodes at the core or the edge site(s) in the solution.
Open vSwitch (OVS) is a multilayer software/virtual switch used to interconnect virtual machines in the same host and
between different hosts. OVS makes use of the kernel for packet forwarding through a data path known as the fastpath, which
consists of a simple flow table with action rules for the received packets. Exception packets, or packets with no corresponding
forwarding rule in the flow table, are sent to user space (the slowpath). Switching between the two memory spaces creates
significant overhead, which is what makes the user space path slow. User space makes the forwarding decision and updates the
flow table in kernel space so that subsequent packets can be forwarded in the fastpath.
The OVS kernel module acts as a cache for user space, and like any cache, its performance improves as more of the user space
rules are cached in the kernel.
DPDK (Data Plane Development Kit) eliminates packet buffer copies. It does this by running a dedicated poll-mode driver, and
allocating hugepages for use as a packet buffer, then passing pointers to the packets. The elimination of copies leads to higher
performance. OVS, when enabled to use DPDK-controlled physical NIC interfaces, experiences a tremendous boost in packet
delivery performance. It is also advantageous that both OVS and DPDK can operate in user space, thus reducing kernel context
switches and improving packet processing efficiency.

NOTE: Enabling OVS-DPDK requires both Hugepages and NUMA (CPU pinning) to be enabled.

NOTE: OVS-DPDK can be enabled by the JetPack version 16.1 automation toolkit.
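For reference (output and key values vary with the deployed configuration), the DPDK datapath settings applied to OVS can be inspected on a compute node with standard ovs-vsctl queries:

$ sudo ovs-vsctl get Open_vSwitch . other_config:dpdk-init
$ sudo ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask

Instances attached to an OVS-DPDK datapath must use a hugepage-backed flavor, such as the one shown in the Hugepages section above.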

SR-IOV
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform 16.1 provides the ability to enable SR-IOV
support based on two ports and/or four ports on the compute nodes at the core or the edge site(s) in the solution.
Single root I/O virtualization (SR-IOV) is an extension to the PCI Express (PCIe) specification. SR-IOV enables a single PCIe
device to appear as multiple, separate virtual devices. Traditionally in a virtualized environment, a packet has to go through an
extra layer of the hypervisor, that results in multiple CPU interrupts per packet. These extra interrupts cause a bottleneck in
high traffic environments. SR-IOV enabled devices have the ability to dedicate isolated access to its resources among various
PCIe hardware functions. These functions are later assigned to the virtual machines which allow direct memory access (DMA)
to the network data.
Enabling SR-IOV requires both NUMA and Hugepages as SR-IOV workloads need stable CPU performance in a production
environment.
By default, SR-IOV is not enabled in the Dell EMC PowerEdge R-Series system BIOS; if it remains disabled, virtual
functions are not created on the NIC interfaces when SR-IOV is deployed.
Use the following steps to set the virtualization mode to SR-IOV and VF count to 64 in the Dell EMC PowerEdge R-Series
system BIOS device settings for Mellanox NICs:
1. Enter system BIOS during boot by pressing F2.
2. Select Device Settings.
3. Select the Network Interface Card.
NOTE: SR-IOV has been validated with Mellanox ConnectX-5 / Intel XXV 710 NICs. Below is an example of SR-IOV
configuration with Mellanox NIC
4. Select the Device Level Configuration option.



Figure 1. Mellanox configuration
5. Change the virtualization mode from None to SR-IOV and the PCI Virtual Functions Advertised from the
default setting eight to the desired value.
NOTE: Dell EMC has tested with a value of 64.
6. Repeat steps three through five for each Mellanox ConnectX-5 100GbE adapter where SR-IOV will be enabled.
7. Save BIOS settings and reboot the system.
NOTE: SR-IOV can be enabled for both Mellanox ConnectX-5 and Intel XXV 710 NICs by the JetPack version 16.1
automation toolkit.
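A minimal sketch (the network, flavor, image, and port names are illustrative) of attaching an instance to an SR-IOV virtual function by creating a Neutron port with the direct VNIC type:

$ openstack port create --network sriov-provider-net --vnic-type direct sriov-port-1
$ openstack server create --flavor pinned.medium --image rhel8 --port sriov-port-1 sriov-vm-1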

SR-IOV OVS-hardware-offload with VF-LAG


The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable
SR-IOV offloading with VF-LAG (bonding of PFs) functionality based on two ports on all the compute nodes at the core in the
solution.
Open vSwitch hardware offload is a RHOSP feature which takes advantage of single root input/output virtualization (SR-IOV).
An OVS software based solution is CPU intensive, affecting system performance and preventing full utilization of available
bandwidth. This bottleneck can be addressed by making use of the OVS hardware offload capability, where the OVS data plane
is moved to the underlying offloading capable smart NIC, while keeping the OVS control plane unmodified. This enables higher
OVS performance without the associated CPU load.
In this Dell Technologies Reference Architecture for Red Hat OpenStack Platform 16.1, Dell EMC has validated this feature
enabling two port SR-IOV offload capability with VF-LAG functionality using Mellanox 100GbE ConnectX-5 network adapters for
tenant networks with bonding. Below is a logical diagram showing two port SR-IOV OVS Offload connections to the Dell EMC
switches.



Figure 2. Two port SR-IOV Dell EMC offload

NOTE: SR-IOV offloading has been tested with Mellanox ConnectX-5 100G NICs only.

NOTE: For SR-IOV offload with VF-LAG functionality to work, both ports of the bond should originate from a single
Mellanox ConnectX-5 NIC.

NOTE: SR-IOV offloading can be enabled by the JetPack version 16.1 automation toolkit.
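As a sketch (names are illustrative), ports intended for hardware offload are created with the direct VNIC type and the switchdev capability in the binding profile, then attached to an instance:

$ openstack port create --network tenant-net --vnic-type direct \
  --binding-profile '{"capabilities": ["switchdev"]}' offload-port-1
$ openstack server create --flavor pinned.medium --image rhel8 --port offload-port-1 offload-vm-1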

Distributed Virtual Router (DVR)


The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable DVR
on all the compute nodes in the solution.
DVR offers an alternative routing design to the centralized routing model. It isolates the failure domain of the controller
node and optimizes network traffic by deploying the L3 agent and scheduling routers on every compute node. By eliminating a
centralized layer 3 agent, the routing that was performed by a single node (the primary controller) is now distributed across the
compute nodes using the local L3 agent. DVR follows these routing flow rules:
● East-West traffic is routed directly on the compute nodes in a distributed fashion.
● North-South traffic with a floating IP is distributed and routed on the compute nodes. This traffic requires that the external
network be connected to each compute node.
● North-South traffic without a floating IP (SNAT traffic) is not distributed and still needs a dedicated controller node.
● The L3 agent on the controller node is configured with a new dvr_snat mode so that the node serves only SNAT traffic.
● The Neutron metadata agent is distributed and deployed on all compute nodes. The metadata proxy service is hosted on all the
distributed routers.
NOTE: DVR can be enabled by the JetPack version 16.1 automation toolkit.
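For reference, after a DVR-enabled deployment the distributed L3 agents can be listed with a standard query; compute nodes should appear alongside the controllers, and the output varies with node count:

$ openstack network agent list --agent-type l3 -c Host -c "Agent Type" -c Alive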



8
Additional Red Hat/OpenStack features
This Reference Architecture Guide supports the following additional Red Hat OpenStack features:
● Barbican
● Octavia
● Satellite Support
NOTE: These features can be enabled by using the JetPack version 16.1 automation toolkit.

Topics:
• Barbican
• Octavia
• Satellite

Barbican
Barbican is the secrets manager for Red Hat OpenStack Platform. You can use the Barbican API and command line to centrally
manage the certificates, keys, and passwords used by OpenStack services.
Symmetric encryption keys are used for:
● Block Storage (cinder) volume encryption
● Ephemeral disk encryption
● Object Storage (swift) encryption
Asymmetric keys and certificates are used for:
● Glance image signing and verification
NOTE: Barbican can be enabled by the JetPack version 16.1 automation toolkit.
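A minimal example (the secret name and payload are placeholders) of storing and retrieving a secret with the Barbican extensions to the openstack client:

$ openstack secret store --name demo-key --payload 'example-passphrase'
$ openstack secret list
$ openstack secret get <secret href returned by the previous commands>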

Octavia
Octavia is the OpenStack Load Balancer as a Service (LBaaS) version 2 implementation for the Red Hat OpenStack Platform.
It accomplishes its delivery of load balancing services by managing a fleet of virtual machines, collectively known as amphorae,
which it spins up on demand.
Load balancing methods -
● Round robin - Rotates requests evenly between multiple instances.
● Source IP - Requests from a unique source IP address are consistently directed to the same instance.
● Least connections - Allocates requests to the instance with the least number of active connections.
NOTE: Octavia can be enabled by the JetPack version 16.1 automation toolkit.
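A hedged example (the subnet, names, and member address are placeholders) of building a simple round robin HTTP load balancer with the Octavia CLI:

$ openstack loadbalancer create --name lb-1 --vip-subnet-id tenant-subnet
$ openstack loadbalancer listener create --name listener-1 --protocol HTTP --protocol-port 80 lb-1
$ openstack loadbalancer pool create --name pool-1 --lb-algorithm ROUND_ROBIN --listener listener-1 --protocol HTTP
$ openstack loadbalancer member create --address 192.0.2.10 --protocol-port 80 pool-1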

Satellite
JetPack support for Red Hat Satellite 6.5 gives the user the ability to deploy from a Satellite instance. All nodes within the Dell
Technologies Reference Architecture for Red Hat OpenStack Platform can register and pull required packages and container
images from the Satellite instance, instead of from the Red Hat Content Delivery Network (CDN).

NOTE: Satellite support can be enabled by the JetPack version 16.1 automation toolkit.



9
Distributed Compute Nodes (DCN)
An OpenStack based cloud is now a common need by many organizations. Dell Technologies and Red Hat have worked together
to build a jointly engineered and validated architecture using Red Hat OpenStack Platform's Distributed Compute Nodes
(DCN) capability, that details software, hardware, and integration points of all solution components. The architecture provides
prescriptive guidance and recommendations for:
● Edge compute node
● Network design
● Software layout
● Other system configurations
Topics:
• Overview
• DCN topology example
• Red Hat OpenStack Platform and DCN further reading
• Hardware options
• Service layout
• Deployment overview
• DCN solution architecture
• Solution expansion

Overview
DCN, as part of 16.1, leverages OpenStack features like Availability Zones (AZ) and provisioning over routed L3 networks with
Ironic, to enable deployment of compute nodes to remote locations. For example, a service provider may deploy several DCN
sites to scale out a virtual Radio Access Network (vRAN) implementation.
DCN has several caveats that must be considered when planning remote compute site deployment(s):
● Only compute can be run at an edge site; other services, such as persistent block storage, are not supported.
● Image considerations - Overcloud images for bare-metal provisioning of the remote compute nodes are pulled from the
undercloud. Also, instance images for VMs running on nodes will initially be fetched from the control plane the first time they
are used. Subsequent instances will use the locally cached image. Images are large files, implying a fast, reliable connection
to the Red Hat Director node and control plane is required.
● Networking:
○ Latency - a round-trip between the control plane and remote site must be under 100ms or stability of the system could
become compromised.
○ Drop-outs - If a site temporarily loses its connection to the control plane, then no OpenStack control plane API or CLI
operations can be executed until connectivity is restored to the site. For example, existing workloads will continue to
run, but no new instances can be started until the connection is restored. Any control functions like snapshotting, live
migration, and so on cannot occur until the link between the central cloud and site is restored, as all control features are
dependent on the control plane being able to communicate with the site.
NOTE: Connectivity issues are DCN site specific. Losing connection to one DCN site does not affect other DCN
sites.
○ This guide recommends using provider networks for DCN workloads at this time. Depending on the type of workloads you
are running on the nodes, and existing networking policies, there are several ways for configuring instance IP addressing:
■ Static IPs using config-drive in conjunction with cloud-init - Utilizing config-drive as the metadata API server
leverages the virtual media capabilities of Nova, which means there are no Neutron metadata agents or DHCP relay
required to assign an IP address to instances.
■ DHCP relay - Forwards DHCP requests to Neutron at central site.
NOTE: A separate DHCP relay instance is required for each provider network.
■ External DHCP server at site - in this case instance IP addresses are not managed by Neutron.



○ Inter-compute node awareness - A limitation of Neutron is that it is not able to identify individual compute nodes as
remote or local. Therefore each compute node across all DCN sites, including the central cloud, will have a list of every
other compute node. Depending on your networking configuration this can happen by:
■ Using VXLAN - First, the same Neutron networks must be configured at every site, then every compute node will
build a VXLAN tunnel (through the control plane) to all controllers and compute nodes, regardless of whether they are
remote or local.
■ Using VLAN only - This method requires that identical network bridges and VLANs are used across all sites.

DCN topology example


It is possible with DCN to deploy any number of remote compute sites, each with its own AZ. Once the site is provisioned,
specifying the site's AZ when launching an instance is all that is required to launch an instance, for example:

openstack server create --flavor tiny --image cirros \
  --network edge-management-vxlan \
  --security-group edge-management \
  --availability-zone AZ_DCN_2 \
  edge_vm_1

The following is an example of DCN topology.



Figure 3. DCN topology

Red Hat OpenStack Platform and DCN further reading


For more on Red Hat OpenStack Platform 16.1 and DCN please see:



● Red Hat OpenStack Platform 16.1
● Deploying Distributed Compute Nodes to Edge Sites
● Routed L3 Networks (Spine Leaf Networking)

Hardware options
To reduce time spent on specifying hardware for an initial edge deployment, this Reference Architecture Guide offers a full
solution using validated Dell EMC PowerEdge server hardware designed to allow a wide range of configuration options, including
optimized configurations for compute nodes.
Dell EMC recommends starting an edge site deployment using components from this Reference Architecture Guide - Version
16.1. These hardware and operations processes comprise a flexible foundation upon which to expand as your site deployment(s)
grow, so your investment is protected.
As noted throughout this Reference Architecture Guide - Version 16.1, Dell EMC constantly adds capabilities to expand this
offering, and other hardware may be available at the time of this reading. Please contact your Dell EMC sales representative for
more information on new hardware releases.

Service layout
During a DCN site deployment, a subset of OpenStack services will be installed on each compute node.

Table 14. compute node services


Hypervisor (KVM)
Nova compute services
Neutron (ovs/ml2 agent)
Heat agent

Deployment overview
This is an overview of the DCN deployment process that can be utilized for planning purposes:

NOTE: The DCN feature can be enabled by the JetPack version 16.1 automation toolkit and can enable multiple DCN sites.

1. Hardware setup:
● Rack and stack
● Cabling
● iDRAC setup
● PXE NIC configuration
● Server BIOS and RAID configuration
● Switch configuration
2. Software setup at each DCN site:
● Install a DHCP relay(s) used to forward DHCP traffic to Neutron running on the central cloud control plane
○ At least one DHCP relay is required for provisioning compute nodes at the DCN site
○ Where the DHCP relay(s) are installed is up to the customer
● Discover edge site nodes
● Import discovered nodes into Red Hat OpenStack Director
● Configure overcloud files for each DCN site
● Provision DCN site
● Validate DCN nodes' networking
3. Environment tests
● Once Red Hat OpenStack Director has completed a DCN site deployment, a typical set of validation tasks will include:
4. Create a flavor,



● Create a flavor with the supporting metadata to target the DCN site when launching an instance, for example:

$ openstack flavor create --property dcn-site-1=true dcn-site-1-flavor
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| id | 4bc6c3e7-42be-4620-ae84-819459baf496 |
| name | dcn-site-1-flavor |
| os-flavor-access:is_public | True |
| properties | dcn-site-1='true' |
+----------------------------+--------------------------------------+

5. Launch an instance,
● Launch an instance using the new flavor and DCN site provider network, for example:

$ openstack server create --flavor dcn-site-1-flavor \
  --network dcn-site-1-provider-net dcn-site-1-vm-1 ...
+-----------+---------------------------------------+
| Field | Value |
+-----------+---------------------------------------+
| flavor | dcn-site-1-flavor (4bc6c3e7-42be-...) |
| hostId | 2ef6c3e7-32ae-2120-ae84-819459baf488 |
| name | dcn-site-1-vm-1 |
| addresses | dcn-site-1-provider-net=192.168.1.123 |
| ... | ... |
+-----------+---------------------------------------+

6. Validate the instance
● Validate that the instance is running and that networking and addressing are correct.
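For example (the instance name matches the example above; the column selection is only a convenience):

$ openstack server show dcn-site-1-vm-1 -c status -c addresses -c OS-EXT-AZ:availability_zone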

DCN solution architecture


Solution bundle 25GbE/100GbE network configuration
The network for the Dell Technologies Reference Architecture for Red Hat OpenStack Platform has been designed to support
production-ready servers with a highly available network configuration.

NOTE: See 25GbE cluster network logical architecture for Edge profile for a logical architectural layout.

Solution expansion
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform can be expanded by:
● Adding compute nodes to an existing DCN site deployment(s). No more than 20 compute nodes per site are supported at
this time.
● Deploy additional DCN site(s)
NOTE: Currently, Reference Architecture Guide version 16.1 supports up to a total of 700 compute nodes across all sites,
including the core OpenStack installation. For other expansion details, please speak with your Dell EMC sales representative.



10
Operational notes
This section provides a basic overview of several important system aspects.
Topics:
• High availability (HA)
• Service layout
• Deployment overview

High availability (HA)


In order for the solution to be ready for production, different systems need to be fault-tolerant. The Dell Technologies
Reference Architecture for Red Hat OpenStack Platform design utilizes both hardware-based and software-based redundancy.
This includes, but is not limited to:
● Operating systems are hosted on either a RAID 1 or RAID 10 hard drive set.
● Critical network connections from server to switch utilize network bonding.
● Multiple controllers host the control plane services. Minimally, three controller nodes are required.
● Control plane services are made highly available utilizing ha-proxy, Corosync, Pacemaker, and/or native resiliency.
● Red Hat Ceph storage utilizes a minimum of three servers.
● Red Hat Ceph storage is used with either replication or erasure coding.
● Optional: Instance high availability
This validated option utilizes remote Pacemaker to monitor the compute nodes. If preset criteria are met, the process of
migrating instances off of the failing compute nodes to others begins. If a compute node completely fails, Pacemaker can
be configured to start the failed instances on different compute nodes.
● Optional: Dell EMC Unity storage , SC series storage , and Dell EMC PowerMax storage arrays are highly available.
NOTE: The Solution Admin Host (SAH), and the server hosted on it (Red Hat OpenStack Director), are not fault tolerant,
but are not required for continued functionality of the Dell Technologies Reference Architecture for Red Hat OpenStack
Platform cluster.

Service layout
During the deployment each service configured by the Dell Technologies Reference Architecture for Red Hat OpenStack
Platform needs to reside upon a particular hardware type. For each server platform, two types of nodes have been designed:
● Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750 for compute nodes, controller nodes, Solution Admin Hosts
(SAHs), or other infrastructure hardware.
● Dell EMC PowerEdge R750 for storage nodes.
Red Hat OpenStack Director is designed for flexibility, enabling you to try different configurations in order to find the optimal
service placement for your workload. Overcloud: Node type to services presents the recommended layout of each service.
The Red Hat OpenStack Director is deployed to the Solution Admin Host (SAH) as an individual VM. This enables the VM to
control its respective resources.

Table 15. Overcloud: Node type to services


Node to deploy Service
Solution Admin Host (SAH) (KVM) Red Hat OpenStack Director
OpenStack controllers Red Hat Ceph Storage Dashboard
OpenStack controllers Cinder-scheduler

OpenStack controllers Cinder-volume
OpenStack controllers Database-server
OpenStack controllers Glance-Image
OpenStack controllers HAproxy (Load Balancer)
OpenStack controllers Heat
OpenStack controllers Keystone-server
OpenStack controllers Neutron-server
OpenStack controllers Nova-controller
OpenStack controllers Nova dashboard-server
Three or more compute nodes Nova-multi-compute
OpenStack controllers Pacemaker
OpenStack controllers RabbitMQ-server (Messaging)
OpenStack controllers Barbican (Secure management of secrets)
OpenStack controllers Octavia (LBaaS)
OpenStack controllers Red Hat Ceph storage RADOS gateway
OpenStack controllers Red Hat Ceph storage Monitor a

Three or more storage servers Red Hat Ceph storage (Block)


Optional services
Dell EMC Unity storage
Dell EMC PowerMax storage
Dell EMC SC series storage
Dell EMC SC series storage Enterprise Manager Server

a. The number of OSDs that can be supported with three Controller nodes is listed at 1,000 (https://access.redhat.com/
articles/1548993)

Deployment overview
This is an overview of the deployment process that can be utilized for planning purposes:
Hardware setup:
● Rack and stack
● Cabling
● iDRAC setup
● PXE NIC configuration
● Server BIOS and RAID configuration
● Switch configuration
Software setup:
● Deploy SAH for provisioning services:
○ Deploy Red Hat OpenStack Director Virtual Server (VM) to the SAH.
● Discover nodes
● Import discovered nodes into Red Hat OpenStack Director
● Configure overcloud files

● Provision overcloud
● Validate all nodes networking
● Post-deployment, including but not limited to:
○ Enabling fencing
○ Enabling local storage for ephemeral
Environment tests
● Tempest can be used to validate the deployment (a sample invocation follows this list). At minimum the following tests should be performed:
○ Project creation
○ User creation
○ Network creation
○ Image upload and launch
○ Floating IP assignment
○ Basic network testing
○ Volume creation and attachment to VM
○ Object storage upload, retrieval and deletion
○ Deletion of all artifacts created during validation.
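For example, once a tempest.conf has been generated for the overcloud, a smoke run can be started from a Tempest workspace (the workspace name is arbitrary, and the exact invocation may differ with your validation tooling):

$ tempest init rhosp-validation && cd rhosp-validation
$ tempest run --smoke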

11
Solution architecture
This core architecture provides prescriptive guidance and recommendations, jointly engineered by Dell EMC and Red Hat, for
deploying Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 with Dell EMC infrastructure.
The goals are to:
● Provide practical system design guidance and recommended configurations
● Develop tools to use with OpenStack for day-to-day usage and management
● Develop networking configurations capable of supporting your production system
The development of this architecture builds upon the experience and engineering skills of Dell EMC and Red Hat, and
encapsulates best practices developed in numerous real-world deployments. The designs and configurations in this architecture
have been tested in Dell EMC and Red Hat labs to verify system functionality and operational robustness.
The solution consists of the components shown in Solution with 25GbE/100GbE, Red Hat Ceph storage cluster, optional Dell
EMC Unity storage, optional Dell EMC PowerMax storage and optional SC series storage, and represents the base upon which
all optional components and expansion of the Dell Technologies Reference Architecture for Red Hat OpenStack Platform are
built.
Topics:
• Solution common settings
• Solution with 25GbE/100GbE networking overview

Solution common settings


Many settings are common throughout the Solution. The configurations that were tested are outlined in this section.

Solution Admin Host (SAH) networking


The Solution Admin Host is configured for 25GbE with the server internal bridged networks for the Virtual Machines. It is
physically connected to the following networks:
● Management Network—Used by the Red Hat OpenStack Director for iDRAC control of all Overcloud nodes.
● Private API Network—Used by the Red Hat OpenStack Director to run Tempest tests against the OpenStack private API
● Provisioning Network—Used by the Red Hat OpenStack Director to service DHCP to all hosts, provision each host, and act
as a proxy for external network access
● Public API Network—Used for:
○ Inbound Access
■ HTTP/HTTPS access to the Red Hat OpenStack Director
■ Optional - SSH Access to the Red Hat OpenStack Director
○ Outbound Access
■ HTTP/HTTPS access for Red Hat Ceph storage, RHEL, and RHOSP subscriptions.
■ Used by the Red Hat OpenStack Director to run Tempest tests using the OpenStack public API.

Node type 802.1q tagging information


The solution is designed with the idea that different network traffic should be segregated from other traffic. This is
accomplished by utilizing 802.1q VLAN tagging for the different segments. The tables OpenStack node type to network
802.1q tagging, OpenStack compute node for xSP and CSP profile to network 802.1q tagging, and Storage node type to
network 802.1q tagging summarize this. This segregation is independent of network speed and is used by the 25GbE configuration.

Table 16. OpenStack node type to network 802.1q tagging
Network | Solution Admin Host | OpenStack controller | Red Hat Ceph storage
External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged | Not Connected
iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged
Internal Networks VLAN for Tenants | Not Connected | Connected, Tagged | Not Connected
Management/OOB Network VLAN | Connected, Tagged | Not Connected | Not Connected
Private API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected
Provisioning VLAN | Connected, Tagged | Connected, Untagged | Connected, Untagged
Public API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected
Storage Clustering VLAN | Not Connected | Not Connected | Connected, Tagged
Storage Network VLAN | Connected, Tagged | Connected, Tagged | Connected, Tagged
Tenant Tunnel Network | Not Connected | Connected, Tagged | Not Connected

Table 17. OpenStack Compute Node for xSP and CSP profile to network 802.1q tagging
Network | xSP OpenStack compute | CSP - OpenStack compute NFV
External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged
iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged
Internal Networks VLAN for Tenants | Connected, Tagged | Connected, Tagged
Management/OOB Network VLAN | Not Connected | Not Connected
Private API Network VLAN | Connected, Tagged | Connected, Tagged
Provisioning VLAN | Connected, Untagged | Connected, Untagged
Public API Network VLAN | Not Connected | Not Connected
Storage Clustering VLAN | Not Connected | Not Connected
Storage Network VLAN | Connected, Tagged | Connected, Tagged
Tenant Tunnel Network | Connected, Tagged | Connected, Tagged

Table 18. Storage node type to network 802.1q tagging

Network | Dell EMC Unity | Dell EMC SC series storage Enterprise Manager | Dell EMC SC series storage array
External Network for Tenants VLAN (Floating IP Network) | Connected, Tagged | Not Connected | Not Connected
iDRAC physical connection to the Management/OOB VLAN | Not Connected | Not Connected | Not Connected
Internal Networks VLAN for Tenants | Connected, Tagged | Not Connected | Not Connected
Management/OOB Network VLAN | Not Connected | Not Connected | Not Connected
Provisioning VLAN | Not Connected | Not Connected | Not Connected
Private API Network VLAN | Not Connected | Not Connected | Not Connected
Public API Network VLAN | Not Connected | Connected, Untagged | Not Connected
Storage Network VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged
Storage Clustering VLAN | Not Connected | Not Connected | Not Connected
Tenant Tunnel Network | Not Connected | Not Connected | Not Connected

Table 19. OpenStack node types for HCI profile to network 802.1q tagging
Network HCI OpenStack controller OpenStack
External Network VLAN for Tenants Not Connected Connected, tagged
(Floating IP Network)
iDRAC physical connection to the Connected, Untagged Connected, Untagged
Management/OOB VLAN
Internal Networks VLAN for Tenants Connected, Tagged Connected, Tagged
Management/OOB Network VLAN Not Connected Not Connected
Private API Network VLAN Connected, Tagged Connected, Tagged
Provisioning VLAN Connected, Tagged Connected, Untagged
Public API Network VLAN Not Connected Not Connected
Storage Clustering VLAN Not Connected Not Connected
Storage Network VLAN Connected, Tagged Connected, Tagged
Tenant Tunnel Network Connected, Tagged Connected, Tagged

Solution Red Hat Ceph storage configuration


The Red Hat Ceph storage cluster provides data protection through replication, block device cloning, and snapshots. By default
the data is striped across the entire cluster, with three replicas of each data entity. The number of storage nodes in a single
cluster can scale to hundreds of nodes and many petabytes in size.
Red Hat Ceph storage considers the physical placement (position) of storage nodes within defined fault domains (i.e., rack, row,
and data center) when deciding how data is replicated. This reduces the probability that a given failure may result in the loss of
more than one data replica.
The Red Hat Ceph storage cluster services include:
● Ceph Dashboard—Ceph web based monitoring tool hosted on the Controllers.
● RADOS Gateway—Object storage gateway.
● Object Storage Daemon (OSD)—Running on storage nodes, the OSD serves data to the Red Hat Ceph storage clients from
disks on the storage nodes. Generally, there is one OSD process per disk drive.
● Monitor (MON)—Running on Controller nodes, the MON process is used by the Red Hat Ceph storage clients and internal
Red Hat Ceph storage processes, to determine the composition of the cluster and where data is located. There should be a
minimum of three MON processes for the Red Hat Ceph storage cluster. The total number of MON processes should be odd.
● Ceph Manager Daemon (ceph-mgr)—Running on Controller nodes alongside the MON processes, it provides additional
monitoring and interfaces to external monitoring and management systems.
NOTE: If MON processes on Controller nodes become a bottleneck, then additional MON processes can be added to the
cluster by using dedicated machines, or by starting MON processes on storage Nodes. A custom Services engagement can
be arranged; please contact your Dell EMC sales representative for assistance.
The Storage Network VLAN is described in the Red Hat Ceph storage documentation as the public network. The Storage
Cluster Network VLAN is described in the Red Hat Ceph storage documentation as the cluster network.
A supported distribution by Red Hat with production level support of Ceph is used in this solution: Red Hat Ceph storage 4,
which also includes the Red Hat Ceph Storage Dashboard VM. The Red Hat Ceph Storage Dashboard also includes Red Hat
Ceph storage troubleshooting and servicing tools and utilities. Red Hat Ceph Storage Dashboard is installed on the Controllers.
Note that:
● The SAH must have access to the Controller and Storage nodes through the Private API Access VLAN in order to manage
Red Hat Ceph storage.
● The Controller nodes must have access to the Storage nodes through the Storage Network VLAN in order for the MON
processes on the Controller nodes to be able to query the Red Hat Ceph storage MON processes, for the cluster state and
configuration.
● The Compute nodes must have access to the Storage nodes through the Storage Network VLAN in order for the Red Hat
Ceph storage client on that node to interact with the storage nodes, OSDs, and the Red Hat Ceph storage MON processes.
● The Storage nodes must have access to the Storage Network VLAN, as previously stated, and to the Storage Cluster
Network VLAN.

Solution Admin Host (SAH) networking


The SAH is configured for 25GbE with the server internal bridged networks for the VMs. It is physically connected to the
following networks:
● Management network — Used by the Red Hat OpenStack Director for iDRAC control of all Overcloud nodes.
● Private API network — Used by the Red Hat OpenStack Director to run Tempest tests against the OpenStack private API.
● Provisioning network — Used by the Red Hat OpenStack Director to service DHCP to all hosts, provision each host, and
act as a proxy for external network access.
● Public API network — Used for:
○ Inbound Access
■ HTTP/HTTPS access to the Red Hat OpenStack Director
■ Optional - SSH access to the Red Hat OpenStack Director
○ Outbound Access
■ HTTP/HTTPS access for Red Hat Ceph storage, RHEL, and RHOSP subscriptions.
■ Used by the Red Hat OpenStack Director to run Tempest tests using the OpenStack public API.

Node type 802.1q tagging information


This solution separates different types of network traffic. This is accomplished by utilizing 802.1q VLAN tagging for the
different segments. The tables OpenStack node type to network 802.1q tagging, OpenStack compute node for
xSP and CSP profile to network 802.1q tagging, and Storage node type to network 802.1q tagging summarize this. This separation is
independent of network speed and is used by the 25GbE configuration.

Table 20. OpenStack node type to network 802.1q tagging


Network | Solution Admin Host (SAH) | OpenStack controller | Red Hat Ceph storage
External network VLAN for tenants (floating IP network) | Not connected | Connected, tagged | Not connected
iDRAC physical connection to the Management/OOB VLAN | Connected, untagged | Connected, untagged | Connected, untagged
Internal networks VLAN for tenants | Not connected | Connected, tagged | Not connected
Management/OOB network VLAN | Connected, tagged | Not connected | Not connected
Private API network VLAN | Connected, tagged | Connected, tagged | Not connected
Provisioning VLAN | Connected, tagged | Connected, untagged | Connected, untagged
Public API network VLAN | Connected, tagged | Connected, tagged | Not connected
Storage clustering VLAN | Not connected | Not connected | Connected, tagged
Storage network VLAN | Connected, tagged | Connected, tagged | Connected, tagged
Tenant tunnel network | Not connected | Connected, tagged | Not connected

Table 21. OpenStack compute node for xSP and CSP profile to network 802.1q tagging
Network | xSP OpenStack compute | CSP - OpenStack compute NFV
External network VLAN for tenants (floating IP network) | Not connected | Connected, tagged
iDRAC physical connection to the Management/OOB VLAN | Connected, untagged | Connected, untagged
Internal networks VLAN for tenants | Connected, tagged | Connected, tagged
Management/OOB network VLAN | Not connected | Not connected
Private API network VLAN | Connected, tagged | Connected, tagged
Provisioning VLAN | Connected, untagged | Connected, untagged
Public API network VLAN | Not connected | Not connected
Storage clustering VLAN | Not connected | Not connected
Storage network VLAN | Connected, tagged | Connected, tagged
Tenant tunnel network | Connected, tagged | Connected, tagged

Table 22. Storage node type to network 802.1q tagging


Network | Dell EMC Unity | Dell EMC SC series storage Enterprise Manager | Dell EMC SC series storage array
External network VLAN for tenants (floating IP network) | Connected, tagged | Not connected | Not connected
iDRAC physical connection to the Management/OOB VLAN | Not connected | Not connected | Not connected
Internal Networks VLAN for tenants | Connected, tagged | Not connected | Not connected
Management/OOB network VLAN | Not connected | Not connected | Not connected
Provisioning VLAN | Not connected | Not connected | Not connected
Private API network VLAN | Not connected | Not connected | Not connected
Public API network VLAN | Not connected | Connected, untagged | Not connected
Storage network VLAN | Connected, untagged | Connected, untagged | Connected, untagged
Storage clustering VLAN | Not connected | Not connected | Not connected
Tenant tunnel network | Not connected | Not connected | Not connected

Table 23. OpenStack node types for HCI profile to network 802.1q tagging
Network HCI OpenStack
External network VLAN for tenants (floating IP network) Not connected Connected, tagged
iDRAC physical connection to the Management/OOB Connected, untagged Connected, untagged
VLAN
Internal networks VLAN for tenants Connected, tagged Connected, tagged
Management/OOB network VLAN Not connected Not connected
Private API network VLAN Connected, tagged Connected, tagged
Provisioning VLAN Connected, tagged Connected, untagged
Public API network VLAN Not connected Not connected
Storage clustering VLAN Not connected Not connected
Storage network VLAN Connected, tagged Connected, tagged

Tenant tunnel network Connected, untagged Connected, tagged

Solution Red Hat Ceph storage configuration


The Red Hat Ceph storage cluster provides data protection through replication, block device cloning, and snapshots. By default
the data is striped across the entire cluster, with three replicas of each data entity. The number of storage nodes in a single
cluster can scale to hundreds of nodes and many petabytes in size.
Red Hat Ceph storage considers the physical placement (position) of storage nodes within defined fault domains (rack, row, and
data center) when deciding how data is replicated. This reduces the probability that a given failure may result in the loss of more
than one data replica.
The Red Hat Ceph storage cluster services include:
● Ceph dashboard — Ceph web based monitoring tool hosted on the controllers.
● RADOS gateway — Object storage gateway.
● Object Storage Daemon (OSD) — Running on storage nodes, the OSD serves data to the Red Hat Ceph storage clients
from disks on the storage nodes. Generally, there is one OSD process per disk drive.
● Monitor (MON) — Running on controller nodes, the MON process is used by the Red Hat Ceph storage clients and internal
Red Hat Ceph storage processes, to determine the composition of the cluster and the location of data. There should be a
minimum of three MON processes for the Red Hat Ceph storage cluster. The total number of MON processes should be odd.
● Ceph Manager Daemon (ceph-mgr) — Running on controller nodes alongside the MON processes, it provides additional
monitoring and interfaces to external monitoring and management systems.
NOTE: If MON processes on controller nodes become a bottleneck, then additional MON processes can be added to the
cluster by using dedicated machines, or by starting MON processes on storage nodes. A custom Services engagement can
be arranged; please contact your Dell EMC sales representative for assistance.
The Storage Network VLAN is described in the Red Hat Ceph storage documentation as the public network. The Storage
Cluster Network VLAN is described in the Red Hat Ceph storage documentation as the cluster network.
A supported distribution by Red Hat with production level support of Ceph is used in this solution: Red Hat Ceph storage 4,
which also includes the Red Hat Ceph Storage Dashboard VM. The Red Hat Ceph Storage Dashboard also includes Red Hat
Ceph storage troubleshooting and servicing tools and utilities. Red Hat Ceph Storage Dashboard is installed on the controllers.
Note that:
● The SAH must have access to the controller node and storage nodes through the Private API Access VLAN in order to
manage Red Hat Ceph storage.
● The controller nodes must have access to the storage nodes through the Storage Network VLAN in order for the MON
processes on the Controller nodes to be able to query the Red Hat Ceph storage MON processes, for the cluster state and
configuration.
● The compute nodes must have access to the storage nodes through the Storage Network VLAN in order for the Red Hat
Ceph storage client on that node to interact with the storage nodes, OSDs, and the Red Hat Ceph storage MON processes.
● The storage nodes must have access to the Storage Network VLAN, as previously stated, and to the Storage Cluster
Network VLAN.

Solution with 25GbE/100GbE networking overview


Since the Solution is designed for a production environment, key OpenStack services are made highly available (HA) by
clustering the nodes. The networking is based upon 25GbE bonds for data networks, and the network switches are configured
for HA. The Out of Band Management (iDRAC) network is not HA, and is 1GbE. The 100GbE networking is used in the solution
for the user/tenant traffic.
For basic hardware configuration refer to Bill of materials.

Solution 25GbE/100GbE with Ceph rack layout


The Solution includes three (3) storage nodes, configured in a Red Hat Ceph storage cluster, which is tied into Cinder, Glance,
and Nova.

See Solution Admin Host (SAH) Dell EMC PowerEdge R650, Controller node Dell EMC PowerEdge R650, Compute node Dell
EMC PowerEdge R650 or Compute node Dell EMC PowerEdge R750 hardware configurations. The Solution includes:
● Node 1: Solution Admin Host (SAH) with Red Hat OpenStack Director
● Nodes 2 - 4: Dell EMC PowerEdge R650 OpenStack controllers
● Nodes 5 - 7: Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750 Nova compute nodes
● Nodes 8 - 10: Dell EMC PowerEdge R750 storage nodes
● Network Switches: Two (2) Dell EMC Networking S5232F-ON, and one (1) Dell EMC Networking S3048-ON
NOTE: The following rack is not to scale but shows the node types and usage.

Figure 4. Solution with 25GbE/100GbE, Red Hat Ceph storage cluster, optional SC series storage, Dell EMC Unity,
Dell EMC PowerStore, and Dell EMC PowerMax storage.

NOTE: The following rack is not to scale but shows the node types and usage.

Figure 5. Solution with 25GbE/100GbE and Dell EMC PowerEdge XE2420

Solution 25GbE with PowerFlex rack layout


The solution includes three (3) storage nodes, configured in a PowerFlex cluster, which is tied into Cinder, Glance, and Nova.
See the Solution Admin Host (SAH) Dell EMC PowerEdge R640, Controller node Dell EMC PowerEdge R640, Compute node
Dell EMC PowerEdge R640, and storage node VxFlex R740xd tables for hardware configurations. The solution
includes:
● Node 1: Solution Admin Host (SAH) with Red Hat OpenStack Director
● Nodes 2 - 4: Dell EMC PowerEdge R640 OpenStack controllers
● Nodes 5 - 7: Dell EMC PowerEdge R640 Nova compute nodes
● Nodes 8 - 10: VxFlex R740xd storage nodes
● Network Switches: Two (2) S5232F-ON, and one (1) S3048-ON
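The sketch below illustrates one way to confirm that the PowerFlex backend is exposed through Cinder after deployment. The volume type name powerflex is a hypothetical example; use the type name defined in your environment.

    # List Cinder volume types and check for an assumed PowerFlex-backed type.
    import subprocess

    def volume_types():
        out = subprocess.run(
            ["openstack", "volume", "type", "list", "-f", "value", "-c", "Name"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in out.stdout.splitlines() if line.strip()]

    types = volume_types()
    print("Available volume types:", ", ".join(types) or "(none)")
    print("Assumed 'powerflex' type present:", "powerflex" in types)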
NOTE: The following rack is not to scale but shows the node types and usage.

Figure 6. Solution with 25GbE/100GbE, PowerFlex cluster, optional SC series storage, Dell EMC Unity storage, Dell
EMC PowerStore, and Dell EMC PowerMax.

Solution 25GbE/100GbE network configuration


The network for the Dell Technologies Reference Architecture for Red Hat OpenStack Platform has been designed to support
production-ready servers with a highly available network configuration.
The node type determines how the switches are configured to deliver the different networks. Table 11: OpenStack node
type to network 802.1q tagging and Table 13: Storage node type to network 802.1q tagging map the networks to the node
types, as illustrated in the sketch below.
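As an illustration of how node type maps to tagged networks, the sketch below models the mapping as a simple data structure and prints the VLANs a switch port for each node type would carry. The network lists and VLAN IDs are hypothetical examples; the authoritative mapping is in the 802.1q tagging tables referenced above.

    # Derive per-node-type tagged VLAN lists from a node-type-to-network mapping.
    # Network membership and VLAN IDs below are illustrative only.
    NODE_NETWORKS = {
        "controller": ["Private API Access", "Storage Network", "Tenant", "External"],
        "compute": ["Private API Access", "Storage Network", "Tenant"],
        "storage": ["Storage Network", "Storage Cluster Network"],
    }

    VLAN_IDS = {
        "Private API Access": 140,
        "Storage Network": 170,
        "Storage Cluster Network": 180,
        "Tenant": 201,
        "External": 191,
    }

    for node_type, networks in NODE_NETWORKS.items():
        tagged = sorted(VLAN_IDS[name] for name in networks)
        print(f"{node_type:<12} tagged VLANs: {tagged}")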
For the CSP Profile without OVS Offload and with Ceph storage, the logical network is:

Figure 7. 25GbE/100GbE cluster network logical architecture for CSP profile

For the CSP Profile with OVS Offload and Ceph storage, the logical network is:

Figure 8. 25GbE/100GbE cluster network logical architecture for CSP OVS Offload Ceph storage profile

For the CSP Profile with PowerFlex:

Figure 9. 25GbE/100GbE cluster network logical architecture for CSP and PowerFlex profile

For the HCI profile with 25GbE/100GbE cluster network:

Figure 10. 25GbE/100GbE cluster network logical architecture for HCI profile

For the Edge profile with 25GbE cluster network:

Figure 11. 25GbE cluster network logical architecture for Edge profile

Chapter 12: Bill of materials (BOM)
This guide provides the bill of materials information necessary to purchase the proper hardware to deploy the Dell Technologies
Reference Architecture for Red Hat OpenStack Platform.

NOTE: For cables, racks, and power, please contact your Dell EMC support representative.

Topics:
• Nodes overview
• Bill of Materials for Dell EMC PowerEdge R-Series solution
• Bill of Materials for Dell EMC PowerEdge R-Series — DCN
• Subscriptions and network switches in the solution

Nodes overview
The minimum hardware needed is:
● One Solution Admin Host (SAH)
● Three controller nodes
● Three compute nodes
● Three storage servers
Please consult with your Dell EMC sales representative to ensure proper preparation and submission of your hardware and
software orders.

Bill of Materials for Dell EMC PowerEdge R-Series solution

Base configuration
● One Dell EMC PowerEdge R650 SAH node
● Three Dell EMC PowerEdge R650 controller nodes
● Three Dell EMC PowerEdge R750 storage nodes
NOTE: Three compute nodes may consist of:
● Three Dell EMC PowerEdge R650 compute nodes or
● Three Dell EMC PowerEdge R750 compute nodes

Table 24. Solution Admin Host (SAH) Dell EMC PowerEdge R650
Machine function SAH node
Platform Dell EMC PowerEdge R650 (one qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Broadcom Gigabit Ethernet BCM5720
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 1117 GB 10k SAS 12Gbps

Storage controller 1 x PERC H755 Front
RAID RAID 10

Table 25. Controller node Dell EMC PowerEdge R650


Machine function Controller nodes
Platform Dell EMC PowerEdge R650 (three qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (Minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Broadcom Gigabit Ethernet BCM5720
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 600 GB 10k SAS 12Gbps
Storage Controller 1 x PERC H755 Front
RAID RAID 10

Table 26. Storage node Dell EMC PowerEdge R750


Machine function Storage nodes
Platform Dell EMC PowerEdge R750 (three qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Broadcom Gigabit Ethernet BCM5720
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 1117GB 10k SAS 12Gbps
Storage controller BOSS-S2
Non-RAID HBA355i Front
RAID RAID 1 on BOSS-S2

Table 27. Compute node Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750
Machine function Compute nodes
Platform Dell EMC PowerEdge R650 (three qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 1117 GB 10k SAS 12Gbps
Storage controller 1 x PERC H755 Front
RAID RAID 10

NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.



Bill of Materials for Dell EMC PowerEdge R-Series — DCN
Up to 20 Dell EMC PowerEdge R-Series compute servers are supported per DCN site in any combination of the following:
● Dell EMC PowerEdge XR11
● Dell EMC PowerEdge XR12
● Dell EMC PowerEdge XE2420

Edge compute node configuration with 25GbE networking


Table 28. Compute node Dell EMC PowerEdge XR11
Platform Dell EMC PowerEdge XR11
CPU 1 x Intel(R) Xeon(R) Gold 6338N CPU @ 2.20GHz
RAM (minimum) 192GB DDR-4 2666 MHz
LOM Broadcom Adv Quad 25Gb Ethernet
Add-in network 1 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 4 x 446GB SATA
Storage controller PERC 755 Adapter
RAID RAID 10

Table 29. Compute node Dell EMC PowerEdge XR12


Platform Dell EMC PowerEdge XR12
CPU Intel(R) Xeon(R) Gold 6338N CPU @ 2.20GHz
RAM (minimum) 192GB DDR-4 2666 MHz
LOM Broadcom Adv Quad 25Gb Ethernet
Add-in network 1 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 4 x 446GB SATA
Storage controller PERC 755 Adapter
RAID RAID 10

Table 30. Compute node Dell EMC PowerEdge XE2420


Platform Dell EMC PowerEdge XE2420
CPU Intel(R) Xeon(R) Gold 6238 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
NIC Mezzanine 1 x Intel ® XXV710 DP OCP 25GbE DA/SFP+
Add-in network 3 x Intel ® XXV710 DP 25GbE DA/SFP+ Adapter
Disk 4 x 900GB 10k SAS 12Gbps
Storage controller PERC H740P controller
RAID RAID 10



NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.

Subscriptions and network switches in the solution


A Dell EMC sales representative will determine the correct software subscriptions needed for the Dell Technologies Reference
Architecture for Red Hat OpenStack Platform, as well as the Dell EMC Networking OS10 subscriptions.
Required subscriptions:
● Red Hat OpenStack Platform
● Red Hat Ceph storage
● Dell EMC Networking OS10
● Red Hat Satellite - Optional
NOTE: Please contact your Dell EMC sales representative.

Default network switch - Dell EMC Networking S3048-ON switch


Table 31. S3048-ON switch
Product Description
S3048-ON 48 line-rate 1000BASE-T ports, 4 line-rate 10GbE SFP+ ports
(one qty)
Redundant power supplies ● AC power supply or
● DC power supply
Fans ● Fan module I/O panel to PSU airflow or
● Fan module PSU to I/O panel airflow
Validated operating systems Dell EMC Networking OS10

Dell EMC Networking S4048-ON optional switch


Table 32. S4048-ON switch
Product Description
S4048-ON 48 x 10GbE SFP+, 6x QSFP+ (one qty) - optional
Redundant power supplies ● AC power supply or
● DC power supply
Fans ● Fan module I/O panel to PSU airflow or
● Fan module PSU to I/O panel airflow
Validated operating systems Dell EMC Networking OS10

Dell EMC Networking S5232F-ON switch


Table 33. S5232F-ON switch
Product Description
S5232F-ON 100 GbE, 40 GbE, and 25 GbE (two qty)
Redundant power supplies ● AC power supply or
● DC power supply
Fans ● Fan module I/O panel to PSU airflow or
● Fan module PSU to I/O panel airflow
Validated operating systems Dell EMC Networking OS10

Dell EMC Networking S5224F-ON switch


Table 34. S5224F-ON switch
Product Description
S5224F-ON 100 GbE, 40 GbE, and 25 GbE (two qty)
Redundant power supplies ● AC power supply or
● DC power supply
Fans ● Fan module I/O panel to PSU airflow or
● Fan module PSU to I/O panel airflow
Validated operating systems Dell EMC Networking OS10



Chapter 13: Bill of materials - legacy Dell EMC servers
This appendix provides the bill of materials information for supported legacy Dell EMC servers, necessary to purchase the proper
hardware to deploy the Dell Technologies Reference Architecture for Red Hat OpenStack Platform.

NOTE: For cables, racks, and power, please contact your Dell EMC support representative.

Topics:
• Bill of Materials for Dell EMC PowerEdge R-Series - Mellanox
• Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel NICs
• Bill of Materials for Dell EMC PowerEdge R740xd — PowerFlex
• Bill of Materials for Dell EMC PowerEdge R-Series solution — Hyper-Converged Infrastructure

Bill of Materials for Dell EMC PowerEdge R-Series - Mellanox

The base Dell EMC PowerEdge R-Series solution is comprised of:
● One Dell EMC PowerEdge R640 SAH node
● Three Dell EMC PowerEdge R640 controller nodes
● Three Dell EMC PowerEdge R740xd storage nodes
NOTE: Three compute nodes may consist of:
● Three Dell EMC PowerEdge R640 compute nodes or
● Three Dell EMC PowerEdge R740xd compute nodes or
● Three Dell EMC PowerEdge R6515 compute nodes or
● Three Dell EMC PowerEdge R7515 compute nodes

Base configuration — Mellanox


Table 35. Solution Admin Host (SAH) Dell EMC PowerEdge R640
Machine function SAH node
Platform Dell EMC PowerEdge R640 (one qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
Disk 8 x 600 GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 36. Controller node Dell EMC PowerEdge R640


Machine function Controller nodes
Platform Dell EMC PowerEdge R640 (three qty)

CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (Minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network ● 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
● 2 x Mellanox 100GbE 2P ConnectX5 SFP adapter (used for CSP Profile)
Disk 8 x 600 GB 10k SAS 12Gbps
Storage Controller PERC H740 Mini controller
RAID RAID 10

Table 37. Storage node Dell EMC PowerEdge R740xd


Machine function Storage nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
Disk
● Option 1 (OSD and Journal Drives separate): Front drives: 12 x 2.4TB 10K SAS; Mid Bay drives: 4 x 2.4TB 10K SAS; Flex Bay drives: 4 x 400GB SAS SSD
● Option 2 (OSD and Journal colocated): Front drives: 24 x 1.8TB 10K SAS; Mid Bay drives: 4 x 1.8TB 10K SAS; Flex Bay drives: 4 x 1.8TB 10K SAS
● Option 3 (NVMe): Front drives: 12 x 1.2TB Intel NVMe and 2 x 1.8TB 10K SAS
Storage controller
● Options 1 and 2: HBA 330; BOSS Cntrl + 2 M.2 240G,R1,FH
● Option 3: PERC H740 Mini controller
RAID Operating System on BOSS in RAID 1 configuration; pass through each data disk

Table 38. Compute node Dell EMC PowerEdge R640


Machine function Compute nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network
● Option 1 (xSP): 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
● Option 2 (CSP): 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter and 2 x Mellanox 100GbE 2P ConnectX5 SFP adapter
Disk 8 x 600 GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10



Table 39. Compute node Dell EMC PowerEdge R740xd
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel ® Xeon ® Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Intel ® X710 10GbE SFP+
Add-in network 2 x Intel ® XXV710 DP 25GbE DA/SFP+ Adapter
Disk* 8-24 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

NOTE: * When choosing Dell EMC PowerEdge R740xd as compute nodes, all nodes must have the same number of disks.

Table 40. Compute node Dell EMC PowerEdge R6515


Machine function Compute nodes
Platform Dell EMC PowerEdge R6515 (three qty)
CPU 1 x AMD EPYC 7702P 64-Core Processor @ 2000 MHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Broadcom Gigabit Ethernet BCM5720
NIC mezzanine 1 x Broadcom Adv. Dual 25Gb Ethernet
Add-in network 2 x Mellanox GbE 2P ConnectX5 SFP Adapter
Disk 6 x 900GB 10k SAS 12Gbps
Storage controller PERC H730 Mini controller
RAID RAID 10

Table 41. Compute node Dell EMC PowerEdge R7515


Machine function Compute nodes
Platform Dell EMC PowerEdge R7515 (three qty)
CPU 1 x AMD EPYC 7742 64-Core Processor @ 2250 MHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
NIC Mezzanine 1 x Broadcom Adv. Dual 25Gb Ethernet
Add-in network
● Option 1 (Broadcom): 3 x Broadcom Adv. Dual 25Gb Ethernet adapter
● Option 2 (Intel): 4 x Intel XXV710 DP 25GbE DA/SFP+ adapter
● Option 3 (Mellanox): 4 x Mellanox 100GbE 2P ConnectX5 SFP adapter
Disk 10 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.



Base configuration - Broadcom, Mellanox, or Intel NICs in Dell EMC PowerEdge R7515 compute nodes
The base Dell EMC PowerEdge R-Series solution is comprised of:
● Three Dell EMC PowerEdge R7515 compute nodes
● Three Dell EMC PowerEdge R740xd storage nodes

Table 42. Compute node Dell EMC PowerEdge R7515


Machine function Compute nodes
Platform Dell EMC PowerEdge R7515 (three qty)
CPU 1 x AMD EPYC 7742 64-Core Processor @ 2250 MHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
NIC Mezzanine 1 x Broadcom Adv. Dual 25Gb Ethernet
Add-in network
● Option 1 (Broadcom): 3 x Broadcom Adv. Dual 25Gb Ethernet adapter
● Option 2 (Intel): 4 x Intel XXV710 DP 25GbE DA/SFP+ adapter
● Option 3 (Mellanox): 4 x Mellanox 100GbE 2P ConnectX5 SFP adapter
Disk 10 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 43. Storage node Dell EMC PowerEdge R740xd


Machine function Storage nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
Disk
● Option 1 (OSD and Journal Drives separate): Front drives: 12 x 2.4TB 10K SAS; Mid Bay drives: 4 x 2.4TB 10K SAS; Flex Bay drives: 4 x 400GB SAS SSD
● Option 2 (OSD and Journal colocated): Front drives: 24 x 1.8TB 10K SAS; Mid Bay drives: 4 x 1.8TB 10K SAS; Flex Bay drives: 4 x 1.8TB 10K SAS
● Option 3 (NVMe): Front drives: 12 x 1.2TB Intel NVMe and 2 x 1.8TB 10K SAS
Storage controller
● Options 1 and 2: HBA 330; BOSS Cntrl + 2 M.2 240G,R1,FH
● Option 3: PERC H740 Mini controller
RAID Operating System on BOSS in RAID 1 configuration; pass through each data disk

NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.



Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel® NICs
The base Dell EMC PowerEdge R-Series solution is comprised of:
● One Dell EMC PowerEdge R640 SAH node
● Three Dell EMC PowerEdge R640 controller nodes
● Three Dell EMC PowerEdge R740xd compute nodes
● Three Dell EMC PowerEdge R740xd storage nodes

Base configuration — Intel® NICs


Table 44. Solution Admin Host (SAH) Dell EMC PowerEdge R640
Machine function SAH node
Platform Dell EMC PowerEdge R640 (one qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10 GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600 GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 45. Controller node Dell EMC PowerEdge R640


Machine function Controller nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10 GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600 GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 46. Compute node Dell EMC PowerEdge R740xd


Machine function Compute nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM N/A
Add-in network
● Option 1 (xSP): 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
● Option 2 (CSP): 4 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600 GB 10k SAS 12Gbps

Storage controller PERC H740 Mini controller
RAID RAID 10

Table 47. Storage node Dell EMC PowerEdge R740xd


Machine function Storage nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk
● Option 1 (OSD and Journal Drives separate): Front drives: 12 x 2.4 TB 10K SAS; Mid Bay drives: 4 x 2.4 TB 10K SAS; Flex Bay drives: 4 x 400 GB SAS SSD
● Option 2 (OSD and Journal colocated): Front drives: 12 x 2.4 TB 10K SAS; Mid Bay drives: 4 x 1.8 TB 10K SAS; Flex Bay drives: 4 x 1.8 TB 10K SAS
● Option 3 (NVMe): Front drives: 12 x 1.2 TB Intel NVMe and 2 x 1.8 TB 10K SAS
Storage controller
● Options 1 and 2: HBA 330; BOSS Cntrl + 2 M.2 240G,R1,FH
● Option 3: PERC H740 Mini controller
RAID Operating System on BOSS in RAID 1 configuration; pass through each data disk

NOTE: When using Intel ® NICs, Open vSwitch (OVS) hardware offloading is not supported. All other NFV features
documented in this Reference Architecture Guide are supported. Be sure to consult your Dell EMC account representative
before changing the recommended hardware configurations.

Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel NICs

Table 48. Solution Admin Host (SAH) Dell EMC PowerEdge R640
Machine function SAH node
Platform Dell EMC PowerEdge R640 (one qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10



Table 49. Controller node Dell EMC PowerEdge R640
Machine function Controller nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 50. Compute node Dell EMC PowerEdge R740


Machine function Compute nodes
Platform Dell EMC PowerEdge R740 (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network
● Option 1 (xSP): 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
● Option 2 (CSP): 4 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 51. Storage node Dell EMC PowerEdge R740xd


Machine function Storage nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk
● Option 1 (OSD and Journal Drives separate): Front drives: 12 x 2.4TB 10K SAS; Mid Bay drives: 4 x 2.4TB 10K SAS; Flex Bay drives: 4 x 400GB SAS SSD
● Option 2 (OSD and Journal colocated): Front drives: 24 x 1.8TB 10K SAS; Mid Bay drives: 4 x 1.8TB 10K SAS; Flex Bay drives: 4 x 1.8TB 10K SAS
● Option 3 (NVMe): Front drives: 12 x 1.2TB Intel NVMe and 2 x 1.8TB 10K SAS
Storage controller
● Options 1 and 2: HBA 330; BOSS Cntrl + 2 M.2 240G,R1,FH
● Option 3: PERC H740 Mini controller
RAID Operating System on BOSS in RAID 1 configuration; pass through each data disk

NOTE: When using Intel ® NICs, Open vSwitch (OVS) hardware offloading is not supported. All other NFV features
documented in this Reference Architecture Guide are supported. Be sure to consult your Dell EMC account representative
before changing the recommended hardware configurations.

Bill of Materials for Dell EMC PowerEdge R740xd — PowerFlex

The base Dell EMC PowerEdge R-Series solution is comprised of:
● One Dell EMC PowerEdge R640 SAH node
● Three Dell EMC PowerEdge R640 controller nodes
● Three Dell EMC PowerEdge R640 compute nodes
● Three Dell EMC VxFlex Ready Nodes storage nodes

Base configuration - PowerFlex


Table 52. Solution Admin Host (SAH) Dell EMC PowerEdge R640
Machine function SAH node
Platform Dell EMC PowerEdge R640 (one qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage Controller PERC H740 Mini controller
RAID RAID 10

Table 53. Controller node Dell EMC PowerEdge R640


Machine function Controller nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network ● 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
● 2 x Mellanox 100GbE 2P ConnectX5 SFP adapter (used for CSP Profile)
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10



Table 54. Compute node Dell EMC PowerEdge R640
Machine function Compute nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Mellanox 25GbE 2P ConnectX4LX SFP RNDC
Add-in network
● Option 1 (xSP): 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
● Option 2 (CSP): 1 x Mellanox 25GbE 2P ConnectX4LX SFP adapter and 2 x Mellanox 100GbE 2P ConnectX5 SFP adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 55. Storage node Dell EMC VxFlex R740xd


Machine function Storage nodes
Platform Dell EMC VxFlex R740xd (three qty)
CPU Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM 1 x Intel ® X710 10GbE SFP+
Add-in network 2 x Mellanox 25GbE 2P ConnectX4LX SFP adapter
Disk ● OSD and Journal colocated
● Front drives: 24 x 223GB SATA SSD
Storage controller ● HBA 330
● BOSS Cntrl + 2 M.2 240G,R1,FH
RAID ● Operating System on BOSS in RAID 1 configuration
● Pass through each data disk

NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.

Bill of Materials for Dell EMC PowerEdge R-Series solution — Hyper-Converged Infrastructure

This section provides the bill of materials information necessary to purchase the proper hardware to deploy the Dell Technologies
Reference Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform.
The base Dell EMC PowerEdge R-Series HCI solution is comprised of:
● One Dell EMC PowerEdge R640 SAH node
● Three Dell EMC PowerEdge R640 controller nodes
● Three Dell EMC PowerEdge R740xd compute/storage nodes



Base configuration — Intel® NICs
Table 56. Solution Admin Host (SAH) Dell EMC PowerEdge R640
Machine function SAH node
Platform Dell EMC PowerEdge R640 (one qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 57. Controller node Dell EMC PowerEdge R640


Machine function Controller nodes
Platform Dell EMC PowerEdge R640 (three qty)
CPU Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk 8 x 600GB 10k SAS 12Gbps
Storage controller PERC H740 Mini controller
RAID RAID 10

Table 58. Hyper-Converged Infrastructure node Dell EMC PowerEdge R740xd


Machine function Storage nodes
Platform Dell EMC PowerEdge R740xd (three qty)
CPU Intel Xeon Gold 6230 CPU @ 2.10GHz
RAM (minimum) 192GB DDR-4 2933 MHz
LOM N/A
Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+ adapter
Disk
● Option 1 (OSD and Journal Drives separate): Front drives: 12 x 2.4TB 10K SAS; Mid Bay drives: 4 x 2.4TB 10K SAS; Flex Bay drives: 4 x 400GB SAS SSD
● Option 2 (OSD and Journal colocated): Front drives: 24 x 1.8TB 10K SAS; Mid Bay drives: 4 x 1.8TB 10K SAS; Flex Bay drives: 4 x 1.8TB 10K SAS
● Option 3 (NVMe): Front drives: 12 x 1.2TB Intel NVMe and 2 x 1.8TB 10K SAS
Storage controller
● Options 1 and 2: HBA 330; BOSS Cntrl + 2 M.2 240G,R1,FH
● Option 3: PERC H740 Mini controller
RAID Operating System on BOSS in RAID 1 configuration; pass through each data disk

NOTE: NFV feature support is experimental in an HCI profile. All other NFV features documented in this Reference
Architecture Guide are supported. Be sure to consult your Dell EMC account representative before changing the
recommended hardware configurations.



Chapter 14: References
Additional information can be found at the Knowledge Base for Dell Technologies Reference Architecture for Red Hat
OpenStack Platform. This page contains the latest copy of this guide as well as associated solution briefs and white
papers. It also contains information on related solutions, including network edge, storage, and Ironic bare metal solutions.
Please visit Dell EMC Ready and Reference Architectures for OpenStack Platform with General Solutions, or e-mail
openstack@dell.com.
For more information on Dell EMC Service Provider Solutions, visit https://www.delltechnologies.com/en-us/industry/telecom/index.htm.

Chapter 15: Glossary
API—Application Programming Interface is a specification that defines how software components can interact.
BMC/iDRAC Enterprise—Baseboard management controller. An on-board microcontroller that monitors the system for critical
events by communicating with various sensors on the system board, and sends alerts and log events when certain parameters
exceed their preset thresholds.
BOSS— The Boot Optimized Storage Solution (BOSS) enables customers to segregate operating system and data on server-
internal storage. This is helpful in the Hyper-Converged Infrastructure (HCI) and Software Defined Storage (SDS) arenas, to
separate operating system drives from data drives, and implement hardware RAID mirroring (RAID1) for OS drives.
Cloud computing—See http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf. Cloud computing is a model for
enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management
effort or service provider interaction.
Cluster—A set of servers dedicated to OpenStack that can be attached to multiple distribution switches.
Compute node—The hardware configuration that best supports the hypervisor server or Nova compute roles.
DPDK—The Data Plane Development Kit (DPDK) eliminates packet buffer copies.
DevOps— Development Operations (DevOps) is an operational model for managing data centers using improved automated
deployments, shortened lead times between fixes, and faster mean time to recovery. See https://en.wikipedia.org/wiki/
DevOps.
DIMM—Dual In-line Memory Module.
DNS— The domain name system (DNS) defines how Internet domain names are located, and translated into Internet Protocol
(IP) addresses.
FQDD— A fully qualified device descriptor (FQDD) is a method used to describe a particular component within a system or
subsystem, and is used for system management and other purposes.
FQDN— A fully qualified domain name (FQDN) is the portion of an Internet Uniform Resource Locator (URL) that fully
identifies the server to which an Internet request is addressed. The FQDN includes the second-level domain name, such as
"dell.com", and any other levels as required.
GUI— Graphical User Interface. A visual interface for human interaction with the software, taking inputs and generating easy to
understand visual outputs.
Hypervisor—Software that runs virtual machines (VMs).
IaaS—Infrastructure as a Service.
Infrastructure node—Systems that handle the control plane and deployment functions.
ISV—Independent Software Vendor.
JBOD—Just a Bunch of Disks.
LAG—Link Aggregation Group.
LOM—LAN on motherboard.
LVM—Logical Volume Management.
ML2—The Modular Layer 2 plug-in is a framework that allows OpenStack to utilize different layer 2 networking technologies.
NFS— The Network File System (NFS) is a distributed filesystem that allows a computer user to access, manipulate, and store
files on a remote computer, as though they resided on a local file directory.
NIC—Network Interface Card.
Node—One of the servers in the cluster.
NUMA—Non-Uniform Memory Access.
Overcloud—The functional cloud that is available to run guest VMs and workloads.
Pod—An installation comprised of three racks, and consisting of servers, storage, and networking.

REST—Representational State Transfer (also ReST). Relies upon a stateless, client-server, cacheable communications protocol
to access the API.
RHOSP—Red Hat OpenStack Platform.
RPC—Remote Procedure Call.
SAH—The Solution Admin Host (SAH) is a physical server that supports the VMs, including the Undercloud, needed for the
cluster to be deployed and operated.
SDS—Software-defined storage (SDS) is an approach to computer data storage in which software is used to manage policy-
based provisioning and management of data storage, independent of the underlying hardware.
SDN—Software-defined Network (SDN) is where the software will define, create, use and destroy different networks as
needed.
Stamp—A stamp is the collection of all servers and network switches in the solution.
Storage Node—The hardware configuration that best supports SDS functions such as Red Hat Ceph storage.
ToR—Top-of-rack switch/router.
U—A unit of measure used in describing the size of a server, for example 1U or 2U. A "U" is equal to 1.75 inches in height.
Undercloud—The undercloud is the system used to control, deploy, and monitor the Overcloud; it is a single node OpenStack
deployment completely under the administrator's control. The undercloud is not HA configured.
VLT—A Virtual Link Trunk (VLT) is the combined port channel between an attached device (ToR switch) and the VLT peer
switches.
VLTi—A Virtual Link Trunk Interconnect (VLTi) is an interconnect used to synchronize states between the VLT peer switches.
Both endpoints must be the same speed, i.e. 40Gb → 40Gb; 1G interfaces are not supported.
VM—Virtual Machine - a simulation of a computer system.
