H18914: Dell Technologies Reference Architecture for Red Hat OpenStack Platform 16.1
Dell Technologies
October 2021
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2017 – 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Overview...................................................................................................................... 6
Dell EMC Reference Architecture Guide 16.1............................................................................................................... 6
New features.................................................................................................................................................................. 6
Key benefits.................................................................................................................................................................... 6
Hardware options.................................................................................................................................................................7
Networking and network services................................................................................................................................... 7
JetPack automation toolkit............................................................................................................................................... 8
OpenStack architecture.....................................................................................................................................................8
Network components....................................................................................................................................................... 22
Server nodes.................................................................................................................................................................22
Leaf switches............................................................................................................................................................... 23
Spine switches............................................................................................................................................................. 23
Layer-2 and Layer-3 switching................................................................................................................................ 23
VLANs.............................................................................................................................................................................23
Management network services................................................................................................................................ 24
Dell EMC OpenSwitch solution................................................................................................................................ 24
Bill of Materials for Dell EMC PowerEdge R-Series solution .................................................................................53
Bill of Materials for Dell EMC PowerEdge R-Series — DCN................................................................................. 55
Edge compute node configuration with 25GbE networking.............................................................................55
Subscriptions and network switches in the solution................................................................................................ 56
Default network switch - Dell EMC Networking S3048-ON switch...............................................................56
Dell EMC Networking S4048-ON optional switch.............................................................................................. 56
Dell EMC Networking S5232F-ON switch............................................................................................................ 56
Dell EMC Networking S5224F-ON switch............................................................................................................ 57
1
Overview
Topics:
• Dell EMC Reference Architecture Guide 16.1
• Hardware options
• Networking and network services
• JetPack automation toolkit
• OpenStack architecture
New features
● Added support for the Dell EMC PowerEdge R650, a dual-socket, 1U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/poweredge-r650-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge R750, a dual-socket, 2U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/poweredge-R750-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge XR11, a single-socket, 1U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/xr11-spec-sheet.pdf for more information.
● Added support for the Dell EMC PowerEdge XR12, a single-socket, 2U rack server with 3rd Generation Intel® Xeon® Scalable processors.
Reference https://i.dell.com/sites/csdocuments/Product_Docs/en/xr12-spec-sheet.pdf for more information.
● Added support for Dell EMC PowerStore.
Reference https://www.delltechnologies.com/asset/en-us/products/storage/technical-support/h18143-dell-emc-powerstore-family-spec-sheet.pdf for more information.
● Support for the latest release of Red Hat OpenStack Platform 16.1, including the latest updates.
● Support for Red Hat Enterprise Linux (RHEL) 8.2, including the latest updates.
Key benefits
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform offers several benefits to help service providers and high-end enterprises rapidly implement Dell EMC hardware and software:
● Ready-to-use solution: The Reference Architecture Guide has been fully engineered, validated, tested in Dell Technologies
laboratories and documented by Dell Technologies. This decreases your investment and deployment risk, and it enables
faster deployment time.
● Long lifecycle deployment: Dell EMC PowerEdge R-Series, VxFlex R740xd, Dell EMC PowerEdge XR11, Dell EMC PowerEdge
XR12, and Dell EMC PowerEdge XE2420 servers, recommended in the Reference Architecture Guide, include long-life Intel
Xeon processors which reduce your investment risk and protect your investment for the long-term.
● World-class professional services: The Reference Architecture Guide includes Dell Technologies professional services that
span consulting, deployment, and design support to guide your deployment needs.
● Customizable solution: The Reference Architecture Guide is prescriptive, but it can be customized to address each
customer’s unique virtual network function (vNF) or other workload requirements.
● Co-engineered and Integrated: OpenStack depends upon Linux for performance, security, hardware enablement, networking,
storage, and other primary services. Red Hat OpenStack Platform delivers an OpenStack distribution with the proven performance,
stability, and scalability of RHEL 8.2, enabling you to focus on delivering the services your customers want, instead of focusing
on the underlying operating platform.
● Deploy with confidence: Red Hat OpenStack Platform provides hardened and stable branch releases of OpenStack and Linux. It is a
long-life release product supported by Red Hat for a four-year “production phase” life cycle, well beyond the six-month release
cycle of unsupported, community OpenStack. Life-cycle support policies can be found at https://access.redhat.com/support/policy/updates/openstack/platform.
● Take advantage of broad application support: Red Hat Enterprise Linux, running as guest Virtual Machines (VM), provides
a stable application development platform with a broad set of ISV certifications. You can therefore rapidly build and deploy
your cloud applications.
● Avoid vendor lock-in: Move to open technologies while maintaining your existing infrastructure investments.
● Benefit from the world’s largest partner ecosystem: Red Hat has assembled the world’s largest ecosystem of certified
partners for OpenStack compute, storage, networking, ISV software, and services for deployments. This ensures the same
level of broad support and compatibility that customers enjoy today in the Red Hat Enterprise Linux ecosystem.
● Upgrades: Supports upgrades of Red Hat OpenStack Director-based installations.
● Bring security to the cloud: Rely upon the SELinux military-grade security and container technologies of Red Hat Enterprise
Linux to prevent intrusions and protect your data when running in public or private clouds. For more information regarding
security in the 16.1 release, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/security_and_hardening_guide/index.
Additional information pertaining to Red Hat vulnerabilities can be found at https://access.redhat.com/security/.
Hardware options
To reduce time spent on specifying hardware for an initial system, this Architecture Guide offers a full solution using validated
Dell EMC PowerEdge and Dell EMC VxFlex R740xd server hardware designed to allow a wide range of configuration options,
including optimized configurations for:
● Compute nodes (both Core and DCN)
● Hyper-Converged Infrastructure nodes
● Infrastructure nodes
● Storage nodes
Dell Technologies recommends starting with OpenStack software using components from this Reference Architecture Guide
- Version 16.1. These hardware and operations processes comprise a flexible foundation upon which to expand as your cloud
deployment grows, so your investment is protected.
As noted throughout this Architecture Guide - Version 16.1, Dell Technologies constantly adds capabilities to expand this
offering, and other hardware may be available.
● NIC bonding
● Redundant trunking top-of-rack (ToR) switches into core routers
This enables the Dell Technologies Reference Architecture for Red Hat OpenStack Platform to operate in a full production environment.
See Network architecture for guidelines. Detailed designs are available through Dell Technologies consulting services.
OpenStack architecture
While OpenStack has many configurations and capabilities, the primary components of the Dell Technologies Reference Architecture
for Red Hat OpenStack Platform 16.1 are deployed as containerized services.
NOTE: For a complete overview of the OpenStack software, visit Red Hat OpenStack Platform and the OpenStack Project.
2
BIOS and firmware compatibility
This chapter documents the versions of BIOS and firmware that are used to create this Dell Technologies Reference
Architecture for Red Hat OpenStack Platform.
Topics:
• Tested BIOS and firmware
Table 1. Dell EMC PowerEdge R650 tested BIOS and firmware versions
Product Version
BIOS 1.2.4
iDRAC with Lifecycle controller 4.40.29.00
Mellanox ConnectX-5 16.28.45.12
PERC H755 Front 52.14.0-3901
Table 2. Dell EMC PowerEdge R750 tested BIOS and firmware versions
Product Version
BIOS 1.2.4
iDRAC with Lifecycle controller 4.40.29.00
Mellanox ConnectX-6 - Used for NFV functions for Compute Role 22.27.61.06
Mellanox ConnectX-5 16.26.60.00
PERC H755 Front - Used for Compute Role 52.14.0-3901
HBA355i Front - Used for Ceph Storage Role 15.15.15.00
BOSS S2 - Used for Ceph Storage Role 2.5.13.4008
Table 3. Dell EMC PowerEdge XR11 tested BIOS and firmware versions
Product Version
BIOS 1.3.8
iDRAC with Lifecycle controller 4.40.35.00
Table 4. Dell EMC PowerEdge XR12 tested BIOS and firmware versions
Product Version
BIOS 1.3.8
iDRAC with Lifecycle controller 4.40.35.00
Mellanox ConnectX-5 16.26.60.00
PERC H755 Front - Used for Compute Role 52.14.0-3901
HBA355i Front - Used for Ceph Storage Role 15.15.15.00
BOSS S2 - Used for Ceph Storage Role 2.5.13.4008
Table 5. Dell EMC PowerEdge R640/Dell EMC PowerEdge R740xd tested BIOS and firmware versions
Product Version
BIOS 2.11.2
iDRAC with Lifecycle controller 5.00.00.00
Intel XXV710 NIC 20.0.17
Mellanox ConnectX-4 14.26.60.00
Mellanox ConnectX-5 16.26.60.00
PERC H730P Mini controller (Dell EMC PowerEdge R640) 25.5.8.001
PERC H740P Mini controller (Dell EMC PowerEdge R640) 51.14.0-3900
HBA330 Mini (Dell EMC PowerEdge R740xd) 16.17.01.00
BOSS-S1 (Dell EMC PowerEdge R740xd) 2.5.13.3024
Table 6. Dell EMC PowerEdge R6515/Dell EMC PowerEdge R7515 tested BIOS and firmware versions
Product Version
BIOS 2.0.3
iDRAC with Lifecycle controller 4.40.00.00
Intel XXV710 NIC 19.5.12
Mellanox ConnectX-5 16.27.61.20
Broadcom Adv. Dual 25Gb Ethernet 21.60.22.11
PERC H740P Mini controller (Dell EMC PowerEdge R640) 51.13.2-3714
Various factors can cause a mismatch between the required versions and the versions that are installed on the servers,
such as firmware updates applied after the server shipped, or a FRU replacement from the warehouse that carries a different
firmware version. For example, if you have replaced the server system board, the FRU's BIOS and iDRAC firmware versions will be
different.
You are required to verify that all server drivers, BIOS, and firmware meet the required versions, as published in the
Dell EMC VxFlex Ready Nodes driver and firmware matrix, which is located at https://www.dell.com/support/home/en-us/
product-support/product/scaleio-ready-node--poweredge-14g/docs, before deploying a server in the Dell EMC VxFlex
Ready Nodes environment.
To perform any updates needed to meet Dell EMC VxFlex Ready Nodes requirements, use the Dell EMC VxFlex Ready
Nodes hardware Update Bootable ISO ("Hardware ISO"). The Hardware ISO is based on the Dell EMC OpenManage
Deployment Toolkit (DTK). The DTK provides a framework of tools necessary for the configuration of Dell EMC VxFlex
Ready Nodes servers. For PowerFlex, a custom script has been injected, along with specific qualified BIOS/firmware update
packages.
The Hardware ISO has been designed to make the firmware update process consistent and simple. You can use it to update
firmware, BIOS, and configuration settings in two different ways, depending on how many Dell EMC VxFlex Ready Nodes
servers you are updating:
To update the firmware, BIOS, and configuration settings on a single server, use the iDRAC Virtual KVM. If several servers
in the Dell EMC VxFlex Ready Nodes deployment require updates, it is recommended that you use remote RACADM.
RACADM allows for the simultaneous update of the versions on multiple Dell EMC VxFlex Ready Nodes servers, with
minimal steps.
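For multi-node updates with remote RACADM, a minimal scripted sketch is shown below. It assumes remote RACADM is installed on the management workstation; the iDRAC addresses, credentials, and Dell Update Package path are placeholders, not values from this guide.

```python
#!/usr/bin/env python3
"""Sketch only: stage one firmware Dell Update Package (DUP) on several iDRACs
with remote RACADM. Hosts, credentials, and the DUP path are placeholders."""
import subprocess

IDRAC_HOSTS = ["192.168.110.21", "192.168.110.22", "192.168.110.23"]  # placeholder iDRAC IPs
USER, PASSWORD = "root", "calvin"        # replace with the real iDRAC credentials
DUP_FILE = "/tmp/BIOS_update.EXE"        # placeholder Dell Update Package

def update_firmware(host: str) -> int:
    """Queue a firmware update job on one iDRAC using 'racadm update -f'."""
    cmd = ["racadm", "-r", host, "-u", USER, "-p", PASSWORD, "update", "-f", DUP_FILE]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{host}: rc={result.returncode}")
    print(result.stdout.strip())
    return result.returncode

if __name__ == "__main__":
    failed = [h for h in IDRAC_HOSTS if update_firmware(h) != 0]
    if failed:
        print("Update failed on:", ", ".join(failed))
```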
Table 8. Dell EMC PowerEdge XE2420 tested BIOS and firmware versions
Product Version
BIOS 1.4.1
iDRAC with Lifecycle controller 4.40.10.00
Intel XXV710 NIC 19.5.12
PERC H740P Adapter 50.9.4-3025
Dell EMC SC, Unity, PowerMax, and PowerStore tested software and firmware versions lists the Dell EMC Storage Center software,
SC series storage firmware, Dell EMC Unity, Dell EMC PowerMax, and Dell EMC PowerStore versions that were tested for the Dell
Technologies Reference Architecture for Red Hat OpenStack Platform.
Table 9. Dell EMC SC, Unity, PowerMax, and PowerStore tested software and firmware versions
Product Version
Dell EMC SC storage center software 2016 R2 Build 16.2.1.228
SC series storage firmware 6.6.11.9
Dell EMC Unity 380F firmware 5.0.6.0.5.008
Dell EMC PowerMax 2000 firmware 5978.221.221
Dell EMC PowerStore 1000T 1.0.3.0.5.007
Dell EMC Network Switches tested firmware versions lists the default S3048-ON and optional S4048-ON management switch
firmware versions that were tested for the Dell Technologies Reference Architecture for Red Hat OpenStack Platform.
NOTE: Contact your Dell EMC sales representative for detailed parts lists.
Topics:
• Dell EMC PowerEdge R650 server
• Dell EMC PowerEdge R750 server
• Dell EMC PowerEdge XR11 server
• Dell EMC PowerEdge XR12 server
• Dell EMC PowerEdge R640 server
• Dell EMC PowerEdge R740xd
• Dell EMC PowerEdge R6515 server
• Dell EMC PowerEdge R7515 server
• Dell EMC VxFlex R740xd servers
• Dell EMC PowerEdge XE2420 server
Dell EMC PowerEdge XR12 server
The Dell EMC PowerEdge XR12 server, which is powered by 3rd Generation Intel® Xeon® Scalable processors, is a high-
performance, high-capacity server for demanding workloads at the edge. This is one of Dell Technologies' latest 2U single-
socket servers designed to run complex workloads using highly scalable memory, I/O, and network options, with up to eight
DDR4 DIMMs, up to three PCIe Gen4 enabled expansion slots, up to four storage bays, and four integrated 25 GbE LAN on
Motherboard (LoM) ports. It has a reduced form factor and is NEBS3 compliant, which makes it ideal for more challenging
deployment models at the far edge, where space and environmental conditions become more demanding.
The VxFlex R740xd is the platform of choice for Red Hat Ceph storage for this Architecture Guide - Version 16.1. This platform
has also been validated for compute and HCI roles.
4
Configuring PowerEdge R-Series hardware
This section describes manually configuring PowerEdge R-Series server hardware for the Dell Technologies Reference
Architecture for Red Hat OpenStack Platform:
● IPMI configuration
● BIOS configuration
● RAID configuration
Topics:
• Configuring the SAH node
• Configuring Overcloud nodes
iDRAC settings
NOTE: Duplicate these settings in the SAH node's BIOS configuration before deployment.
iDRAC Specification for SAH Nodes lists and describes iDRAC settings that need to be set in BIOS.
NOTE: Dell EMC Unity or SC series storage provides the Glance support through Cinder.
Dell EMC Manila driver framework (EMCShareDriver) delivers a shared filesystem in OpenStack. The plugin-based Dell EMC
Manila driver design is compatible with various plugins to control Dell EMC Unity storage products.
The Dell EMC Unity plugin manages the Dell EMC Unity storage system for shared filesystems.
The Dell EMC Unity driver communicates with the storage system through a REST API. Manila backends map one-to-one to Dell EMC
Unity storage systems; each Manila backend manages a single Unity storage system. See https://docs.openstack.org/manila/train/admin/emc_unity_driver.html.
Swift provides an object storage interface to VMs and other OpenStack consumers. Unlike block storage, where the guest is
provided a block device of a given format that is accessible within the cluster, object storage is not provided through the guest.
Object storage is generally implemented as an HTTP/HTTPS-based service through a web server. Client implementations within
the guest or external OpenStack clients would interact with Swift without any configuration required of the guest other than
providing the requisite network access. For example, a VM within OpenStack can put data into Swift, and later external clients
could pull that data for additional processing.
NOTE: Swift in this document refers to the Swift interfaces, not the Swift implementation, of the protocol.
2 Available through a Ceph cluster with the Swift API enabled or through a custom Professional Services engagement.
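As a minimal illustration of that workflow (a sketch only, assuming the openstacksdk client and a clouds.yaml entry named "overcloud"; the container and object names are placeholders), a client can push an object into Swift and another client can later retrieve it:

```python
import openstack

# "overcloud" is a placeholder clouds.yaml entry for the deployed cloud.
conn = openstack.connect(cloud="overcloud")

# A VM or external client puts data into object storage over HTTP/HTTPS.
conn.object_store.create_container(name="results")
conn.object_store.upload_object(
    container="results",
    name="run-0001/output.json",
    data=b'{"status": "complete"}',
)

# A different client can later pull the same object for additional processing.
payload = conn.object_store.download_object("run-0001/output.json", container="results")
print(payload)
```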
As with other OpenStack services, there are client and server components for each storage service. The server component
can be modified to use a particular type of storage rather than the default. For example, Cinder uses local disks as the storage
back-end by default. The Dell Technologies Reference Architecture for Red Hat OpenStack Platform modifies the default
configuration for these services.
All virtual machines will need a virtual drive that is used for the OS. Two options are available:
● Ephemeral disks
● Boot from volume or snapshot, hosted on Red Hat Ceph storage, Dell EMC Unity storage or SC series storage arrays.
Ephemeral disks are virtual drives that are created when a VM is created, and destroyed when the VM is removed. The virtual
drives can be stored on the local drives of the Nova host or on a shared file system, such as Ceph RADOS Block Device (RBD).
During the planning process, decide whether to place ephemeral disks on local or shared storage; a shared backend is required for live migration.
Boot from volume/snapshot will use one of the Cinder backends.
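The two options can be exercised through the OpenStack APIs; the sketch below uses the openstacksdk cloud layer with placeholder image, flavor, and network names, and assumes a Cinder back end is available for the boot-from-volume case.

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder clouds.yaml entry

# Ephemeral disk: the root disk is built from the image on the Nova host (or a
# shared RBD pool) and is destroyed when the instance is removed.
vm_ephemeral = conn.create_server(
    name="demo-ephemeral",
    image="rhel-8.2",          # placeholder image name
    flavor="m1.small",         # placeholder flavor name
    network="tenant-net",      # placeholder tenant network
    wait=True,
)

# Boot from volume: Nova asks Cinder to create a persistent root volume from
# the image, so the disk can outlive the instance.
vm_bfv = conn.create_server(
    name="demo-bfv",
    image="rhel-8.2",
    flavor="m1.small",
    network="tenant-net",
    boot_from_volume=True,
    volume_size=20,            # GB, placeholder size
    terminate_volume=False,    # keep the root volume when the instance is deleted
    wait=True,
)
print(vm_ephemeral.id, vm_bfv.id)
```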
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform includes alternate implementations of Cinder
that enable the cluster to fit many needs. Cinder has been validated using each of the back-ends independently. Multiple back-ends
have also been validated, consisting of two or all of the following:
● Local storage
● Red Hat Ceph storage
● Dell EMC Unity storage
Local storage
When using local storage, each compute node will host the ephemeral volumes associated with each virtual machine. Cinder
will utilize LVM volumes, shared by NFS, for independent volumes, utilizing the local storage subsystem. With the hardware
configuration for the compute nodes (see Compute node Dell EMC PowerEdge R740) using the eight 600 GB disks in a RAID 10,
there will be approximately 2 TB of storage available.
Red Hat Ceph storage
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform includes Red Hat Ceph storage, which is a
scale-out, distributed, software-defined storage system. Red Hat Ceph storage is used as backend storage for Nova, Cinder,
and Glance. Storage nodes run the Red Hat Ceph storage software, and compute nodes and controller nodes run the Red Hat
Ceph storage block client.
Red Hat Ceph storage also provides object storage for OpenStack VMs and other clients external to OpenStack. The object
storage interface is an implementation of:
● The OpenStack Swift RESTful API (basic data access model)
● The Amazon S3 RESTful API
The object storage interface is provided by Red Hat Ceph storage RADOS Gateway software running on the controller nodes.
Client access to the object storage is distributed across all controller nodes in order to provide High availability (HA) and I/O
load balancing.
Red Hat Ceph storage is containerized in RHOSP 16.1 (Mon, Mgr, Object Gateway, and Object Storage Daemon, or OSD). Each OSD has
an associated physical drive where the data is stored, and a journal where write operations are staged prior to being committed.
● When a client reads data from the Red Hat Ceph storage cluster the OSDs fetch the data directly from the drives.
● When a client writes data to the storage cluster, the OSDs write the data to their journals prior to committing the data.
OSD journals can be located in a separate partition on the same physical drive where the data is stored, or they can be located
on a separate high-performance drive, such as an SSD optimized for write-intensive workloads. For the Architecture Guide, a
recommended ratio of one SSD to four hard disks is used to achieve optimal performance. It should be noted that as of this
writing, using a greater ratio will result in server performance degradation.
In a cost-optimized solution, the greatest storage density is achieved by forgoing separate SSD journal drives and populating every
available physical drive bay with a high-capacity HDD.
In a throughput-optimized solution, a few drive bays can be populated with high performance SSDs that will host the journals for
the OSD HDDs. For example, in a Dell EMC PowerEdge R740xd system with 20 drive bays available for Red Hat Ceph storage,
four bays are used for SSD journal drives and 16 bays for HDD data drives. This is based upon the current industry guideline of a
4:1 HDD to SSD journal ratio.
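The 4-to-1 split follows directly from that guideline; the small helper below (a sketch, not part of any Dell EMC tooling) shows the arithmetic for an arbitrary bay count.

```python
def journal_layout(total_bays: int, hdd_per_ssd: int = 4) -> tuple[int, int]:
    """Split drive bays between SSD journal drives and HDD data drives,
    using the guideline of one SSD journal drive per `hdd_per_ssd` HDDs."""
    ssd_bays = total_bays // (hdd_per_ssd + 1)
    hdd_bays = ssd_bays * hdd_per_ssd
    return ssd_bays, hdd_bays

# Dell EMC PowerEdge R740xd example from the text: 20 bays -> 4 SSD + 16 HDD.
print(journal_layout(20))   # (4, 16)
```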
NOTE: Other storage options are available upon request from Professional Services.
The Dell EMC Unity platforms accelerate application infrastructure with All-Flash unified storage platforms that simplify
operations while reducing cost and datacenter footprint.
The following completely refreshed systems deliver the next-generation Dell EMC Unity XT family: no-compromise midrange storage,
built new from the ground up to deliver unified storage speed and efficiency, and built for a multi-cloud world.
● Dell EMC Unity 350F, 450F, 550F, 650F (This RA has validated the Unity 350F and the 550F platforms)
● Dell EMC Unity XT 380F,480F,680F,880F (This RA has validated the Unity XT 380F platform)
Reference https://www.delltechnologies.com/en-us/storage/unity.htm for more information about Dell EMC Unity storage.
Reference https://access.redhat.com/ecosystem/software/3172821 for more information about certifications for Unity.
Reference https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/dell-emc-unity-driver.html for more
information about Dell EMC Unity Cinder driver.
Reference https://docs.openstack.org/manila/train/admin/emc_unity_driver.html for more information about the Dell EMC
Unity Manila driver.
NOTE: Other storage options are available upon request from Professional Services.
Reference https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/
dell_storage_center_back_end_guide/index for more information.
Reference https://access.redhat.com/ecosystem/software/1610473 for more information about certification for Dell EMC SC
series storage.
NOTE: Other storage options are available upon request provided by Professional Services.
With end-to-end NVMe, storage class memory (SCM) for persistent storage, real-time machine learning, and up to 350 GB per
second data transfer, PowerMax delivers the speed to power your most critical workloads.
● VMAX2 Series: VMAX 40K, 20K, 10K
● VMAX3 Series: VMAX 400K, 200K, 100K
● VMAX All Flash Series: VMAX 250F, 450F and 850F
● PowerMax Series 2000 and 8000 (This RA has validated the PowerMax Series 2000 platforms)
Reference https://www.delltechnologies.com/en-us/storage/powermax.htm for more information about Dell EMC PowerMax
storage NVMe Data Storage.
Reference https://access.redhat.com/ecosystem/software/1572113 for more information about certifications for Dell EMC
VMAX and PowerMax.
Reference https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/dell-emc-powermax-driver.html for
more information about the Dell EMC PowerMax Cinder driver.
6
Network architecture
This Reference Architecture Guide supports consistency in rapid deployments through minimal network configuration.
Topics:
• Network architecture overview
• Infrastructure layouts
• Network components
Infrastructure layouts
● Core network infrastructure - The connectivity of aggregation switches to the core for external connectivity.
● Data network infrastructure - The server NICs, Leaf switches, and the aggregation switches.
● Management network infrastructure - The BMC management network, consisting of iDRAC ports and the out-of-band
management ports of the switches, is aggregated into a 1-rack unit (RU) S3048-ON switch in one of the three racks in the
cluster. This 1-RU switch in turn can connect to one of the aggregation or core switches to create a separate network with a
separate VLAN.
Network components
The data network is primarily composed of the ToR (top-of-rack) switches and the aggregation switches. The following component blocks make up
this network:
● Server nodes
● Leaf Switches
● Spine switches
● Layer-2 and Layer-3 Switching
● VLANs
● Management network services
● Dell EMC OpenSwitch solution
Server nodes
In order to create a highly-available solution, the network must be resilient to loss of a single network switch, NIC, or bad cable.
To achieve this, the network configuration uses bonding across the servers and switches.
There are several types (or modes) of bonding, but only one is recommended for the solution. The controller nodes, compute nodes,
Red Hat Ceph storage nodes, and Solution Admin Host (SAH) can use 802.3ad or LACP (mode = 4).
NOTE: Other modes, such as balance-rr (mode=0), balance-xor (mode=2), broadcast (mode=3), balance-tlb
(mode=5), and balance-alb (mode=6), are not supported. Please check with your technician for current support status
of active-backup (mode = 1).
All nodes' endpoints are terminated to switch ports that have been configured for LACP bonding mode across two S5232F-ON
ToR switches for 25GbE/100GbE configured with a Virtual Link Trunking interconnect (VLTi) across them.
Please contact your Dell EMC sales representative for other viable options.
A single port is an option when bonding is not required. However, it is neither used nor validated in the Dell Technologies
Reference Architecture for Red Hat OpenStack Platform. The need to eliminate single points of failure is taken into
consideration as part of the design, and this option has been eliminated wherever possible.
Please contact your Dell EMC sales representative for other configurations.
Leaf switches
Dell EMC’s recommended architecture uses VLT for HA between the two Leaf switches, which enables the servers to terminate
their Link Aggregation Group (LAG) interfaces (or bonds) into two different switches instead of one. This configuration enables
active-active bandwidth utilization and provides redundancy within the rack if one Leaf switch fails or requires maintenance. The
Dell EMC recommended Leaf switch for 10/25/100GbE connectivity is the S5232F-ON.
The Leaf switches are responsible for providing the different network connections such as tenant networks and storage
networks between the compute, controller, and storage nodes of the OpenStack deployment.
Spine switches
NOTE: Please contact your Dell EMC sales representative for aggregation switch recommendations.
VLANs
This Reference Architecture Guide implements at a minimum nine (9) separate Layer 2 VLANs:
● External network VLAN for tenants—Sets up a network that will support the floating IPs and default external gateway
for tenants and virtual machines. This connection is through a router external to the cluster.
● Internal networks VLAN for tenants—Sets up the backend networks for OpenStack Nova Compute and the VMs to use.
● Management/Out of Band (OOB) network—iDRAC connections can be routed to an external network. All OpenStack HA
controllers need direct access to this network for IPMI operations.
● Private API network VLAN—Used for communication between the controller nodes, the Red Hat OpenStack Director, and compute nodes for
Private API and cluster communications.
● Provisioning network VLAN—Connects a NIC from all nodes into the fabric, used for setup and provisioning of the OpenStack
servers and access to the Red Hat Ceph storage dashboard.
● Public API network VLAN—Sets up the network connection to a router that is external to the cluster. The network is
used by the front-end network for routable traffic to individual VMs, access to the OpenStack API, RADOS Gateway, and
the Horizon dashboard. Depending upon the network configuration, these networks may be either shared or routed, as needed. The Red
Hat OpenStack Director requires access to the Public API network.
● Storage Clustering network VLAN—Used by all Storage nodes for replication and data checks (Red Hat Ceph storage
clustering).
● Storage network VLAN—Used by all nodes for the data plane reads/writes to communicate to OpenStack Storage; setup,
and provisioning of the Red Hat Ceph storage cluster; and when included, the Dell EMC Unity storage or SC series storage
arrays.
● Tenant tunnel network VLAN—Used by tenants for encapsulated networks, or tunnels, in place of the Internal
networks VLAN for tenants.
● Support for a managed switch that supports SSH and serial line configuration
● Support for SNMP v3
7
Network Function Virtualization (NFV)
support
This Reference Architecture Guide supports the following Network Function Virtualization (NFV) features:
● NUMA
● Hugepages
● OVS-DPDK
● SR-IOV
● SR-IOV with Open vSwitch (OVS) offload
● DVR
NOTE: All the NFV features on AMD compute nodes are supported with processor mode NPS=1 (NUMA nodes Per Socket).
Before deployment, make sure AMD servers are set to NPS mode 1 in the BIOS.
NOTE: All of the above NFV features can be enabled by using the JetPack version 16.1 automation toolkit.
NOTE: Red Hat OpenStack Platform version 16.1 has the Open Virtual Network (OVN) controller as the default software-defined
networking solution for supplying network services to OpenStack. However, OVN has limited NFV feature
support at this time. The JetPack version 16.1 automation toolkit installs Red Hat OpenStack Platform version 16.1 with the OVS
controller as default, to fully support all the NFV features.
Topics:
• NUMA Optimization and CPU pinning
• Hugepages
• OVS-DPDK
• SR-IOV
• SR-IOV OVS-hardware-offload with VF-LAG
• Distributed Virtual Router (DVR)
Hugepages
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable
hugepages support on the compute nodes at the core or the edge site(s) in the solution.
NOTE: Hugepages can be enabled by the JetPack version 16.1 automation toolkit.
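Guests consume hugepages (and, for OVS-DPDK or SR-IOV workloads, pinned CPUs) through Nova flavor extra specs. The sketch below uses the openstacksdk cloud layer; the flavor name and sizing are placeholders, not values prescribed by this guide.

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder clouds.yaml entry

# Create a flavor sized for an NFV workload (all values are placeholders).
flavor = conn.create_flavor(name="nfv.large", ram=16384, vcpus=8, disk=40)

# hw:mem_page_size requests hugepage-backed guest memory; hw:cpu_policy=dedicated
# requests pinned vCPUs, which OVS-DPDK and SR-IOV workloads rely on.
conn.set_flavor_specs(
    flavor.id,
    {"hw:mem_page_size": "large", "hw:cpu_policy": "dedicated"},
)
```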
OVS-DPDK
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 provides the ability to enable
OVS-DPDK support based on two ports or four ports on the compute nodes at the core or the edge site(s) in the solution.
Open vSwitch (OVS) is a multilayer software/virtual switch used to interconnect virtual machines in the same host and
between different hosts. OVS makes use of the kernel for packet forwarding through a data path known as fastpath which
consists of a simple flow table with action rules for the received packets. Exception packets or packets with no corresponding
forwarding rule in the flow table are sent to the user space (slowpath). Switching between two memory spaces creates a
lot of overhead, thus making the user space slowpath. User space makes a decision and updates the flow table in the kernel
space accordingly so that it can be used in the future.
The OVS kernel module acts as a cache for the user space. And just like a cache, its performance increases as the number of
rules in the user space increases.
DPDK (Data-Plane Development Kit) eliminates packet buffer copies. It does this by running a dedicated poll-mode driver, and
allocating hugepages for use as a packet buffer, then passing pointers to the packets. The elimination of copies leads to higher
performance. OVS, when enabled to use DPDK-controlled physical NIC interfaces, experiences a tremendous boost to packet
delivery performance. It is also advantageous that both OVS and DPDK can operate in userspace, thus reducing kernel switches
and improving packet processing efficiencies.
NOTE: Enabling OVS-DPDK requires both Hugepages and NUMA (CPU pinning) to be enabled.
NOTE: OVS-DPDK can be enabled by the JetPack version 16.1 automation toolkit.
SR-IOV
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform 16.1 provides the ability to enable SR-IOV
support based on two ports and/or four ports on the compute nodes at the core or the site(s) in the solution.
Single root I/O virtualization (SR-IOV) is an extension to the PCI Express (PCIe) specification. SR-IOV enables a single PCIe
device to appear as multiple, separate virtual devices. Traditionally in a virtualized environment, a packet has to go through an
extra layer of the hypervisor, which results in multiple CPU interrupts per packet. These extra interrupts cause a bottleneck in
high-traffic environments. SR-IOV enabled devices have the ability to dedicate isolated access to their resources among various
PCIe hardware functions. These functions are later assigned to the virtual machines which allow direct memory access (DMA)
to the network data.
Enabling SR-IOV requires both NUMA and Hugepages as SR-IOV workloads need stable CPU performance in a production
environment.
By default, SR-IOV is not enabled in the Dell EMC PowerEdge R-Series system BIOS. If SR-IOV is deployed without first enabling it
in the BIOS, virtual functions are not created on the NIC interfaces.
Use the following steps to set the virtualization mode to SR-IOV and VF count to 64 in the Dell EMC PowerEdge R-Series
system BIOS device settings for Mellanox NICs:
1. Enter system BIOS during boot by pressing F2.
2. Select Device Settings.
3. Select the Network Interface Card.
NOTE: SR-IOV has been validated with Mellanox ConnectX-5 and Intel XXV710 NICs. Below is an example of SR-IOV
configuration with a Mellanox NIC.
4. Select the Device Level Configuration option.
NOTE: SR-IOV offloading has been tested with Mellanox ConnectX-5 100G NICs only.
NOTE: For SR-IOV offload with VF-LAG functionality to work, both ports of the bond should originate from a single
Mellanox ConnectX-5 NIC.
NOTE: SR-IOV offloading can be enabled by the JetPack version 16.1 automation toolkit.
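Once SR-IOV is enabled, a guest typically consumes a virtual function through a Neutron port whose vNIC type is direct. A minimal sketch using openstacksdk is shown below; the provider network, image, and flavor names are placeholders.

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder clouds.yaml entry

sriov_net = conn.network.find_network("provider-sriov")   # placeholder SR-IOV provider network
vf_port = conn.network.create_port(
    network_id=sriov_net.id,
    name="vf-port-1",
    binding_vnic_type="direct",   # request an SR-IOV virtual function
)

# Boot an instance on the VF port; the flavor should carry the hugepages and
# CPU-pinning extra specs described earlier.
server = conn.create_server(
    name="sriov-demo",
    image="rhel-8.2",             # placeholder image
    flavor="nfv.large",           # placeholder NFV flavor
    nics=[{"port-id": vf_port.id}],
    wait=True,
)
print(server.id)
```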
Topics:
• Barbican
• Octavia
• Satellite
Barbican
Barbican is the secrets manager for Red Hat OpenStack Platform. You can use the Barbican API and command line to centrally
manage the certificates, keys, and passwords used by OpenStack services.
Symmetric encryption keys include:
● Block Storage (cinder) volume encryption
● Ephemeral disk encryption
● Object Storage (swift) encryption
Asymmetric keys and certificates include:
● Glance image signing and verification
NOTE: Barbican can be enabled by the JetPack version 16.1 automation toolkit.
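A minimal sketch of storing and reading back a secret through the Barbican API with openstacksdk follows; the secret name and payload are placeholders.

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder clouds.yaml entry

# Store a symmetric key (the payload here is a placeholder string).
secret = conn.key_manager.create_secret(
    name="demo-volume-key",
    payload="replace-with-real-key-material",
    payload_content_type="text/plain",
    secret_type="symmetric",
)

# Retrieve the secret later by reference; services such as Cinder volume
# encryption consume their keys through the same API.
fetched = conn.key_manager.get_secret(secret)
print(fetched.name, fetched.secret_type)
```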
Octavia
Octavia is the OpenStack Load Balancer as a Service (LBaaS) version 2 implementation for the Red Hat OpenStack Platform.
It accomplishes its delivery of load balancing services by managing a fleet of virtual machines, collectively known as amphorae,
which it spins up on demand.
Load balancing methods -
● Round robin - Rotates requests evenly between multiple instances.
● Source IP - Requests from a unique source IP address are consistently directed to the same instance.
● Least connections - Allocates requests to the instance with the least number of active connections.
NOTE: Octavia can be enabled by the JetPack version 16.1 automation toolkit.
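The sketch below builds a round-robin HTTP load balancer through the Octavia API with openstacksdk; the VIP subnet name and member addresses are placeholders, and in practice each resource should reach ACTIVE before the next call is made.

```python
import openstack

conn = openstack.connect(cloud="overcloud")             # placeholder clouds.yaml entry
vip_subnet = conn.network.find_subnet("public-subnet")  # placeholder VIP subnet

lb = conn.load_balancer.create_load_balancer(
    name="demo-lb", vip_subnet_id=vip_subnet.id)
listener = conn.load_balancer.create_listener(
    name="demo-http", protocol="HTTP", protocol_port=80, load_balancer_id=lb.id)
pool = conn.load_balancer.create_pool(
    name="demo-pool", protocol="HTTP", listener_id=listener.id,
    lb_algorithm="ROUND_ROBIN")     # or SOURCE_IP / LEAST_CONNECTIONS

# Placeholder back-end instances that receive the balanced traffic.
for addr in ("192.0.2.11", "192.0.2.12"):
    conn.load_balancer.create_member(
        pool, address=addr, protocol_port=80, subnet_id=vip_subnet.id)
```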
Satellite
JetPack support for Red Hat Satellite 6.5 gives the user the ability to deploy from a Satellite instance. All nodes within the Dell
Technologies Reference Architecture for Red Hat OpenStack Platform can register and pull required packages and container
images from the Satellite instance, instead of from the Red Hat Content Delivery Network (CDN).
NOTE: Satellite support can be enabled by the JetPack version 16.1 automation toolkit.
Overview
DCN, as part of Red Hat OpenStack Platform 16.1, leverages OpenStack features like Availability Zones (AZ) and provisioning over routed L3 networks with
Ironic, to enable deployment of compute nodes to remote locations. For example, a service provider may deploy several DCN
sites to scale out a virtual Radio Access Network (vRAN) implementation.
DCN has several caveats that must be considered when planning remote compute site deployment(s):
● Only Compute can be run at an Edge site; other services, such as persistent block storage, are not supported.
● Image considerations - Overcloud images for bare-metal provisioning of the remote compute nodes are pulled from the
undercloud. Also, instance images for VMs running on edge nodes will initially be fetched from the control plane the first time they
are used. Subsequent instances will use the locally cached image. Images are large files, which means a fast, reliable connection
to the Red Hat OpenStack Director node and control plane is required.
● Networking:
○ Latency - a round-trip between the control plane and remote site must be under 100ms or stability of the system could
become compromised.
○ Drop-outs - If an edge site temporarily loses its connection to the control plane, then no OpenStack control plane API or CLI
operations can be executed until connectivity is restored to the site. For example, existing workloads will continue to
run, but no new instances can be started until the connection is restored. Any control functions like snapshotting, live
migration, and so on cannot occur until the link between the central cloud and site is restored, as all control features are
dependent on the control plane being able to communicate with the site.
NOTE: Connectivity issues are DCN site specific. Losing connection to one DCN site does not affect other DCN
sites.
○ This guide recommends using provider networks for DCN workloads at this time. Depending on the type of workloads you
are running on the nodes, and existing networking policies, there are several ways for configuring instance IP addressing:
■ Static IPs using config-drive in conjunction with cloud-init - Utilizing config-drive as the metadata API server
leverages the virtual media capabilities of Nova, which means there are no Neutron metadata agents or DHCP relays
required to assign an IP address to instances (see the sketch after this list).
■ DHCP relay - Forwards DHCP requests to Neutron at the central site.
NOTE: A separate DHCP relay instance is required for each provider network.
■ External DHCP server at the site - In this case, instance IP addresses are not managed by Neutron.
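The config-drive approach referenced above can be exercised as in the following sketch (openstacksdk; the provider network, availability zone, image, flavor, and fixed IP are placeholders).

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder clouds.yaml entry

provider = conn.network.find_network("dcn-site1-provider")   # placeholder provider network
port = conn.network.create_port(
    network_id=provider.id,
    fixed_ips=[{"ip_address": "192.0.2.50"}],   # placeholder static address
)

# config_drive=True attaches the metadata as virtual media, so cloud-init can
# configure the static IP without a Neutron metadata agent or DHCP relay.
server = conn.create_server(
    name="edge-app-1",
    image="rhel-8.2",                 # placeholder image
    flavor="m1.small",                # placeholder flavor
    nics=[{"port-id": port.id}],
    config_drive=True,
    availability_zone="dcn-site1",    # placeholder DCN availability zone
    wait=True,
)
print(server.status)
```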
Hardware options
To reduce time spent on specifying hardware for an initial edge deployment, this Reference Architecture Guide offers a full
solution using validated Dell EMC PowerEdge server hardware designed to allow a wide range of configuration options, including
optimized configurations for compute nodes.
Dell EMC recommends starting an edge site deployment using components from this Reference Architecture Guide - Version
16.1. These hardware and operations processes comprise a flexible foundation upon which to expand as your site deployment(s)
grow, so your investment is protected.
As noted throughout this Reference Architecture Guide - Version 16.1, Dell EMC constantly adds capabilities to expand this
offering, and other hardware may be available at the time of this reading. Please contact your Dell EMC sales representative for
more information on new hardware releases.
Service layout
During a DCN site deployment, a subset of OpenStack services will be installed on each compute node.
Deployment overview
This is an overview of the DCN deployment process that can be utilized for planning purposes:
NOTE: The DCN feature can be enabled by the JetPack version 16.1 automation toolkit and can enable multiple DCN sites.
1. Hardware setup:
● Rack and stack
● Cabling
● iDRAC setup
● PXE NIC configuration
● Server BIOS and RAID configuration
● Switch configuration
2. Software setup at each DCN site:
● Install a DHCP relay(s) used to forward DHCP traffic to Neutron running on the central cloud control plane
○ At least one DHCP relay is required for provisioning compute nodes at the DCN site
○ Where the DHCP relay(s) are installed is up to the customer
● Discover edge site nodes
● Import discovered nodes into Red Hat OpenStack Director
● Configure overcloud files for each DCN site
● Provision DCN site
● Validate DCN nodes' networking
3. Environment tests
● Once Red Hat OpenStack Director has completed a DCN site deployment, a typical set of validation tasks will include:
4. Create a flavor.
5. Launch an instance.
● Launch an instance using the new flavor and the DCN site provider network, for example, as sketched below.
6. Validate the instance.
● Validate that the instance is running and that its networking and addressing are correct.
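A sketch of these validation steps with openstacksdk is shown below; the flavor sizing, image, provider network, and availability zone names are placeholders.

```python
import openstack

conn = openstack.connect(cloud="overcloud")       # placeholder clouds.yaml entry

# Create a flavor for the DCN site (values are placeholders).
flavor = conn.create_flavor(name="dcn.small", ram=4096, vcpus=2, disk=20)

# Launch an instance using the new flavor and the DCN site provider network.
server = conn.create_server(
    name="dcn-validate-1",
    image="rhel-8.2",                     # placeholder image
    flavor=flavor.id,
    network="dcn-site1-provider",         # placeholder DCN provider network
    availability_zone="dcn-site1",        # placeholder DCN availability zone
    wait=True,
)

# Validate that the instance is running and received an address on the provider network.
assert server.status == "ACTIVE"
print(server.addresses)
```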
NOTE: See 25GbE cluster network logical architecture for Edge profile for a logical architectural layout.
Solution expansion
The Dell Technologies Reference Architecture for Red Hat OpenStack Platform can be expanded by:
● Adding compute nodes to an existing DCN site deployment(s). No more than 20 compute nodes per site are supported at
this time.
● Deploying additional DCN site(s)
NOTE: Currently, Reference Architecture Guide version 16.1 supports up to a total of 700 compute nodes across all sites,
including the core OpenStack installation. For other expansion details, please speak with your Dell EMC sales representative.
Service layout
During the deployment each service configured by the Dell Technologies Reference Architecture for Red Hat OpenStack
Platform needs to reside upon a particular hardware type. For each server platform, two types of nodes have been designed:
● Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750 for the compute node, controller node, Solution Admin Host (SAH), or
infrastructure hardware type.
● Dell EMC PowerEdge R750 for storage nodes.
Red Hat OpenStack Director is designed for flexibility, enabling you to try different configurations in order to find the optimal
service placement for your workload. Overcloud: Node type to services presents the recommended layout of each service.
The Red Hat OpenStack Director is deployed to the Solution Admin Host (SAH) as an individual VM. This enables the VM to
control its respective resources.
Table 15. Overcloud: Node type to services (continued)
Node to deploy Service
OpenStack controllers Cinder-volume
OpenStack controllers Database-server
OpenStack controllers Glance-Image
OpenStack controllers HAproxy (Load Balancer)
OpenStack controllers Heat
OpenStack controllers Keystone-server
OpenStack controllers Neutron-server
OpenStack controllers Nova-controller
OpenStack controllers Nova dashboard-server
Three or more compute nodes Nova-multi-compute
OpenStack controllers Pacemaker
OpenStack controllers RabbitMQ-server (Messaging)
OpenStack controllers Barbican (Secure management of secrets)
OpenStack controllers Octavia (LBaaS)
OpenStack controllers Red Hat Ceph storage RADOS gateway
OpenStack controllers Red Hat Ceph storage Monitor a
a. The number of OSDs that can be supported with three Controller nodes is listed at 1,000 (https://access.redhat.com/
articles/1548993)
Deployment overview
This is an overview of the deployment process that can be utilized for planning purposes:
Hardware setup:
● Rack and stack
● Cabling
● iDRAC setup
● PXE NIC configuration
● Server BIOS and RAID configuration
● Switch configuration
Software setup:
● Deploy SAH for provisioning services:
○ Deploy Red Hat OpenStack Director Virtual Server (VM) to the SAH.
● Discover nodes
● Import discovered nodes into Red Hat OpenStack Director
● Configure overcloud files
● Provision overcloud
● Validate all nodes networking
● Post-deployment, including but not limited to:
○ Enabling fencing
○ Enabling local storage for ephemeral
Environment tests
● Tempest can be used to validate the deployment. At minimum the following tests should be performed (a minimal scripted smoke test covering several of them is sketched after this list):
○ Project creation
○ User creation
○ Network creation
○ Image upload and launch
○ Floating IP assignment
○ Basic network testing
○ Volume creation and attachment to VM
○ Object storage upload, retrieval and deletion
○ Deletion of all artifacts created during validation.
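Tempest remains the recommended validation tool, but several of these checks can also be scripted directly against the APIs; the sketch below (openstacksdk, admin credentials, placeholder names) covers project, user, network, and volume creation, followed by cleanup of the artifacts.

```python
import openstack

conn = openstack.connect(cloud="overcloud")   # placeholder admin clouds.yaml entry

# Project and user creation.
project = conn.identity.create_project(name="smoke-test")
user = conn.identity.create_user(name="smoke-user", password="ChangeMe123!",
                                 default_project_id=project.id)

# Network creation and a small Cinder volume.
network = conn.network.create_network(name="smoke-net")
volume = conn.create_volume(size=1, name="smoke-vol", wait=True)

# Deletion of all artifacts created during validation.
conn.delete_volume(volume.id, wait=True)
conn.network.delete_network(network)
conn.identity.delete_user(user)
conn.identity.delete_project(project)
```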
11
Solution architecture
This core architecture provides prescriptive guidance and recommendations, jointly engineered by Dell EMC and Red Hat, for
deploying Dell Technologies Reference Architecture for Red Hat OpenStack Platform version 16.1 with Dell EMC infrastructure.
The goals are to:
● Provide practical system design guidance and recommended configurations
● Develop tools to use with OpenStack for day-to-day usage and management
● Develop networking configurations capable of supporting your production system
The development of this architecture builds upon the experience and engineering skills of Dell EMC and Red Hat, and
encapsulates best practices developed in numerous real-world deployments. The designs and configurations in this architecture
have been tested in Dell EMC and Red Hat labs to verify system functionality and operational robustness.
The solution consists of the components shown in Solution with 25GbE/100GbE, Red Hat Ceph storage cluster, optional Dell
EMC Unity storage, optional Dell EMC PowerMax storage and optional SC series storage, and represents the base upon which
all optional components and expansion of the Dell Technologies Reference Architecture for Red Hat OpenStack Platform are
built.
Topics:
• Solution common settings
• Solution with 25GbE/100GbE networking overview
Table 16. OpenStack node type to network 802.1q tagging
Network | Solution Admin Host | OpenStack controller | Red Hat Ceph storage
External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged | Not Connected
iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged
Internal Networks VLAN for Tenants | Not Connected | Connected, Tagged | Not Connected
Management/OOB Network VLAN | Connected, Tagged | Not Connected | Not Connected
Private API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected
Provisioning VLAN | Connected, Tagged | Connected, Untagged | Connected, Untagged
Public API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected
Storage Clustering VLAN | Not Connected | Not Connected | Connected, Tagged
Storage Network VLAN | Connected, Tagged | Connected, Tagged | Connected, Tagged
Tenant Tunnel Network | Not Connected | Connected, Tagged | Not Connected
Table 17. OpenStack Compute Node for xSP and CSP profile to network 802.1q tagging
Network | xSP OpenStack compute | CSP - OpenStack compute NFV
External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged
iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged
Internal Networks VLAN for Tenants | Connected, Tagged | Connected, Tagged
Management/OOB Network VLAN | Not Connected | Not Connected
Private API Network VLAN | Connected, Tagged | Connected, Tagged
Provisioning VLAN | Connected, Untagged | Connected, Untagged
Public API Network VLAN | Not Connected | Not Connected
Storage Clustering VLAN | Not Connected | Not Connected
Storage Network VLAN | Connected, Tagged | Connected, Tagged
Tenant Tunnel Network | Connected, Tagged | Connected, Tagged
Table 18. Storage node type to network 802.1q tagging (continued)
Network | Dell EMC Unity | Dell EMC SC series storage Enterprise Manager | Dell EMC SC series storage array
Public API Network VLAN | Not Connected | Connected, Untagged | Not Connected
Storage Network VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged
Storage Clustering VLAN | Not Connected | Not Connected | Not Connected
Tenant Tunnel Network | Not Connected | Not Connected | Not Connected
Table 19. OpenStack node types for HCI profile to network 802.1q tagging
Network HCI OpenStack controller OpenStack
External Network VLAN for Tenants Not Connected Connected, tagged
(Floating IP Network)
iDRAC physical connection to the Connected, Untagged Connected, Untagged
Management/OOB VLAN
Internal Networks VLAN for Tenants Connected, Tagged Connected, Tagged
Management/OOB Network VLAN Not Connected Not Connected
Private API Network VLAN Connected, Tagged Connected, Tagged
Provisioning VLAN Connected, Tagged Connected, Untagged
Public API Network VLAN Not Connected Not Connected
Storage Clustering VLAN Not Connected Not Connected
Storage Network VLAN Connected, Tagged Connected, Tagged
Tenant Tunnel Network Connected, Tagged Connected, Tagged
The Red Hat Ceph Storage Dashboard provides Ceph storage troubleshooting and servicing tools and utilities, and is installed on the Controllers.
Note that:
● The SAH must have access to the Controller and Storage nodes through the Private API Access VLAN in order to manage
Red Hat Ceph storage.
● The Controller nodes must have access to the Storage nodes through the Storage Network VLAN in order for the MON
processes on the Controller nodes to be able to query the Red Hat Ceph storage MON processes, for the cluster state and
configuration.
● The Compute nodes must have access to the Storage nodes through the Storage Network VLAN in order for the Red Hat
Ceph storage client on that node to interact with the storage nodes, OSDs, and the Red Hat Ceph storage MON processes.
● The Storage nodes must have access to the Storage Network VLAN, as previously stated, and to the Storage Cluster
Network VLAN.
Table 21. OpenStack compute node for xSP and CSP profile to network 802.1q tagging
Network | xSP OpenStack compute | CSP - OpenStack compute NFV
External network VLAN for tenants (floating IP network) | Not connected | Connected, tagged
iDRAC physical connection to the Management/OOB VLAN | Connected, untagged | Connected, untagged
Internal networks VLAN for tenants | Connected, tagged | Connected, tagged
Management/OOB network VLAN | Not connected | Not connected
Private API network VLAN | Connected, tagged | Connected, tagged
Provisioning VLAN | Connected, untagged | Connected, untagged
Public API network VLAN | Not connected | Not connected
Storage clustering VLAN | Not connected | Not connected
Storage network VLAN | Connected, tagged | Connected, tagged
Tenant tunnel network | Connected, tagged | Connected, tagged
Table 23. OpenStack node types for HCI profile to network 802.1q tagging
Network HCI OpenStack
External network VLAN for tenants (floating IP network) Not connected Connected, tagged
iDRAC physical connection to the Management/OOB Connected, untagged Connected, untagged
VLAN
Internal networks VLAN for tenants Connected, tagged Connected, tagged
Management/OOB network VLAN Not connected Not connected
Private API network VLAN Connected, tagged Connected, tagged
Provisioning VLAN Connected, tagged Connected, untagged
Public API network VLAN Not connected Not connected
Storage clustering VLAN Not connected Not connected
Storage network VLAN Connected, tagged Connected, tagged
Table 23. OpenStack node types for HCI profile to network 802.1q tagging (continued)
Network HCI OpenStack
Tenant tunnel network Connected, untagged Connected, tagged
See Solution Admin Host (SAH) Dell EMC PowerEdge R650, Controller node Dell EMC PowerEdge R650, Compute node Dell
EMC PowerEdge R650 or Compute node Dell EMC PowerEdge R750 hardware configurations. The Solution includes:
● Node 1: Solution Admin Host (SAH) with Red Hat OpenStack Director
● Nodes 2 - 4: Dell EMC PowerEdge R650 OpenStack controllers
● Nodes 5 - 7: Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750 Nova compute nodes
● Nodes 8 - 10: Dell EMC PowerEdge R750 storage nodes
● Network Switches: Two (2) Dell EMC Networking S5232F-ON, and one (1) Dell EMC Networking S3048-ON
NOTE: The following rack is not to scale but shows the node types and usage.
Figure 4. Solution with 25GbE/100GbE, Red Hat Ceph storage cluster, optional SC series storage, Dell EMC Unity,
Dell EMC PowerStore, and Dell EMC PowerMax storage.
NOTE: The following rack is not to scale but shows the node types and usage.
Figure 5. Solution with 25GbE/100GbE and Dell EMC PowerEdge XE2420
Figure 6. Solution with 25GbE/100GbE, PowerFlex cluster, optional SC series storage, Dell EMC Unity storage, Dell
EMC PowerStore, and Dell EMC PowerMax.
NOTE: The following rack is not to scale but shows the node types and usage.
For the CSP Profile with OVS Offload and Ceph storage logical network:
Figure 8. 25GbE/100GbE cluster network logical architecture for CSP OVS Offload Ceph storage profile
Figure 9. 25GbE/100GbE cluster network logical architecture for CSP and PowerFlex profile
Figure 10. 25GbE/100GbE cluster network logical architecture for HCI profile
Figure 11. 25GbE cluster network logical architecture for Edge profile
12
Bill of materials (BOM)
This guide provides the bill of materials information necessary to purchase the proper hardware to deploy the Dell Technologies
Reference Architecture for Red Hat OpenStack Platform.
NOTE: For cables, racks, and power, please contact your Dell EMC support representative.
Topics:
• Nodes overview
• Bill of Materials for Dell EMC PowerEdge R-Series solution
• Bill of Materials for Dell EMC PowerEdge R-Series — DCN
• Subscriptions and network switches in the solution
Nodes overview
The minimum hardware needed is:
● One Solution Admin Host (SAH)
● Three controller nodes
● Three compute nodes
● Three storage servers
Please consult with your Dell EMC sales representative to ensure proper preparation and submission of your hardware and
software orders.
Table 24. Solution Admin Host (SAH) Dell EMC PowerEdge R650
Machine function SAH node
Platform Dell EMC PowerEdge R650 (one qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
LOM 1 x Broadcom Gigabit Ethernet BCM5720
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 1117 GB 10k SAS 12Gbps
Table 27. Compute node Dell EMC PowerEdge R650 or Dell EMC PowerEdge R750
Machine function Compute nodes
Platform Dell EMC PowerEdge R650 (three qty)
CPU 2 x Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
RAM (minimum) 192 GB DDR-4 2933 MHz
Add-in network 2 x Mellanox ConnectX-5 EN 25GbE Dual-port SFP28 Adapter
Disk 8 x 1117 GB 10k SAS 12Gbps
Storage controller 1 x PERC H755 Front
RAID RAID 10
NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.
NOTE: For cables, racks, and power, please contact your Dell EMC support representative.
Topics:
• Bill of Materials for Dell EMC PowerEdge R-Series - Mellanox
• Bill of Materials for Dell EMC PowerEdge R-Series solution — Intel NICs
• Bill of Materials for Dell EMC PowerEdge R740xd — PowerFlex
• Bill of Materials for Dell EMC PowerEdge R-Series solution — Hyper-Converged Infrastructure
NOTE: * When choosing Dell EMC PowerEdge R740xd as compute nodes, all nodes must have the same number of disks.
NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.
NOTE: When using Intel ® NICs, Open vSwitch (OVS) hardware offloading is not supported. All other NFV features
documented in this Reference Architecture Guide are supported. Be sure to consult your Dell EMC account representative
before changing the recommended hardware configurations.
NOTE: Be sure to consult your Dell EMC account representative before changing the recommended hardware
configurations.
NOTE: NFV feature support is experimental in an HCI profile. All other NFV features documented in this Reference
Architecture Guide are supported. Be sure to consult your Dell EMC account representative before changing the
recommended hardware configurations.
15
Glossary
API—Application Programming Interface is a specification that defines how software components can interact.
BMC/iDRAC Enterprise—Baseboard management controller. An on-board microcontroller that monitors the system for critical
events by communicating with various sensors on the system board, and sends alerts and log events when certain parameters
exceed their preset thresholds.
BOSS— The Boot Optimized Storage Solution (BOSS) enables customers to segregate operating system and data on server-
internal storage. This is helpful in the Hyper-Converged Infrastructure (HCI) and Software Defined Storage (SDS) arenas, to
separate operating system drives from data drives, and implement hardware RAID mirroring (RAID1) for OS drives.
Cloud computing—See http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf. Cloud computing is a model for
enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management
effort or service provider interaction.
Cluster—A set of servers dedicated to OpenStack that can be attached to multiple distribution switches.
Compute node—The hardware configuration that best supports the hypervisor server or Nova compute roles.
DPDK—Data-Plane Development Kit. Eliminates packet buffer copies by running dedicated poll-mode drivers and allocating hugepages for use as packet buffers.
DevOps— Development Operations (DevOps) is an operational model for managing data centers using improved automated
deployments, shortened lead times between fixes, and faster mean time to recovery. See https://en.wikipedia.org/wiki/
DevOps.
DIMM—Dual In-line Memory Module.
DNS— The domain name system (DNS) defines how Internet domain names are located, and translated into Internet Protocol
(IP) addresses.
FQDD— A fully qualified device descriptor (FQDD) is a method used to describe a particular component within a system or
subsystem, and is used for system management and other purposes.
FQDN— A fully qualified domain name (FQDN) is the portion of an Internet Uniform Resource Locator (URL) that fully
identifies the server to which an Internet request is addressed. The FQDN includes the second-level domain name, such as
"dell.com", and any other levels as required.
GUI— Graphical User Interface. A visual interface for human interaction with the software, taking inputs and generating easy to
understand visual outputs.
Hypervisor—Software that runs virtual machines (VMs).
IaaS—Infrastructure as a Service.
Infrastructure node—Systems that handle the control plane and deployment functions.
ISV—Independent Software Vendor.
JBOD—Just a Bunch of Disks.
LAG—Link Aggregation Group.
LOM—LAN on motherboard.
LVM—Logical Volume Management.
ML2—The Modular Layer 2 plug-in is a framework that allows OpenStack to utilize different layer 2 networking technologies.
NFS— The Network File System (NFS) is a distributed filesystem that allows a computer user to access, manipulate, and store
files on a remote computer, as though they resided on a local file directory.
NIC—Network Interface Card.
Node—One of the servers in the cluster.
NUMA—Non-Uniform Memory Access.
Overcloud—The functional cloud that is available to run guest VMs and workloads.
Pod—An installation comprised of three racks, and consisting of servers, storage, and networking.
REST—Representational State Transfer (also ReST). Relies upon a stateless, client-server, cacheable communications
protocol to access the API.
RHOSP—Red Hat OpenStack Platform.
RPC—Remote Procedure Call.
SAH—The Solution Admin Host (SAH) is a physical server that supports VMs for the Undercloud machines needed
for the cluster to be deployed and operated.
SDS—Software-defined storage (SDS) is an approach to computer data storage in which software is used to manage policy-
based provisioning and management of data storage, independent of the underlying hardware.
SDN—Software-defined Network (SDN) is where the software will define, create, use and destroy different networks as
needed.
Stamp—A stamp is the collection of all servers and network switches in the solution.
Storage Node—The hardware configuration that best supports SDS functions such as Red Hat Ceph storage.
ToR—Top-of-rack switch/router.
U—A unit of measure used to express the size of a server, for example 1U or 2U. A "U" is equal to 1.75 inches in height.
Undercloud—The undercloud is the system used to control, deploy, and monitor the Overcloud; it is a single-node OpenStack
deployment completely under the administrator's control. The undercloud is not HA configured.
VLT—A Virtual Link Trunk (VLT) is the combined port channel between an attached device (ToR switch) and the VLT peer
switches.
VLTi—A Virtual Link Trunk Interconnect (VLTi) is an interconnect used to synchronize states between the VLT peer switches.
Both endpoints must be the same speed, i.e. 40Gb → 40Gb; 1G interfaces are not supported.
VM—Virtual Machine - a simulation of a computer system.