VCF VxRail
You can find the most up-to-date technical documentation on the VMware by Broadcom website at:
https://docs.vmware.com/
VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2019-2024 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.
Contents
Prepare Your Microsoft Certificate Authority to Allow SDDC Manager to Manage Certificates
Install Microsoft Certificate Authority Roles
Configure the Microsoft Certificate Authority for Basic Authentication
Create and Add a Microsoft Certificate Authority Template
Assign Certificate Management Privileges to the SDDC Manager Service Account
Configure a Microsoft Certificate Authority in SDDC Manager
Install Microsoft CA-Signed Certificates using SDDC Manager
Configure VMware Cloud Foundation to Use OpenSSL CA-Signed Certificates
Configure OpenSSL-signed Certificates in SDDC Manager
Install OpenSSL-signed Certificates using SDDC Manager
Install Third-Party CA-Signed Certificates Using Server Certificate and Certificate Authority Files
Install Third-Party CA-Signed Certificates in VMware Cloud Foundation Using a Certificate Bundle
Remove Old or Unused Certificates from SDDC Manager
Import the Workspace ONE Access Certificate to VMware Aria Suite Lifecycle
Add Workspace ONE Access Passwords to VMware Aria Suite Lifecycle
Deploy a Standard Workspace ONE Access Instance Using VMware Aria Suite Lifecycle
Deploy Clustered Workspace ONE Access Instance Using VMware Aria Suite Lifecycle
Configure an Anti-Affinity Rule and a Virtual Machine Group for a Clustered Workspace ONE Access Instance
Configure NTP on Workspace ONE Access
Configure the Domain and Domain Search Parameters on Workspace ONE Access
Configure an Identity Source for Workspace ONE Access
Add the Clustered Workspace ONE Access Cluster Nodes as Identity Provider Connectors
Assign Roles to Active Directory Groups for Workspace ONE Access
Assign Roles to Active Directory Groups for VMware Aria Suite Lifecycle
Configure Microsoft ADFS as the Identity Provider in the SDDC Manager UI
Configure Identity Federation in VMware Cloud Foundation Using Okta
Create an OpenID Connect application for VMware Cloud Foundation in Okta
Configure Okta as the Identity Provider in the SDDC Manager UI
Update the Okta OpenID Connect application with the Redirect URI from SDDC Manager
Create a SCIM 2.0 Application for Using Okta with VMware Cloud Foundation
Assign Okta Users and Groups as Administrators in SDDC Manager, vCenter Server, and NSX Manager
Configure Identity Federation in VMware Cloud Foundation Using Microsoft Entra ID
Create an OpenID Connect application for VMware Cloud Foundation in Microsoft Entra ID
Configure Microsoft Entra ID as the Identity Provider in the SDDC Manager UI
Update the Microsoft Entra ID OpenID Connect application with the Redirect URI from SDDC Manager
Create a SCIM 2.0 Application for Using Microsoft Entra ID with VMware Cloud Foundation
Assign Microsoft Entra ID Users and Groups as Administrators in SDDC Manager, vCenter Server, and NSX Manager
Add a User or Group to VMware Cloud Foundation
Remove a User or Group
Create a Local Account
Create an Automation Account
Chapter 1 About VMware Cloud Foundation on Dell VxRail
The VMware Cloud Foundation on Dell VxRail Guide provides information on managing the integration of VMware Cloud Foundation and Dell VxRail. Because this product is an integration of VMware Cloud Foundation and Dell VxRail, you obtain the expected results only when you complete the configuration in both products. This guide covers the VMware Cloud Foundation workflows. For instructions on the configuration to be done on Dell VxRail, this guide provides links to the Dell VxRail documentation.
Intended Audience
The VMware Cloud Foundation on Dell VxRail Guide is intended for the system administrators of
the VxRail environments who want to adopt VMware Cloud Foundation. The information in this
document is written for experienced data center system administrators who are familiar with:
n IP networks
Additionally, you should be familiar with these software products, software components, and
their features:
Related Publications
The Planning and Preparation Workbook provides detailed information about the software, tools,
and external services that are required to deploy VMware Cloud Foundation on Dell EMC VxRail.
The VMware Cloud Foundation on Dell Release Notes provide information about each release,
including:
n Resolved issues
n Known issues
The VMware Cloud Foundation on Dell VxRail API Reference Guide provides information about
using the API.
Chapter 2 VMware Cloud Foundation on Dell VxRail
VMware Cloud Foundation on Dell VxRail enables VMware Cloud Foundation on top of the Dell
VxRail platform.
An administrator of a VMware Cloud Foundation on Dell VxRail system performs tasks such as:
n Manage certificates.
n Troubleshoot issues and prevent problems across the physical and virtual infrastructure.
Chapter 3 Prepare a VxRail Environment for Cloud Builder Appliance Deployment
Before you can deploy the VMware Cloud Builder Appliance on the VxRail cluster, you must
complete the following tasks.
Procedure
For detailed information about how to image the VxRail management nodes, contact Dell
Support.
The VxRail first run for the management cluster consists of the following tasks:
n Discovery of the VxRail nodes occurs, and all the nodes that were imaged are detected.
Note You specify the vSphere Lifecycle Manager method (vSphere Lifecycle Manager images or
vSphere Lifecycle Manager baselines) during the VxRail first run or in the vCenter Server VxRail
UI after the first run, but before bring-up. See the Dell VxRail documentation for details.
Note vSAN Express Storage Architecture (ESA) requires vSphere Lifecycle Manager images.
n vCenter
n vSAN
n VxRail Manager
Chapter 4 Deploy VMware Cloud Builder Appliance
VMware Cloud Builder is a virtual appliance that is used to deploy and configure the first cluster
of the management domain and transfer inventory and control to SDDC Manager. During the
deployment process, the VMware Cloud Builder appliance validates network information you
provide in the deployment parameter workbook, such as DNS, network (VLANs, IPs, MTUs), and credentials.
This procedure describes deploying the VMware Cloud Builder appliance to the cluster that was
created during the VxRail first run.
Prerequisites
Before you deploy the VMware Cloud Builder appliance, verify that your environment fulfills the
requirements for this process.
Prerequisite Value
Installation Packages Verify that you have downloaded the OVA file(s) for VMware Cloud Builder.
Network n Verify that the static IP address and FQDN for the VMware Cloud Builder appliance are available.
n Verify that connectivity is in place between the VMware Cloud Builder appliance and the management VLAN used in the deployment.
The VMware Cloud Builder appliance must be on the same management network as the hosts to
be used. It must also be able to access all required external services, such as DNS and NTP.
Procedure
3 In the navigator, select the cluster that was created during the VxRail first run.
6 Browse to the VMware Cloud Builder appliance OVA, select it, and click Open.
7 Click Next.
8 Enter a name for the virtual machine, select a target location, and click Next.
9 Select the cluster you created during the VxRail first run and click Next.
12 On the Select Storage page, select the storage for the VMware Cloud Builder appliance and
click Next.
13 On the Select networks dialog box, select the management network and click Next.
14 On the Customize template page, enter the following information for the VMware Cloud
Builder appliance and click Next:
Setting Details
Admin Username The admin user name cannot be one of the following pre-defined user
names:
n root
n bin
n daemon
n messagebus
n systemd-bus-proxy
n systemd-journal-gateway
n systemd-journal-remote
n systemd-journal-upload
n systemd-network
n systemd-resolve
n systemd-timesync
n nobody
n sshd
n named
n rpc
n tftp
n ntp
n smmsp
n cassandra
Admin Password/Admin Password confirm The admin password must be a minimum of 15 characters and include at least one uppercase, one lowercase, one digit, and one special character. Supported special characters: @ ! # $ % ? ^
Root password/Root password confirm The root password must be a minimum of 15 characters and include at least one uppercase, one lowercase, one digit, and one special character. Supported special characters: @ ! # $ % ? ^
Hostname Enter the hostname for the VMware Cloud Builder appliance.
Network 1 IP Address Enter the IP address for the VMware Cloud Builder appliance.
Default Gateway Enter the default gateway for the VMware Cloud Builder appliance.
DNS Servers IP address of the primary and secondary DNS servers (comma separated).
Do not specify more than two servers.
DNS Domain Search Paths Comma separated. For example vsphere.local, sf.vsphere.local.
Note Make sure your passwords meet the requirements specified above before clicking
Finish or your deployment will not succeed.
16 After the VMware Cloud Builder appliance is deployed, SSH in to the VM with the admin
credentials provided in step 14.
18 Verify that the VMware Cloud Builder appliance has access to the required external services,
such as DNS and NTP by performing forward and reverse DNS lookups for each host and the
specified NTP servers.
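For example, assuming the nslookup utility is available on the appliance (the host names and addresses below are placeholders; substitute the values from your deployment parameter workbook):

nslookup sfo01-m01-esx01.rainpole.io   # forward lookup for an ESXi host
nslookup 172.16.11.101                 # reverse lookup for the same host
nslookup ntp.rainpole.io               # confirm the NTP server name resolves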
Chapter 5 Deploy the Management Domain Using VMware Cloud Builder
The VMware Cloud Foundation deployment process is referred to as bring-up. You specify
deployment information specific to your environment such as networks, hosts, license keys, and
other information in the deployment parameter workbook and upload the file to the VMware
Cloud Builder appliance to initiate bring-up of the management domain.
During bring-up, the management domain is created on the ESXi hosts specified in the
deployment parameter workbook. The VMware Cloud Foundation software components
are automatically deployed, configured, and licensed using the information provided. The
deployment parameter workbook can be reused to deploy multiple VMware Cloud Foundation
instances of the same version.
The following procedure describes how to perform bring-up of the management domain using
the deployment parameter workbook. You can also perform bring-up using a custom JSON
specification. See the VMware Cloud Foundation API Reference Guide for more information.
Externalizing the vCenter Server that gets created during the VxRail first run is automated as part
of the bring-up process.
Procedure
1 In a web browser, log in to the VMware Cloud Builder appliance administration interface:
https://Cloud_Builder_VM_FQDN.
2 Enter the admin credentials you provided when you deployed the VMware Cloud Builder
appliance and then click Log In.
3 On the End-User License Agreement page, select the I Agree to the End User License
Agreement check box and click Next.
4 Select VMware Cloud Foundation on Dell EMC VxRail and click Next.
If there are any gaps, ensure they are fixed before proceeding to avoid issues during the
bring-up process. You can download or print the prerequisite list for reference.
6 Download the deployment parameter workbook from the Broadcom Support portal and fill it
in with the required information.
7 Click Next.
8 Click Select File, browse to the completed workbook, and click Open to upload the
workbook.
To access the bring-up log file, SSH to the VMware Cloud Builder appliance as admin and
open the /opt/vmware/bringup/logs/vcf-bringup-debug.log file.
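For example, to follow the log while bring-up is running (switch to the root user with su first if the file is not readable by the admin account):

tail -f /opt/vmware/bringup/logs/vcf-bringup-debug.log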
If there is an error during the validation and the Next button is grayed out, you can either
make corrections to the environment or edit the deployment parameter workbook and
upload it again. Then click Retry to perform the validation again.
If any warnings are displayed and you want to proceed, click Acknowledge and then click
Next.
During the bring-up process, the vCenter Server, NSX, and SDDC Manager appliances are
deployed and the management domain is created. The status of the bring-up tasks is
displayed in the UI.
After bring-up is completed, a green bar is displayed indicating that bring-up was successful.
A link to the SDDC Manager UI is also displayed. If there are errors during bring-up, see
Chapter 6 Troubleshooting VMware Cloud Foundation Deployment.
11 Click Download to download a detailed deployment report. This report includes information
on assigned IP addresses and networks that were configured in your environment.
Before you begin filling in the deployment parameter workbook, download the workbook from
the Broadcom Support portal.
The fields in yellow contain sample values that you should replace with the information for your environment. If a cell turns red, the required information is missing or input validation has failed.
Important The deployment parameter workbook is not able to fully validate all inputs due
to formula limitations of Microsoft Excel. Some validation issues may not be reported until you
upload the deployment parameter workbook to the VMware Cloud Builder appliance.
Note Do not copy and paste content between cells in the deployment parameter workbook,
since this may cause issues.
VxRail Prerequisites
n The VxRail first run is completed and vCenter Server and VxRail Manager VMs are deployed.
n The vCenter Server version matches the build listed in the Cloud Foundation Bill of Materials
(BOM). See the VMware Cloud Foundation Release Notes for the BOM.
Credentials Worksheet
The Credentials worksheet details the accounts and initial passwords for the VMware Cloud
Foundation components. You must provide input for each yellow box. A red cell may indicate that validation of the password length has failed.
Input Required
Update the Default Password field for each user (including the automation user in the last row).
Passwords can be different per user or common across multiple users. The tables below provide
details on password requirements.
Password Requirements
VxRail Manager service account (mystic) Standard. The service account password must be different from the VxRail Manager root account password.
ESXi Host root account This is the password that you configured on the hosts during ESXi installation.
NSX user interface and default CLI admin account
1 Length 12-127 characters
2 Must include:
n a mix of uppercase and lowercase letters
n a number
n a special character, such as @ ! # $ % ^ or ?
n at least five different characters
3 Must not include: * { } [ ] ( ) / \ ' " ` ~ , ; : . < >
With VMware Cloud Foundation 5.1 and later, you can create separate distributed port groups for management VM traffic (for example, vCenter Server and NSX Manager) and ESXi host management traffic. You can configure this during the VxRail first run.
Management Network, vMotion Network, and vSAN Network
n VLAN ID: Enter the VLAN ID. The VLAN ID can be between 0 and 4094. Note The VLAN ID for the Uplink 1 and Uplink 2 networks must be unique and not used by any other network type.
n Portgroup Name: You cannot change the portgroup name prefix.
n CIDR Notation: Enter the CIDR notation for the management network only. Note VxRail Manager configures the vMotion and vSAN networks.
n Gateway: Enter the gateway IP for the management network only. Note VxRail Manager configures the vMotion and vSAN networks.
n MTU: Enter the MTU for the management network only. The MTU can be between 1500 and 9000. Note VxRail Manager configures the vMotion and vSAN networks.
System vSphere Distributed Switch Used for NSX Overlay and VLAN Traffic
In VxRail Manager, you can choose to create one or two vSphere Distributed Switches (vDS)
for system traffic and to map physical NICs (pNICs) to those vSphere Distributed Switches. The
following fields are used to specify which system vDS and vmnics to use for NSX traffic (NSX
Overlay, NSX VLAN, Edge Overlay, and Uplink networks). You can also choose to create two
additional vDSes to use for NSX traffic. The Transport Zone Type indicates the type of NSX
traffic the vDS will be associated with (Overlay, VLAN, or Overlay/VLAN).
System vSphere Distributed Switch - Name Enter the name of the vDS to use for overlay traffic.
System vSphere Distributed Switch - vmnics to be used for Enter the vmnics to use for overlay traffic.
overlay traffic
System vSphere Distributed Switch - Transport Zone Type Select Overlay, VLAN, or Overlay/VLAN.
Secondary System vSphere Distributed Switch for NSX Overlay and VLAN Traffic
Choose Yes to use a secondary system vDS for overlay/VLAN traffic.
Secondary System vSphere Distributed Switch - Name Enter the name of the secondary system vSphere
Distributed Switch (vDS).
Secondary System vSphere Distributed Switch - vmnics Enter the vmnics to assign to the secondary system vDS.
For example: vmnic4, vmnic5
Secondary System vSphere Distributed Switch - Transport Select Overlay, VLAN, or Overlay/VLAN.
Zone Type
New vSphere Distributed Switch - Name Enter a name for the new vSphere Distributed Switch
(vDS).
New vSphere Distributed Switch - vmnics Enter the vmnics to assign to the new vDS. For example:
vmnic4, vmnic5
New vSphere Distributed Switch - MTU Size Enter the MTU size for the new vDS. Default value is 9000.
New vSphere Distributed Switch - Transport Zone Type Select Overlay, VLAN, or Overlay/VLAN.
Enter host names and IP addresses for each of the four ESXi hosts.
1 In a web browser, log in to the ESXi host using the VMware Host Client.
2 In the navigation pane, click Manage and click the Services tab.
4 Connect to the VMware Cloud Builder appliance using an SSH client such as Putty.
5 Enter the admin credentials you provided when you deployed the VMware Cloud Builder
appliance.
6 Retrieve the ESXi SSH fingerprints by entering the following command replacing hostname
with the FQDN of the first ESXi host:
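One way to retrieve an SSH fingerprint, assuming the ssh-keygen and ssh-keyscan utilities are available on the appliance, is:

ssh-keygen -lf <(ssh-keyscan -t rsa hostname 2> /dev/null)

Replace hostname with the appropriate FQDN. The same approach can be reused for the vCenter Server and VxRail Manager SSH fingerprints in the later steps.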
7 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
9 Retrieve the vCenter Server SSH fingerprint by entering the following command replacing
hostname with the FQDN of your vCenter Server:
10 Retrieve the vCenter Server SSL thumbprint by entering the following command replacing
hostname with the FQDN of your vCenter Server:
openssl s_client -connect hostname:443 < /dev/null 2> /dev/null | openssl x509 -sha256
-fingerprint -noout -in /dev/stdin
11 Retrieve the VxRail Manager SSH fingerprint by entering the following command replacing
hostname with the FQDN of your VxRail Manager:
12 Retrieve the VxRail Manager SSL thumbprint by entering the following command replacing
hostname with the FQDN of your VxRail Manager:
openssl s_client -connect hostname:443 < /dev/null 2> /dev/null | openssl x509 -sha256
-fingerprint -noout -in /dev/stdin
For the management domain and VI workload domains with uniform L2 clusters, you can choose to use static IP addresses instead of DHCP for the NSX Host Overlay network. Make sure the IP range includes enough IP addresses for the number of hosts that will use the static IP pool. The number of IP addresses required depends on the number of pNICs on the ESXi hosts that are used for the vSphere Distributed Switch that handles host overlay networking. For example, a host with four pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
Parameter Value
VLAN ID Enter a VLAN ID for the NSX host overlay network. The
VLAN ID can be between 0 and 4094.
Configure NSX Host Overlay Using a Static IP Pool Select No to use DHCP.
Parameter Value
VLAN ID Enter a VLAN ID for the NSX host overlay network. The
VLAN ID can be between 0 and 4094.
Configure NSX Host Overlay Using a Static IP Pool Select Yes to use a static IP pool.
CIDR Notation Enter CIDR notation for the NSX Host Overlay network.
Gateway Enter the gateway IP address for the NSX Host Overlay
network.
NSX Host Overlay Start IP Enter the first IP address to include in the static IP pool.
NSX Host Overlay End IP Enter the last IP address to include in the static IP pool.
Parameter Value
Note If you have only one DNS server, enter n/a in this cell.
Note If you have only one NTP server, enter n/a in this cell.
Parameter Value
DNS Zone Name Enter root domain name for your SDDC management components.
Note VMware Cloud Foundation expects all components to be part of the same DNS zone.
Parameter Value
Enable Customer Experience Improvement Program (“CEIP”) Select an option to activate or deactivate CEIP across vSphere, NSX, and vSAN during bring-up.
Parameter Value
Enable FIPS Security Mode on SDDC Manager Select an option to activate or deactivate FIPS security mode during bring-up. VMware Cloud Foundation supports Federal Information Processing Standard (FIPS) 140-2. FIPS 140-2 is a U.S. and Canadian government standard that specifies security requirements for cryptographic modules. When you enable FIPS compliance, VMware Cloud Foundation enables FIPS cipher suites and components are deployed with FIPS enabled. To learn more about support for FIPS 140-2 in VMware products, see https://www.vmware.com/security/certifications/fips.html.
Note This option is only available for new VMware Cloud Foundation installations and the
setting you apply during bring-up will be used for future upgrades. You cannot change the
FIPS security mode setting after bring-up.
2 If you select Yes, in the License Keys section, update the red fields with your license keys.
Ensure the license key matches the product listed in each row and that the license key is valid
for the version of the product listed in the VMware Cloud Foundation BOM. The license key
audit during bring-up validates both the format and validity of the key.
Note When using the per-TiB license for vSAN, be aware that VI workload domain
components like vCenter and NSX Manager will also consume the TiB capacity.
3 If you select No, the VMware Cloud Foundation components are deployed in evaluation
mode.
Important After bring-up, you must switch to licensed mode by adding component license
keys in the SDDC Manager UI or adding and assigning a solution license key in the vSphere
Client. See the VMware Cloud Foundation Administration Guide for information about adding
component license keys in the SDDC Manager UI. See Managing vSphere Licenses for more
information about adding and applying a solution license key for VMware ESXi and vCenter
Server in the vSphere Client. If you are using a solution license key, you must also add a
separate VMware vSAN license key for vSAN clusters. See Configure License Settings for a
vSAN Cluster.
This section of the deployment parameter workbook contains sample configuration information,
but you can update them with names that meet your naming standards.
Note All host names entries within the deployment parameter workbook expect the short name.
VMware Cloud Builder takes the host name and the DNS zone provided to calculate the FQDN
value and performs validation prior to starting the deployment. The specified host names and IP
addresses must be resolvable using the DNS servers provided, both forward (hostname to IP)
and reverse (IP to hostname), otherwise the bring-up process will fail.
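For example, if the short name sfo-vcf01 is combined with the DNS zone rainpole.io (both placeholders), VMware Cloud Builder validates sfo-vcf01.rainpole.io. A quick manual check from any machine that uses the same DNS servers might be:

nslookup sfo-vcf01.rainpole.io   # forward lookup of the calculated FQDN
nslookup 172.16.11.59            # reverse lookup of the address returned above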
vCenter Server Enter a host name for the vCenter Server. Enter the IP address for the vCenter Server that is part of the management VLAN.
Parameter Value
Note You specify the vSphere Lifecycle Manager method (vSphere Lifecycle Manager images
or vSphere Lifecycle Manager baselines) for the vSAN cluster during the VxRail first run. vSAN
Express Storage Architecture (ESA) requires vSphere Lifecycle Manager images.
Select the architecture model you plan to use. If you choose Consolidated, specify the names for
the vSphere resource pools. You do not need to specify resource pool names if you are using the
standard architecture model. See Introducing VMware Cloud Foundation for more information
about these architecture models.
Parameter Value
Resource Pool SDDC Management Specify the vSphere resource pool name for management VMs.
Resource Pool User Edge Specify the vSphere resource pool name for user deployed NSX VMs in a consolidated architecture.
Resource Pool User VM Specify the vSphere resource pool name for user deployed workload VMs.
Note Resource pools are created with Normal CPU and memory shares.
Parameter Value
vSAN Datastore Name Enter the vSAN datastore name for your management components.
Note You specify the vSAN storage architecture (vSAN ESA or vSAN OSA) during the VxRail
first run. To use vSAN Express Storage Architecture (ESA) you must use vSphere Lifecycle
Manager images for managing the lifecycle of ESXi hosts in the primary cluster of management
domain.
If the VMware Cloud Builder appliance does not have direct internet access, you can configure a
proxy server to download the vSAN HCL JSON. A recent version of the HCL JSON file is required
for vSAN ESA.
Parameter Value
Proxy Username
Proxy Password
Parameter Value
NSX Management Cluster VIP Enter the host name and IP address for the NSX Manager VIP. The host name can match your naming standards but must be registered in DNS with both forward and reverse resolution matching the specified IP.
NSX Virtual Appliance Node #1 Enter the host name and IP address for the first node in the NSX Manager cluster.
NSX Virtual Appliance Node #2 Enter the host name and IP address for the second node in the NSX Manager cluster.
NSX Virtual Appliance Node #3 Enter the host name and IP address for the third node in the NSX Manager cluster.
NSX Virtual Appliance Size Select the size for the NSX Manager virtual appliances. The default is medium.
Parameter Value
SDDC Manager Hostname Enter a host name for the SDDC Manager VM.
SDDC Manager IP Address Enter an IP address for the SDDC Manager VM.
Cloud Foundation Management Domain Name Enter a name for the management domain. This name will appear in Inventory > Workload Domains in the SDDC Manager UI.
Chapter 6 Troubleshooting VMware Cloud Foundation Deployment
During the deployment stage of VMware Cloud Foundation you can use log files and the
Supportability and Serviceability (SoS) Tool to help with troubleshooting.
Note After a successful bring-up, you should only run the SoS Utility on the SDDC Manager
appliance. See Supportability and Serviceability (SoS) Tool in the VMware Cloud Foundation
Administration Guide.
The SoS Utility is not a debug tool, but it does provide health check operations that can facilitate
debugging a failed deployment.
To run the SoS Utility in VMware Cloud Builder, SSH in to the VMware Cloud Builder appliance using the admin account and enter su to switch to the root user. Then navigate to the /opt/vmware/sddc-support directory and run ./sos followed by the options required for your desired operation.
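A minimal sketch of such a session, assuming you are connected as admin:

su
cd /opt/vmware/sddc-support
./sos --help

The --help option lists the available operations; the log-collection and health-check options most relevant to a failed bring-up are described in the tables that follow.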
Option Description
--force Allows SoS operations from the VMware Cloud Builder appliance after bring-up.
Note In most cases, you should not use this option. Once bring-up is
complete, you can run the SoS Utility directly from the SDDC Manager
appliance.
--skip-known-host-check Skips the SSL thumbprint check for hosts in the known hosts file.
--no-clean-old-logs Use this option to prevent the tool from removing any output from a
previous collection run.
By default, before writing the output to the directory, the tool deletes
the prior run's output files that might be present. If you want to retain
the older output files, specify this option.
--rvc-logs Collects logs from the Ruby vSphere Console (RVC) only. RVC is an
interface for ESXi and vCenter.
Note If the Bash shell is not enabled in vCenter, RVC log collection will be skipped.
Note RVC logs are not collected by default with ./sos log collection.
Option Description
--jsongenerator-input JSONGENERATORINPUT Specify the path to the input file to be used by the JSON generator utility. For example: /tmp/vcf-ems-deployment-parameter.xlsx.
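As an illustrative sketch only, an invocation that converts a completed workbook to JSON might look like the following; the --jsongenerator flag shown here as the trigger for the utility is an assumption, and the exact companion options can vary between versions, so confirm them with ./sos --help:

./sos --jsongenerator --jsongenerator-input /tmp/vcf-ems-deployment-parameter.xlsx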
Note The health check options are primarily designed to run on the SDDC Manager appliance.
Running them on the VMware Cloud Builder appliance requires the --force parameter, which
instructs the SoS Utility to identify the SDDC Manager appliance deployed by VMware Cloud
Builder during the bring-up process, and then execute the health check remotely. For example:
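A representative invocation, built from the options in the table below, might be:

./sos --general-health --skip-known-host-check --force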
Option Description
--certificate-health Verifies that the component certificates are valid (within the expiry
date).
--general-health Verifies ESXi entries across all sources, checks the Postgres DB
operational status for hosts, checks ESXi for error dumps, and gets NSX
Manager and cluster status.
--ntp-health Verifies whether the time on the components is synchronized with the
NTP server in the VMware Cloud Builder appliance.
--run-vsan-checks Runs proactive vSAN tests to verify the ability to create VMs within the
vSAN disks.
Sample Output
The following text is a sample output from an --ntp-health operation.
User passed --force flag, Running SOS from Cloud Builder VM, although Bringup is completed
and SDDC Manager is available. Please expect failures with SoS operations.
Health Check : /var/log/vmware/vcf/sddc-support/healthcheck-2020-02-11-23-03-53-24681
Health Check log : /var/log/vmware/vcf/sddc-support/healthcheck-2020-02-11-23-03-53-24681/
sos.log
SDDC Manager : sddc-manager.vrack.vsphere.local
NTP : GREEN
+-----+-----------------------------------------+------------+-------+
| SL# | Area | Title | State |
+-----+-----------------------------------------+------------+-------+
| 1 | ESXi : esxi-1.vrack.vsphere.local | ESX Time | GREEN |
| 2 | ESXi : esxi-2.vrack.vsphere.local | ESX Time | GREEN |
| 3 | ESXi : esxi-3.vrack.vsphere.local | ESX Time | GREEN |
| 4 | ESXi : esxi-4.vrack.vsphere.local | ESX Time | GREEN |
| 5 | vCenter : vcenter-1.vrack.vsphere.local | NTP Status | GREEN |
+-----+-----------------------------------------+------------+-------+
Legend:
The following text is sample output from a --vm-screenshots log collection operation.
User passed --force flag, Running SOS from Cloud Builder VM, although Bringup is completed
and SDDC Manager is available. Please expect failures with SoS operations.
Logs : /var/log/vmware/vcf/sddc-support/sos-2018-08-24-10-50-20-8013
Log file : /var/log/vmware/vcf/sddc-support/sos-2018-08-24-10-50-20-8013/sos.log
Log Collection completed successfully for : [VMS_SCREENSHOT]
VMware Cloud Builder has a number of components that are used during the bring-up process. Each component generates a log file that can be used for troubleshooting. The components and their purpose are:
n JsonGenerator: Used to convert the deployment parameter workbook into the required
configuration file (JSON) that is used by the Bringup Validation Service and Bringup Service.
n Bringup Service: Used to perform the validation of the configuration file (JSON), the ESXi
hosts and infrastructure where VMware Cloud Foundation will be deployed, and to perform
the deployment and configuration of the management domain components and the first
cluster.
n Supportability and Serviceability (SoS) Utility: A command line utility for troubleshooting
deployment issues.
vcf-bringup-debug.log /var/log/vmware/vcf/bringup/
rest-api-debug.log /var/log/vmware/vcf/bringup/
Chapter 7 Getting Started with SDDC Manager
You use SDDC Manager to perform administration tasks on your VMware Cloud Foundation
instance. The SDDC Manager UI provides an integrated view of the physical and virtual
infrastructure and centralized access to manage the physical and logical resources.
You work with the SDDC Manager UI by loading it in a web browser. For the list of supported
browsers and versions, see the Release Notes.
Prerequisites
To log in, you need the SDDC Manager IP address or FQDN and the password for the single sign-on user (for example, administrator@vsphere.local). You added this information to the deployment parameter workbook before bring-up.
Procedure
n https://FQDN where FQDN is the fully-qualified domain name of the SDDC Manager
appliance.
2 Log in to the SDDC Manager UI with vCenter Server Single Sign-On user credentials.
Results
You are logged in to SDDC Manager UI and the Dashboard page appears in the web browser.
This dashboard appears when you log into SDDC Manager. It provides a walk-through for initial
configuration, including the recommended order for completing each task. After completing the
walk-through, a banner at the top of the screen offers a tour of the SDDC Manager UI.
You can skip sections and exit out of the guided setup at any point. This dashboard automatically
shows unless you click "Don't show onboarding screen again" and close the page. Clicking this
option also prevents the optional guided tour from automatically displaying in the future.
Use the Help Icon in the upper-right corner of the page to later access the onboarding dashboard
and guided tour.
You use the navigation bar to move between the main areas of the user interface.
Navigation Bar
The navigation bar is available on the left side of the interface and provides a hierarchy for
navigating to the corresponding pages.
Procedure
1 In the SDDC Manager UI, click the logged-in account name in the upper right corner.
Chapter 8 Configure the Customer Experience Improvement Program Settings for VMware Cloud Foundation
The Customer Experience Improvement Program provides VMware with information that allows
VMware to improve its products and services, to fix problems, and to advise you on how best to
deploy and use our products. As part of the CEIP, VMware collects technical information about
your organization’s use of the VMware products and services regularly in association with your
organization’s VMware license keys. This information does not personally identify any individual.
For additional information regarding the CEIP, refer to the Trust & Assurance Center at http://www.vmware.com/trustvmware/ceip.html.
You can activate or deactivate CEIP across all the components deployed in VMware Cloud Foundation by using the following methods:
n When you log in to SDDC Manager for the first time, a pop-up window appears. The Join the VMware Customer Experience Improvement Program option is selected by default. Deselect this option if you do not want to join CEIP. Click Apply.
n You can activate or deactivate CEIP from the Administration tab in the SDDC Manager UI.
Procedure
2 To activate CEIP, select the Join the VMware Customer Experience Improvement Program
option.
3 To deactivate CEIP, deselect the Join the VMware Customer Experience Improvement
Program option.
Chapter 9 Managing Certificates in VMware Cloud Foundation
You can use the SDDC Manager UI to manage certificates in a VMware Cloud Foundation
instance, including integrating a certificate authority, generating and submitting certificate signing
requests (CSR) to a certificate authority, and downloading and installing certificates.
Starting with VMware Cloud Foundation 5.2.1, you can also manage certificates using the vSphere
Client.
n vCenter Server
n NSX Manager
n VMware Avi Load Balancer (formerly known as NSX Advanced Load Balancer)
n SDDC Manager
n VxRail Manager
Note Use VMware Aria Suite Lifecycle to manage certificates for the other VMware Aria
Suite components.
It is recommended that you replace all certificates after completing the deployment of the
VMware Cloud Foundation management domain. After you create a new VI workload domain,
you can replace certificates for the appropriate components as needed.
n Install Third-Party CA-Signed Certificates Using Server Certificate and Certificate Authority
Files
The SDDC Manager UI provides a banner notification for any certificates that are expiring in the
next 30 days.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the domain you
want to view.
This tab lists the certificates for each resource type associated with the workload domain. It
displays the following details:
n Resource type
n Resource hostname
n Valid From
n Valid Until
4 To view certificate details, expand the resource next to the Resource Type column.
Complete the following tasks to manage Microsoft CA-signed certificates using SDDC Manager. You use SDDC Manager to generate certificate signing requests (CSRs) and request signed certificates from the Microsoft Certificate Authority. SDDC Manager is then used to install the signed certificates on the SDDC components it manages. To achieve this, the Microsoft Certificate Authority must be configured to allow integration with SDDC Manager.
Note When connecting SDDC Manager to Microsoft Active Directory Certificate Services, ensure
that Web Enrollment role is installed on the same machine where the Certificate Authority role
is installed. SDDC Manager can't request and sign certificates automatically if the two roles
(Certificate Authority and Web Enrollment roles) are installed on different machines.
Procedure
1 Log in to the Microsoft Certificate Authority server by using a Remote Desktop Protocol
(RDP) client.
Password ad_admin_password
b From the Dashboard, click Add roles and features to start the Add Roles and Features
wizard.
f On the Select server roles page, under Active Directory Certificate Services, select
Certification Authority and Certification Authority Web Enrollment and click Next.
Prerequisites
The Microsoft Certificate Authority and IIS must be installed on the same server.
Procedure
1 Log in to the Active Directory server by using a Remote Desktop Protocol (RDP) client.
Password ad_admin_password
b From the Dashboard, click Add roles and features to start the Add Roles and Features
wizard.
f On the Select server roles page, under Web Server (IIS) > Web Server > Security, select
Basic Authentication and click Next.
3 Configure the certificate service template and CertSrv web site for basic authentication.
a Click Start > Run, enter Inetmgr.exe and click OK to open the Internet Information
Services Application Server Manager.
b Navigate to your_server > Sites > Default Web Site > CertSrv.
f In the Actions pane, under Manage Website, click Restart for the changes to take effect.
Procedure
1 Log in to the Active Directory server by using a Remote Desktop Protocol (RDP) client.
Password ad_admin_password
3 In the Certificate Template Console window, under Template Display Name, right-click Web
Server and select Duplicate Template.
4 In the Properties of New Template dialog box, click the Compatibility tab and configure the
following values.
Setting Value
5 In the Properties of New Template dialog box, click the General tab and enter a name for
example, VMware in the Template display name text box.
6 In the Properties of New Template dialog box, click the Extensions tab and configure the
following.
d Click the Enable this extension check box and click OK.
f Click the Signature is proof of origin (nonrepudiation) check box, leave the defaults for
all other options and click OK.
7 In the Properties of New Template dialog box, click the Subject Name tab, ensure that the
Supply in the request option is selected, and click OK to save the template.
8 Add the new template to the certificate templates of the Microsoft CA.
b In the Certification Authority window, expand the left pane, right-click Certificate
Templates, and select New > Certificate Template to Issue.
c In the Enable Certificate Templates dialog box, select VMware, and click OK.
Prerequisites
n Create a user account in Active Directory with Domain Users membership. For example,
svc-vcf-ca.
Procedure
1 Log in to the Microsoft Certificate Authority server by using a Remote Desktop Protocol
(RDP) client.
Password ad_admin_password
2 Configure least privilege access for a user account on the Microsoft Certificate Authority.
e In the Permissions for .... section configure the permissions and click OK.
Read Deselected
Manage CA Deselected
3 Configure least privilege access for the user account on the Microsoft Certificate Authority
Template.
e In the Permissions for .... section configure the permissions and click OK.
Read Selected
Write Deselected
Enroll Selected
Autoenroll Deselected
Prerequisites
n Verify connectivity between SDDC Manager and the Microsoft Certificate Authority Server.
See VMware Ports and Protocols.
n Verify that the Microsoft Certificate Authority Server has the correct roles installed on the
same machine where the Certificate Authority role is installed. See Install Microsoft Certificate
Authority Roles.
n Verify the Microsoft Certificate Authority Server has been configured for basic authentication.
See Configure the Microsoft Certificate Authority for Basic Authentication.
n Verify a valid certificate template has been configured on the Microsoft Certificate Authority.
See Create and Add a Microsoft Certificate Authority Template.
n Verify least privileged user account has been configured on the Microsoft Certificate
Authority Server and Template. See Assign Certificate Management Privileges to the SDDC
Manager Service Account.
n Verify that time is synchronized between the Microsoft Certificate Authority and the SDDC
Manager appliance. Each system can be configured with a different timezone, but it is
recommended that they receive their time from the same NTP source.
Procedure
2 Click Edit.
Setting Value
CA Server URL Specify the URL for the issuing certificate authority. This address must begin with https:// and end with certsrv. For example, https://ca.rainpole.io/certsrv.
Template Name Enter the issuing certificate template name. You must create this template in Microsoft Certificate Authority. For example, VMware.
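As a quick connectivity check, assuming the curl utility is available on the SDDC Manager appliance and reusing the example names from this section (ca.rainpole.io and the svc-vcf-ca service account; the domain prefix is a placeholder), you can confirm that the Web Enrollment endpoint answers and accepts basic authentication:

curl -k -u 'RAINPOLE\svc-vcf-ca' -o /dev/null -w '%{http_code}\n' https://ca.rainpole.io/certsrv/

An HTTP 200 response indicates that the endpoint is reachable and the credentials are accepted; a 401 response indicates an authentication problem.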
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to
generate a CSR.
Option Description
Key Size Select the key size (2048 bit, 3072 bit, or 4096 bit)
from the drop-down menu.
Organization Name Type the name under which your company is known.
The listed organization must be the legal registrant of
the domain name in the certificate request.
State Type the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
a From the table, select the check box for the resource type for which you want to
generate a signed certificate for.
c In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select Microsoft.
a From the table, select the check box for the resource type for which you want to install a
signed certificate.
Complete the following tasks to be able to manage OpenSSL-signed certificates issued by SDDC
Manager.
Procedure
2 Click Edit.
Setting Value
State Enter the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to
generate a CSR.
Option Description
Key Size Select the key size (2048 bit, 3072 bit, or 4096 bit)
from the drop-down menu.
Organization Name Type the name under which your company is known.
The listed organization must be the legal registrant of
the domain name in the certificate request.
State Type the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
You can enter multiple values separated by comma (,), semicolon (;), or space ( ). For
NSX, you can enter the subject alternative name for each node along with the Virtual IP
(primary) node.
a From the table, select the check box for the resource type for which you want to
generate a signed certificate.
c In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select OpenSSL.
a From the table, select the check box for the resource type for which you want to install a
signed certificate.
If you prefer to use the legacy method for installing third-party CA-signed certificates, see Install
Third-Party CA-Signed Certificates in VMware Cloud Foundation Using a Certificate Bundle.
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to
generate a CSR.
Option Description
Key Size Select the key size (2048 bit, 3072 bit, or 4096 bit)
from the drop-down menu.
Organization Name Type the name under which your company is known.
The listed organization must be the legal registrant of
the domain name in the certificate request.
State Type the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
You can enter multiple values separated by comma (,), semicolon (;), or space ( ). For
NSX, you can enter the subject alternative name for each node along with the Virtual IP
(primary) node.
Note Wildcard subject alternative names, such as *.example.com, are not recommended.
6 When the downloads complete, request signed certificates from your third-party Certificate
Authority for each .csr.
7 After you receive the signed certificates, open the SDDC Manager UI and click Upload and
Install.
8 In the Install Signed Certificates dialog box, select the resource for which you want to install
a signed certificate.
The drop-down menu includes all resources for which you have generated and downloaded
CSRs.
-----BEGIN CERTIFICATE-----
<certificate content>
-----END CERTIFICATE------
-----BEGIN CERTIFICATE-----
<Intermediate certificate content>
-----END CERTIFICATE------
-----BEGIN CERTIFICATE-----
<Root certificate content>
-----END CERTIFICATE-----
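Before you click Validate, you can optionally sanity-check the chain with openssl. A minimal sketch, assuming the server, intermediate, and root certificates are also available as separate files with the hypothetical names below:

openssl verify -CAfile rootca.crt -untrusted intermediate.crt server.crt

A response of server.crt: OK indicates that the chain is consistent.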
10 Click Validate.
If validation fails, resolve the issues and try again, or click Remove to skip the certificate
installation.
11 To install a signed certificate for another resource, click Add Another and repeat steps 8-10
for each resource.
12 Once all signed certificates have been validated successfully, click Install.
Prerequisites
VMware Cloud Foundation 4.5.1 introduces a new method for installing third-party CA-signed certificates. By default, VMware Cloud Foundation uses the new method. See Install Third-Party CA-Signed Certificates Using Server Certificate and Certificate Authority Files for information about using the new method. If you prefer to use the legacy method, you must modify your preferences.
1 In the SDDC Manager UI, click the logged in user and select Preferences.
Uploading CA-signed certificates from a third-party Certificate Authority using the legacy method
requires that you collect the relevant certificate files in the correct format and then create a
single .tar.gz file with the contents. It's important that you create the correct directory structure
within the .tar.gz file as follows:
n The name of the top-level directory must exactly match the name of the workload domain as
it appears in the list on the Inventory > Workload Domains. For example, sfo-m01.
n The PEM-encoded root CA certificate chain file (must be named rootca.crt) must reside inside this top-level directory. The rootca.crt chain file contains a root certificate authority and can have any number of intermediate certificates.
For example:
-----BEGIN CERTIFICATE-----
<Intermediate1 certificate content>
-----END CERTIFICATE------
-----BEGIN CERTIFICATE-----
<Intermediate2 certificate content>
-----END CERTIFICATE------
-----BEGIN CERTIFICATE-----
<Root certificate content>
-----END CERTIFICATE-----
In the above example, there are two intermediate certificates, intermediate1 and
intermediate2, and a root certificate. Intermediate1 must use the certificate issued by
intermediate2 and intermediate2 must use the certificate issued by Root CA.
n The root CA certificate chain file, intermediate certificates, and root certificate must
contain the Basic Constraints field with value CA:TRUE.
n This directory must contain one sub-directory for each component resource for which
you want to replace the certificates.
n Each sub-directory must contain the corresponding .csr file, whose name must exactly
match the resource as it appears in the Resource Hostname column in the Inventory >
Workload Domains > Certificates tab.
n Each sub-directory must contain a corresponding .crt file, whose name must exactly
match the resource as it appears in the Resource Hostname column in the Inventory
> Workload Domains > Certificates tab. The content of the .crt files must end with a
newline character.
n Server certificate (NSX_FQDN.crt) must contain the Basic Constraints field with value
CA:FALSE.
n If the NSX certificate contains HTTP or HTTPS based CRL Distribution Point it must be
reachable from the server.
n The extended key usage (EKU) of the generated certificate must contain the EKU of the
CSR generated.
Note All resource and hostname values can be found in the list on the Inventory > Workload
Domains > Certificates tab.
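For example, for a workload domain named sfo-m01 with a vCenter Server and an NSX Manager resource (all host names are placeholders), the unpacked structure might look like this:

sfo-m01/
    rootca.crt
    sfo-vc01.rainpole.io/
        sfo-vc01.rainpole.io.csr
        sfo-vc01.rainpole.io.crt
    sfo-nsx01.rainpole.io/
        sfo-nsx01.rainpole.io.csr
        sfo-nsx01.rainpole.io.crt

You can also confirm the Basic Constraints value mentioned above for an individual certificate file with a command such as:

openssl x509 -in sfo-m01/sfo-vc01.rainpole.io/sfo-vc01.rainpole.io.crt -noout -text | grep -A1 'Basic Constraints'

Because openssl x509 reads only the first certificate in a file, check each certificate in the rootca.crt chain separately if needed.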
Procedure
2 On the Workload Domains page, from the table, in the domain column click the workload
domain you want to view.
a From the table, select the check box for the resource type for which you want to
generate a CSR.
Option Description
Key Size Select the key size (2048 bit, 3072 bit, or 4096 bit)
from the drop-down menu.
Organization Name Type the name under which your company is known.
The listed organization must be the legal registrant of
the domain name in the certificate request.
State Type the full name (do not abbreviate) of the state,
province, region, or territory where your company is
legally registered.
d (Optional) On the Subject Alternative Name dialog, enter the subject alternative name(s)
and click Next.
You can enter multiple values separated by comma (,), semicolon (;), or space ( ). For
NSX, you can enter the subject alternative name for each node along with the Virtual IP
(primary) node.
Note Wildcard subject alternative names, such as *.example.com, are not recommended.
5 Download and save the CSR files to the directory by clicking Download CSR.
a Verify that the different .csr files have been successfully generated and are placed in the required directory structure.
b Request signed certificates from a Third-party Certificate authority for each .csr.
c Verify that the newly acquired .crt files are correctly named and allocated in the required
directory structure.
d Create a new .tar.gz file of the directory structure ready for upload to SDDC Manager. For
example: <domain name>.tar.gz.
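As a minimal sketch, assuming the top-level directory is named after a workload domain called sfo-m01, the archive can be created from the directory's parent folder with:

tar -czf sfo-m01.tar.gz sfo-m01/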
8 In the Upload and Install Certificates dialog box, click Browse to locate and select the newly
created <domain name>.tar.gz file and click Open.
9 Click Upload.
10 If the upload is successful, click Install Certificate. The Certificates tab displays a status of
Certificate Installation is in progress.
See Delete Trusted Certificate in the VMware Cloud Foundation API Reference Guide for more
information.
Procedure
For more information about roles, see Chapter 24 Managing Users and Groups in VMware
Cloud Foundation.
5 In the Response, click TrustedCertificate and copy the alias for the certificate you want
to remove.
Chapter 10 Managing License Keys in VMware Cloud Foundation
You can add component license keys in the SDDC Manager UI or add a solution license key in
vSphere Client.
Starting with VMware Cloud Foundation 5.1.1, you can license VMware Cloud Foundation
components using a solution license key or individual component license keys.
Note VMware Cloud Foundation 5.1.1 supports a combination of solution and component license
keys. For example, Workload Domain 1 can use component license keys and Workload Domain
2 can use the solution license key.
For more information about the VCF solution license key, VMware vSphere 8 Enterprise Plus for
VCF, see https://knowledge.broadcom.com/external/article?articleNumber=319282.
SDDC Manager does not manage the solution license key. If you are using a solution license
key, VMware Cloud Foundation components are deployed in evaluation mode and then you
use the vSphere Client to add and assign the solution key. See Managing vSphere Licenses for
information about using a solution license key for VMware ESXi and vCenter Server. If you are
using a solution license key, you must also add a separate VMware vSAN license key for vSAN
clusters. See Configure License Settings for a vSAN Cluster.
Note VMware vCenter Server, VMware NSX, VMware Aria Suite components, and VMware HCX
are all licensed when you assign a solution license key to a vCenter Server.
Use the SDDC Manager UI to manage component license keys. If you entered component license
keys in the deployment parameter workbook that you used to create the management domain,
those component license keys appear in the Licensing screen of the SDDC Manager UI. You can
add additional component license keys to support your requirements. You must have adequate
license units available before you create a VI workload domain, add a host to a vSphere cluster,
or add a vSphere cluster to a workload domain. Add the necessary component license keys
before you begin any of these tasks.
SDDC Manager does not manage solution license keys. See Chapter 10 Managing License Keys in
VMware Cloud Foundation for more information about solution license keys.
Procedure
6 Click Add.
What to do next
If you want to replace an existing license with a newly added license, you must add and assign
the new license in the management UI (for example, vSphere Client or NSX Manager) of the
component whose license you are replacing.
Procedure
2 Click the vertical ellipsis (three dots) next to the license key and click Edit Description.
3 On the Edit License Key Description dialog, edit the description and click Save.
Procedure
2 Click the vertical ellipsis (three dots) next to the license key you want to delete and click
Remove.
Results
The component license key is removed from the SDDC Manager inventory.
You can update the component license keys for a workload domain for the following components:
n vCenter Server
n VMware NSX
n VMware vSAN
n ESXi
Updates are specific to the selected workload domain. If you want to update component license
keys for multiple workload domains, you must update each workload domain separately.
Prerequisites
The new component license key(s) must already be added to the SDDC Manager inventory. See
Add a Component License Key in the SDDC Manager UI.
Procedure
For VMware vSAN and ESXi, you must select the clusters that you want to update with new
license keys.
Chapter 11 ESXi Lockdown Mode
You can activate or deactivate normal lockdown mode in VMware Cloud Foundation to increase
the security of your ESXi hosts.
To activate or deactivate normal lockdown mode in VMware Cloud Foundation, you must
perform operations through the vCenter Server. For information on how to activate or
deactivate normal lockdown mode, see "Lockdown Mode" in vSphere Security at https://
docs.vmware.com/en/VMware-vSphere/index.html.
You can activate normal lockdown mode on a host after the host is added to a workload domain. VMware Cloud Foundation creates service accounts that can be used to access the hosts. Service accounts are added to the Exception Users list during the bring-up or host
commissioning. You can rotate the passwords for the service accounts using the password
management functionality in the SDDC Manager UI.
Chapter 12 Managing Storage in VMware Cloud Foundation
To create and manage a workload domain, VMware Cloud Foundation requires at least one
shared storage type for all ESXi hosts within a cluster. This initial shared storage type, known
as principal storage, is configured during VxRail first run. Additional shared storage, known as
supplemental storage, can be added using the vSphere Client after a cluster has been created.
Principal Storage
During the VxRail first run, you configure the initial shared storage type. This initial shared
storage type is known as principal storage. Once created, the principal storage type for a cluster
cannot be changed. However, a VI workload domain can include multiple clusters with unique
principal storage types.
n vSAN
Note You cannot convert vSAN OSA to vSAN ESA or vice versa.
Supplemental Storage
Additional shared storage, known as supplemental storage, can be manually added or removed
using the vSphere Client after a cluster has been created. All supplemental storage must be listed
in the VMware Compatibility Guide. Multiple supplemental storage types can be presented to a
cluster in the management domain or any VI workload domain.
VMware Cloud Foundation supports using the vSphere Client to add the following datastore
types to a cluster:
n vSphere VMFS
Storage Type    Consolidated Workload Domain    Management Domain    VI Workload Domain
Supplemental    No                              No                   No
n A minimum of three ESXi hosts that meet the vSAN hardware, cluster, software, networking
and license requirements. For information, see the vSAN Planning and Deployment Guide.
n Perform a VxRail first run specifying the vSAN configuration settings. For information on the
VxRail first run, contact Dell Support.
n A valid vSAN license. See Chapter 10 Managing License Keys in VMware Cloud Foundation.
You cannot use vSAN ESA without a qualifying license.
In some instances, SDDC Manager may be unable to automatically mark the host disks as capacity. In that case, follow the Mark Flash Devices as Capacity Using ESXCLI procedure in the vSAN Planning and Deployment Guide.
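For reference, the following is a minimal sketch of the ESXCLI commands used in the referenced procedure. The device identifier shown is illustrative; replace it with the actual device name reported on your host.
esxcli storage core device list                                        # identify the device name of the flash device
esxcli vsan storage tag add -d naa.5000000000000001 -t capacityFlash   # tag the device as a vSAN capacity device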
n To use vSAN as principal storage for a new cluster, perform the VxRail first run and then add
the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the SDDC Manager
UI.
vSAN OSA clusters can mount a remote datastore from other vSAN OSA clusters.
vSAN ESA clusters can mount a remote datastore from other vSAN ESA clusters.
n A direct or proxy internet connection OR a downloaded copy of the vSAN HCL JSON file.
Note SDDC Manager will keep the HCL file updated if it has a direct or proxy internet connection.
A vSAN compute cluster can mount a datastore from one of the following cluster types:
n vSAN OSA
n vSAN ESA
Once you mount a remote datastore on a vSAN compute cluster, you can only mount additional
datastores of the same cluster type.
Note Datastores on clusters created outside of VMware Cloud Foundation cannot be mounted on VCF-created clusters. Likewise, clusters created outside of VMware Cloud Foundation cannot mount a datastore from a VCF-created cluster.
Fibre Channel can be used as supplemental storage for the management domain and consolidated workload domains; however, it can be used as principal storage for VI workload domains and can also be used as principal storage in a management domain converted from vSphere infrastructure.
[Table: VMFS on FC support as principal and supplemental storage for the consolidated workload domain, management domain, and VI workload domain.]
Note If you are using VMFS on FC as principal storage, and your VI workload domain is
using vSphere Lifecycle Manager images as the update method, then only two hosts are
required. Workload Management requires a vSphere cluster with a minimum of three ESXi
hosts.
n Perform a VxRail first run specifying the VMFS on FC configuration settings. For information
on the VxRail first run, contact Dell Support.
n To use Fibre Channel as principal storage for a new cluster, perform the VxRail first run and
then add the VxRail cluster. See Add a VxRail Cluster to a Workload Domain Using the SDDC
Manager UI
n To use Fibre Channel as supplemental storage, see the vSphere Storage Guide.
VMware Cloud Foundation supports sharing remote datastores with HCI Mesh for VI workload
domains.
You can create HCI Mesh by mounting remote vSAN datastores on vSAN clusters and enabling data sharing from the vCenter Server. It can take up to 5 minutes for the mounted remote vSAN datastores to appear in the SDDC Manager UI.
It is recommended that you do not mount or configure remote vSAN datastores for vSAN
clusters in the management domain.
For more information on sharing remote datastores with HCI Mesh, see Sharing Remote
Datastores with HCI Mesh.
Note After enabling HCI Mesh by mounting remote vSAN datastores, you can migrate VMs from
the local datastore to a remote datastore. Since each cluster has its own VxRail Manager VM, you
should not migrate VxRail Manager VMs to a remote datastore.
Chapter 13 Managing Workload Domains in VMware Cloud Foundation
Workload domains are logical units that carve up the compute, network, and storage resources
of the VMware Cloud Foundation system. The logical units are groups of ESXi hosts managed by
vCenter Server instances with specific characteristics for redundancy and VMware best practices.
n VMware vSAN
n Tag Management
When you deploy a new VI workload domain, VMware Cloud Foundation deploys a new vCenter
Server for that workload domain. The vCenter Server is associated with a vCenter Single Sign-On
Domain (SSO) to determine the local authentication space. Prior to VMware Cloud Foundation
5.0, the management vCenter Server and all VI workload domain vCenter Servers were members
of a single vSphere SSO domain, joined together with vCenter Enhanced Linked Mode. Starting
with VMware Cloud Foundation 5.0, when you deploy a new VI workload domain, you can
choose to join the management domain SSO domain, or create a new SSO domain.
n Deploys a vCenter Server Appliance for the new VI workload domain within the management
domain. By using a separate vCenter Server instance per VI workload domain, software
updates can be applied without impacting other VI workload domains. It also allows for each
VI workload domain to have additional isolation as needed.
n For the first VI workload domain, the workflow deploys a cluster of three NSX Managers
in the management domain and configures a virtual IP (VIP) address for the NSX Manager
cluster. The workflow also configures an anti-affinity rule between the NSX Manager VMs
to prevent them from being on the same host for high availability. Subsequent VI workload
domains can share an existing NSX Manager cluster or deploy a new one. To share an
NSX Manager cluster, the VI workload domains must use the same update method. The VI
workload domains must both use vSphere Lifecycle Manager (vLCM) images, or they must
both use vLCM baselines.
n By default, VI workload domains do not include any NSX Edge clusters and are isolated.
To provide north-south routing and network services, add one or more NSX Edge clusters
to a VI workload domain. See Chapter 14 Managing NSX Edge Clusters in VMware Cloud
Foundation .
Note Starting with VMware Cloud Foundation 5.2, when you deploy a new VI workload domain,
it uses the same versions of vCenter Server and NSX Manager that the management domain
uses. For example, if you applied an async patch to the vCenter Server in the management
domain, a new VI workload domain will deploy the same patched version of vCenter Server.
n If you plan to use DHCP for the NSX host overlay network, a DHCP server must be configured
on the NSX host overlay VLAN for the VI workload domain. When VMware NSX creates NSX
Edge tunnel endpoints (TEPs) for the VI workload domain, they are assigned IP addresses
from the DHCP server.
Note If you do not plan to use DHCP, you can use a static IP pool for the NSX host overlay
network. The static IP pool is created or selected as part of VI workload domain creation.
Note If you are using VMFS on FC as principal storage, and the VI workload domain is using
vSphere Lifecycle Manager images as the update method, then only two hosts are required.
Workload Management requires a vSphere cluster with a minimum of three ESXi hosts.
n The install bundles for the versions of NSX Manager and vCenter Server that are running
in the management domain must be available in SDDC Manager before you can create
a VI workload domain. For example, if you have patched the versions of NSX Manager
and/or vCenter Server in the management domain to a version higher than what is
listed in the BOM, you must download the new install bundles. You can refer to https://
knowledge.broadcom.com/external/article?legacyId=88287 for information about the install
bundles required for specific async patches.
n Decide on a name for your VI workload domain. Each VI workload domain must have a
unique name. It is good practice to include the region and site information in the name
because resource object names (such as host and vCenter names) are generated based on
the VI workload domain name. The name can be three to 20 characters long and can contain
any combination of the following:
n Numbers
Note Spaces are not allowed in any of the names you specify when creating a VI workload
domain.
Although the individual VMware Cloud Foundation components support different password requirements, you must set passwords following a common set of requirements across all components (a minimal shell sketch for checking these rules follows this list):
n Minimum length: 12
n Maximum length: 16
n At least one lowercase letter, one uppercase letter, a number, and one of the following special characters: ! @ # $ ^ *
n No dictionary words
n No palindromes
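The following is a minimal sketch, in shell, for checking a candidate password against the length and character-class rules listed above. It is illustrative only and does not implement the dictionary-word or palindrome checks.
check_vcf_pw() {
  local pw="$1"
  local len=${#pw}
  [ "$len" -ge 12 ] && [ "$len" -le 16 ] || { echo "length must be 12-16 characters"; return 1; }
  printf '%s' "$pw" | grep -q '[a-z]'    || { echo "needs a lowercase letter"; return 1; }
  printf '%s' "$pw" | grep -q '[A-Z]'    || { echo "needs an uppercase letter"; return 1; }
  printf '%s' "$pw" | grep -q '[0-9]'    || { echo "needs a number"; return 1; }
  printf '%s' "$pw" | grep -q '[!@#$^*]' || { echo "needs one of ! @ # $ ^ *"; return 1; }
  echo "meets the basic length and character rules"
}
# Example: check_vcf_pw 'VMware123!abc'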
n Verify that you have the completed Planning and Preparation Workbook with the VI
workload domain deployment option included.
n The IP addresses and Fully Qualified Domain Names (FQDNs) for the vCenter Server and NSX
Manager instances must be resolvable by DNS.
n If you are using VMFS on FC storage for the VI workload domain, you must configure zoning,
mount the associated volumes and create the datastore on the hosts.
n To use the License Now option, you must have valid license keys for the following products:
n VMware NSX
n vSphere
Because vSphere and vSAN licenses are per CPU, ensure that you have sufficient licenses
for the ESXi hosts to be used for the VI workload domain. See Chapter 10 Managing
License Keys in VMware Cloud Foundation.
n If you plan to deploy a VI workload domain that has its vSphere cluster at a remote location,
you must meet the following requirements:
n Dedicated WAN connectivity is required between central site and remote site.
n Primary and secondary active WAN links are recommended for connectivity from the central site to the remote site. Without redundant WAN links, two failure states, WAN link failure or NSX Edge node failure, can result in unrecoverable VMs and application failure at the remote site.
n A minimum bandwidth of 10 Mbps and a maximum latency of 100 ms are required between the central site and the remote site. The network at the remote site must be able to reach the management network at the central site. DNS and NTP servers must be available locally at, or reachable from, the remote site.
n See VMware Cloud Foundation Edge Design Considerations for more information about
design options for deploying scalable edge solutions.
Prerequisites
n The VxRail Manager static IP address, 192.168.10.200, must be reachable and the VxRail Manager UI must be available.
Procedure
What to do next
Update the VxRail Manager certificate. See Update the VxRail Manager Certificate.
Prerequisites
Procedure
1 Using SSH, log in to the VxRail Manager VM using the management IP address, with the user name mystic and the default mystic password.
2 Type su to switch to the root account and enter the default root password.
3 Generate the new certificate by running the following script, replacing VxRail-Manager-FQDN with the FQDN of the VxRail Manager:
./generate_ssl.sh VxRail-Manager-FQDN
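After the script completes, you can optionally verify the certificate that the VxRail Manager UI now presents. This check is a suggestion rather than part of the documented procedure; the hostname is illustrative.
openssl s_client -connect vxrail-manager.example.com:443 -servername vxrail-manager.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates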
When you use the product UI, you complete the steps in the SDDC Manager UI. Starting with
VMware Cloud Foundation 5.1, you can use the SDDC Manager UI to create a VI workload domain
with advanced switch configurations.
Alternatively, you can use the Workflow Optimization script to create a VI workload domain.
See Create a VxRail VI Workload Domain Using the Workflow Optimization Script. The Workflow
Optimization script supports using a JSON file for cluster configuration.
To create a VI workload domain that uses a static IP pool for the Host Overlay Network TEPs
for L3 aware and stretch clusters, you must use the VMware Cloud Foundation API. See Create a
Domain in the VMware Cloud Foundation on Dell VxRail API Reference Guide.
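As a hedged illustration only, the API-driven flow typically validates a domain creation specification and then submits it. The endpoint paths, hostname, token variable, and the contents of the JSON specification file are assumptions; use the VMware Cloud Foundation on Dell VxRail API Reference Guide as the authoritative source for the request body.
# Validate the domain creation specification (spec file contents per the API Reference Guide)
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d @vi-domain-spec.json https://sddc-manager.example.com/v1/domains/validations
# Create the VI workload domain once validation succeeds
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d @vi-domain-spec.json https://sddc-manager.example.com/v1/domains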
The SDDC Manager UI supports running multiple VxRail VI workload domain creation tasks in
parallel.
Procedure
2 Click + Workload Domain and then select VI-VxRail Virtual Infrastructure Setup.
3 Make sure the prerequisites are met. See Prerequisites for a Workload Domain. To continue, click GET STARTED.
4 Select the type of storage to use for this workload domain. Click SELECT.
Note vSAN Express Storage Architecture (ESA) requires vSphere Lifecycle Manager images.
Option Description
General Info Provide basic information about the workload domain, including the SSO
domain. When you create a VI workload domain, you can join it to the
management domain's vCenter Single Sign-On domain or a new vCenter
Single Sign-On domain that is not used by any other workload domain.
Joining a new vCenter Single Sign-On domain enables a VI workload domain
to be isolated from the other workload domains in your VMware Cloud
Foundation instance. The vCenter Single Sign-On domain for a VI workload
domain determines the local authentication space.
n Virtual Infrastructure Name - The name must be unique and contain
between 3 and 20 characters. The VI name can include letters, numbers,
and hyphens, but it cannot include spaces.
n Datacenter Name
n SSO domain
n Create New SSO Domain
Note
n vLCM images are managed by VxRail Manager.
n vSAN Express Storage Architecture (ESA) requires vSphere Lifecycle
Manager images.
n Two-node clusters are not supported in a VI workload domain that
uses vSphere Lifecycle Manager baselines.
If you are creating a new SSO domain, provide the following information:
n Enter the domain name, for example mydomain.local.
Note Ensure that the domain name does not contain any upper-case
letters.
n Set the password for the SSO administrator account.
Option Description
Host Selection Add ESXi hosts with similar or identical configurations across all cluster
members, including similar or identical storage configurations. A minimum
of 3 hosts are required.
a Select the ESXi hosts to add and click Provide Host Details.
b Enter the FQDNs and passwords for the hosts.
c Click Resolve Hosts IP address.
d Click Next.
Cluster Enter a name for the first cluster in the new workload domain.
The name must be unique and contain between 3 and 80 characters. The
cluster name can include letters, numbers, and hyphens, and it can include
spaces.
Option Description
Networking Provide information about the NSX Manager cluster to use with the VI
workload domain. If you already have an NSX Manager cluster for a different
VI workload domain, you can reuse that NSX Manager cluster or create a
new one.
n Create New NSX instance
Note
n You must create an NSX Manager instance if this is the first VI
workload domain in your VMware Cloud Foundation instance.
n You must create a new NSX Manager instance if your VI workload
domain is joining a new SSO domain.
Note
n You cannot share an NSX Manager instance between VI workload domains that are in different SSO domains.
n If you are creating a new SSO domain for the VI workload domain, the NSX Manager instance cannot be shared with VI workload domains in different SSO domains.
n In order to share an NSX Manager instance, the VI workload domains
must use the same update method. The VI workload domains must
both use vSphere Lifecycle Manager baselines or they must both use
vSphere Lifecycle Manager images.
Note NSX Managers for workload domains that are in the process
of deploying are not able to be shared and do not appear in the list
of available NSX Managers.
Option Description
Switch Configuration Provide the distributed switch configuration to be applied to the hosts in
the VxRail cluster. Select a predefined vSphere distributed switch (VDS)
configuration profile or create a custom switch configuration.
For custom switch configuration, specify:
n VDS name
n MTU
n Number of uplinks
n Uplink to vmnic mapping
Click Configure Network Traffic to configure the following networks:
n Management
n vMotion
n vSAN
n Host Discovery
n System VM
For each network, specify:
n Distributed port group name
n MTU
n Load balancing policy
n Active and standby links
For the NSX network, specify:
n Operational mode
n Transport zone type
n NSX-Overlay Transport Zone Name
n For NSX Overlay, enter a VLAN ID and select the IP assignment type for
the Host Overlay Network TEPs.
Note For DHCP, a DHCP server must be configured on the NSX host
overlay (Host TEP) VLAN. When NSX creates TEPs for the VI workload
domain, they are assigned IP addresses from the DHCP server.
For static IP Pool, you can re-use an existing IP pool or create a new one.
Make sure the IP range includes enough IP addresses for the number
of hosts that will use the static IP Pool. The number of IP addresses
required depends on the number of pNICs on the ESXi hosts that
are used for the vSphere Distributed Switch that handles host overlay
networking. For example, a host with four pNICs that uses two pNICs for
host overlay traffic requires two IP addresses in the static IP pool.
n Teaming policy uplink mapping
n NSX Uplink Profile Name
n Teaming policy
n Active and standby links
Option Description
Note After you assign a solution key for vCenter Server, VMware
NSX automatically uses that solution license key.
6 On the Validation page, wait until all of the inputs have been successfully validated and then
click Finish.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your
settings and try again.
The Workflow Optimization script uses the VMware Cloud Foundation on Dell VxRail API to perform all of the steps to create a VI workload domain in one place. See Create a Domain with Workflow Optimization for more information about the API.
Prerequisites
Make sure that the Prerequisites for a Workload Domain are met before using the Workflow
Optimization script.
Procedure
See https://community.broadcom.com/vmware-code/viewdocument/vcf-on-vxrail-workflow-
optimization-8.
Deleting a VI workload domain also removes the components associated with the VI workload
domain from the management domain. This includes the vCenter Server instance and the NSX
Manager cluster instances.
Note If the NSX Manager cluster is shared with any other VI workload domains, it will not be
deleted.
Caution Deleting a workload domain is an irreversible operation. All clusters and virtual
machines within the VI workload domain are deleted and the underlying datastores are
destroyed.
It can take up to 20 minutes for a VI workload domain to be deleted. During this process, you
cannot perform any operations on workload domains.
Prerequisites
n If remote vSAN datastores are mounted on a cluster in the VI workload domain, then the
VI workload domain cannot be deleted. To delete such VI workload domains, you must first
migrate any virtual machines from the remote datastore to the local datastore and then
unmount the remote vSAN datastores from vCenter Server.
n If you require access to the data after deleting a VI workload domain, back up the data first. The datastores on the VI workload domain are destroyed when it is deleted.
n Migrate the virtual machines that you want to keep to another workload domain using cross
vCenter vMotion.
n Delete any workload virtual machines created outside VMware Cloud Foundation before
deleting the VI workload domain.
n Delete any NSX Edge clusters hosted on the VI workload domain. See KB 78635.
Procedure
2 Click the vertical ellipsis (three dots) next to the VI workload domain you want to delete and
click Delete Domain.
3 On the Delete Workload Domain dialog box, click Delete Workload Domain.
A message indicating that the VI workload domain is being deleted appears. When the
removal process is complete, the VI workload domain is removed from the domains table.
What to do next
If you delete an isolated VI workload domain that created an NSX Manager cluster that is shared with another isolated VI workload domain, you need to register NSX Manager as a relying party with the remaining VI workload domain. See https://kb.vmware.com/s/article/95445.
Procedure
Tip
Click the show or hide columns icon to view additional information about the workload
domains, including the SSO domain.
2 In the workload domains table, click the name of the workload domain.
Services SDDC software stack components deployed for the workload domain's virtual environment
and their IP addresses. Click a component name to navigate to that aspect of the virtual
environment. For example, click vCenter Server to reach the vSphere Client for that workload
domain.
All the capabilities of a VMware SDDC are available to you in the VI workload domain's
environment, such as creating, provisioning, and deploying virtual machines, configuring the
software-defined networking features, and so on.
Hosts Names, IP addresses, status, associated clusters, and capacity utilization of the hosts in the
workload domain and the network pool they are associated with.
Clusters Names of the clusters, number of hosts in the clusters, and their capacity utilization.
Edge Clusters Names of the NSX Edge clusters, NSX Edge nodes, and their status.
Certificates Default certificates for the VMware Cloud Foundation components. For more information, see
Chapter 9 Managing Certificates in VMware Cloud Foundation.
To add a VxRail cluster to a workload domain, you can use the SDDC Manager UI or the
Workflow Optimization script.
Method Details
Prerequisites
n Image the workload domain nodes. For information on imaging the nodes, refer to Dell EMC
VxRail documentation.
n The IP addresses and Fully Qualified Domain Names (FQDNs) for the ESXi hosts, VxRail
Manager, and NSX Manager instances must be resolvable by DNS.
n If you are using DHCP for the NSX Host Overlay Network, a DHCP server must be configured
on the NSX Host Overlay VLAN of the management domain. When VMware NSX creates
TEPs for the VI workload domain, they are assigned IP addresses from the DHCP server.
Procedure
1 In the navigation pane, click Inventory > Workload Domains. The Workload Domains page
displays information for all workload domains.
2 In the workload domains table, hover your mouse in the VxRail workload domain row.
A set of three dots appears on the left of the workload domain name.
4 Make sure the prerequisites are met. To continue, click Get Started.
5 Select the type of storage to use for this workload domain. Click Select.
For vSAN storage, you can enable vSAN ESA if the workload domain is using vSphere
Lifecycle Manager images.
Option Description
Click Connect and confirm the SSL fingerprint of the VxRail Manager.
n VxRail Manager Admin Credentials
n Admin Username
n Admin Password
n Confirm Admin Password
n VxRail Manager Root Credentials
n Root Username
n Root Password
n Confirm Root Password
Host Selection Add ESXi hosts with similar or identical configurations across all cluster
members, including similar or identical storage configurations. A minimum
of 3 hosts are required.
a Select the ESXi hosts to add and click Provide Host Details.
b Enter the FQDNs and passwords for the hosts.
c Click Resolve Hosts IP address.
d Click Next.
Cluster Enter a name for the first cluster that will be created in this new workload
domain.
The name must be unique and contain between 3 and 80 characters. The
cluster name can include letters, numbers, and hyphens, and it can include
spaces.
Option Description
Switch Configuration Provide the distributed switch configuration to be applied to the hosts in the VxRail cluster. Select a predefined vSphere distributed switch (VDS) configuration profile or create a custom switch configuration.
For custom switch configuration, specify:
n VDS name
n MTU
n Number of uplinks
n Uplink to vmnic mapping
Click Configure Network Traffic to configure the following networks:
n Management
n vMotion
n vSAN
n Host Discovery
n System VM
For each network, specify:
n Distributed port group name
n MTU
n Load balancing policy
n Active and standby links
For the NSX network, specify:
n Operational mode
n Transport zone type
n NSX-Overlay Transport Zone Name
n For NSX Overlay, enter a VLAN ID and select the IP assignment type for
the Host Overlay Network TEPs.
Note For DHCP, a DHCP server must be configured on the NSX host
overlay (Host TEP) VLAN. When NSX creates TEPs for the VI workload
domain, they are assigned IP addresses from the DHCP server.
For static IP Pool, you can re-use an existing IP pool or create a new one.
Make sure the IP range includes enough IP addresses for the number
of hosts that will use the static IP Pool. The number of IP addresses
required depends on the number of pNICs on the ESXi hosts that
are used for the vSphere Distributed Switch that handles host overlay
networking. For example, a host with four pNICs that uses two pNICs for
host overlay traffic requires two IP addresses in the static IP pool.
n Teaming policy uplink mapping
n NSX Uplink Profile Name
n Teaming policy
n Active and standby links
Option Description
7 On the Validation page, wait until all of the inputs have been successfully validated.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your
settings and try again.
8 Click Finish.
If the vSphere cluster hosts an NSX Edge cluster, you can only add new hosts with the same
management, uplink, host TEP, and Edge TEP networks (L2 uniform) as the existing hosts.
If the cluster to which you are adding hosts uses a static IP pool for the Host Overlay Network
TEPs, that pool must include enough IP addresses for the hosts you are adding. The number of
IP addresses required depends on the number of pNICs on the ESXi hosts that are used for the
vSphere Distributed Switch that handles host overlay networking. For example, a host with four
pNICs that uses two pNICs for host overlay traffic requires two IP addresses in the static IP pool.
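As a worked example of this sizing rule, adding six hosts that each use two pNICs for host overlay traffic requires 6 x 2 = 12 unallocated IP addresses in the static IP pool.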
Prerequisites
n Discover and add new node(s) to the cluster using the VxRail Manager plugin for vCenter
Server. See the Dell documentation.
Procedure
2 In the workload domains table, click the name of the workload domain that you want to
expand.
4 Click the name of the cluster where you want to add a host.
This option only appears if the vSphere cluster hosts an NSX Edge cluster.
Option Description
L2 Uniform Select if all hosts you are adding to the vSphere cluster have the same
management, uplink, host TEP, and Edge TEP networks as the existing hosts
in the vSphere cluster.
L2 non-uniform and L3 You cannot proceed if any of the hosts you are adding to the vSphere cluster have different networks than the existing hosts in the vSphere cluster. VMware Cloud Foundation does not support adding hosts to L2 non-uniform and L3 vSphere clusters that host an NSX Edge cluster.
7 On the Discovered Hosts page, enter the SSH password for the host and click Add.
8 On the Thumbprint Verification page, click to confirm the SSH thumbprints for the ESXi
hosts.
9 On the Validation page, wait until all of the inputs have been successfully validated.
If validation is unsuccessful, you cannot proceed. Use the Back button to modify your
settings and try again.
10 Click Finish.
When a host is removed, the vSAN members are reduced. Ensure that you have enough hosts
remaining to facilitate the configured vSAN availability. Failure to do so might result in the
datastore being marked as read-only or in data loss.
Prerequisites
Use the vSphere Client to make sure that there are no critical alarms on the cluster from which
you want to remove the host.
Procedure
2 In the workload domains table, click the name of the workload domain that you want to
modify.
4 Click the name of the cluster from which you want to remove a host.
The details page for the cluster appears with a message indicating that the host is being
removed. When the removal process is complete, the host is removed from the hosts table
and deleted from vCenter Server.
You cannot delete the last cluster in a workload domain. Instead, delete the workload domain.
Prerequisites
n If vSAN remote datastores are mounted on the cluster, the cluster cannot be deleted. To
delete such clusters, you must first migrate any VMs from the remote datastore to the local
datastore and then unmount the vSAN remote datastores from vCenter Server.
n Delete any workload VMs created outside of VMware Cloud Foundation before deleting the
cluster.
n Migrate or back up the VMs and data on the datastore associated with the cluster to another location.
n Delete the NSX Edge clusters hosted on the VxRail cluster or shrink the NSX Edge cluster
by deleting Edge nodes hosted on the VxRail cluster. You cannot delete Edge nodes if doing
so would result in an Edge cluster with fewer than two Edge nodes. For information about
deleting an NSX Edge cluster, see KB 78635.
Procedure
The Workload Domains page displays information for all workload domains.
2 Click the name of the workload domain that contains the cluster you want to delete.
3 Click the Clusters tab to view the clusters in the workload domain.
5 Click the three dots next to the cluster name and click Delete VxRail Cluster.
6 Click Delete Cluster to confirm that you want to delete the cluster.
The details page for the workload domain appears with a message indicating that the cluster
is being deleted. When the removal process is complete, the cluster is removed from the
clusters table.
Procedure
2 Click the vertical ellipsis (three dots) in the Domain row for the workload domain you want to
rename and click Rename Domain.
3 Enter a new name for the workload domain and click Rename.
Procedure
The cluster detail page appears. The tabs on the page display additional information as
described in the table below.
Summary Displays information about resource usage, storage, and cluster tags.
Hosts Details about the ESXi hosts in the vSphere cluster. You can click a name in the FQDN column
to access the host summary page.
What to do next
You can add or remove a host, or access the vSphere Client from this page.
Prerequisites
n Do not rename a cluster that belongs to a failed VI workload domain workflow, cluster workflow, or host workflow. If you rename a cluster that belongs to a failed workflow, restarting the failed workflow is not supported.
Procedure
3 Under the Clusters tab, click a cluster that you want to rename.
4 On the right side of the cluster's name, click ACTIONS > Rename Cluster.
You can also click the vertical ellipsis (three dots) in the clusters table for the cluster you want
to rename and click Rename Cluster.
The Rename Cluster window appears.
5 In the New Cluster Name textbox, enter a new name for the cluster and click RENAME.
6 Click DONE.
Results
In the Tasks panel, you can see the description and track the status of your newly renamed
cluster.
Tag Management
A tag is a label that you can apply to objects in the vSphere inventory. You can use tags to
capture a variety of metadata about your vSphere inventory and to organize and retrieve objects
quickly. You create tags and categories in the vSphere Client and then assign or remove tags for
your workload domains, clusters, and hosts in the SDDC Manager UI.
See vSphere Tags for more information about how to create and manage tags and categories.
If multiple vCenter Server instances in your VMware Cloud Foundation deployment are
configured to use Enhanced Linked Mode, tags and tag categories are replicated across all these
vCenter Server instances. This is the case for all VI workload domains that are joined to the same
SSO domain as the management domain. Isolated VI workload domains, which do not share the management SSO domain, do not share its tags and categories.
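If you prefer a command-line workflow over the vSphere Client for the tag-creation step described above, the open-source govc CLI (part of the govmomi project, not part of VMware Cloud Foundation itself) can create categories and tags. The category and tag names below are illustrative, and GOVC_URL and credentials must already be set in the environment.
# Create a tag category and a tag against the workload domain's vCenter Server
govc tags.category.create -d "Workload domain classification" vcf-environment
govc tags.create -d "Production workload domain" -c vcf-environment prod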
Procedure
1 On the SDDC Manager UI, click Inventory > Workload Domains > Management and click the
Workload Domain.
Note If there are no tags shown in the Assign Tag window, click OPEN VSPHERE TAG MANAGEMENT, which redirects you to the vSphere Client, where you can create new tags and tag categories. See vSphere Tags for more information on the tagging functionality.
Procedure
1 On the SDDC Manager UI, click Inventory > Workload Domains > Management and click the
workload domain.
2 Under the Summary > Tags tile window, you will see tags listed with a cross mark beside the
tag names.
3 Click the cross mark of a tag that you want to remove in the Tags tile window.
Tag a Cluster
You can assign a tag to your cluster from the SDDC Manager UI by performing the following
steps:
Procedure
1 On the SDDC Manager UI, click Inventory > Workload Domains > Management > Workload
Domain > Clusters tab and click on the cluster.
Note If there are no tags shown in the Assign Tag window, click OPEN VSPHERE TAG MANAGEMENT, which redirects you to the vSphere Client, where you can create new tags and tag categories. See vSphere Tags for more information on the tagging functionality.
Procedure
1 On the SDDC Manager UI, click Inventory > Workload Domains > Management > Workload
Domain > Clusters tab and click on the cluster.
2 Under the Summary > Tags tile window, you will see tags listed with a cross mark beside the
tag names.
3 Click the cross mark of a tag that you want to remove in the Tags tile window.
Tag a Host
You can assign a tag to your host from the SDDC Manager UI by performing the following steps:
Procedure
1 On the SDDC Manager UI, click Inventory > Hosts tab and click on the host.
Note If there are no tags shown in the Assign Tag window, click OPEN VSPHERE TAG MANAGEMENT, which redirects you to the vSphere Client, where you can create new tags and tag categories. See vSphere Tags for more information on the tagging functionality.
Procedure
1 On the SDDC Manager UI, click Inventory > Hosts tab and click on the host.
2 Under the Summary > Tags tile window, you will see tags listed with a cross mark beside the
tag names.
3 Click the cross mark of a tag that you want to remove in the Tags tile window.
An NSX Edge cluster is a logical grouping of NSX Edge nodes that run on a vSphere cluster. NSX supports a 2-tier routing model.
By default, workload domains do not include any NSX Edge clusters and workloads are isolated,
unless VLAN-backed networks are configured in vCenter Server. Add one or more NSX Edge
clusters to a workload domain to provide software-defined routing and network services.
Note You must create an NSX Edge cluster on the default management vSphere cluster in order
to deploy VMware Aria Suite products.
You can add multiple NSX Edge clusters to the management or the VI workload domains for
scalability and resiliency. For VMware Cloud Foundation configuration maximums refer to the
VMware Configuration Maximums website.
Note Unless explicitly stated in this matrix, VMware Cloud Foundation supports the
configuration maximums of the underlying products. Refer to the individual product configuration
maximums as appropriate.
The north-south routing and network services provided by an NSX Edge cluster created for a
workload domain are shared with all other workload domains that use the same NSX Manager
cluster.
n Verify that separate VLANs and subnets are available for the NSX host overlay VLAN and
NSX Edge overlay VLAN. You cannot use DHCP for the NSX Edge overlay VLAN.
n Verify that the NSX host overlay VLAN and NSX Edge overlay VLAN are routed to each
other.
n For dynamic routing, set up two Border Gateway Protocol (BGP) peers on Top of Rack (ToR) switches with an interface IP, BGP autonomous system number (ASN), and BGP password. An illustrative peer configuration appears after this list.
n Reserve a BGP ASN to use for the NSX Edge cluster’s Tier-0 gateway.
n Verify that DNS entries for the NSX Edge nodes are populated in the customer-managed
DNS server.
n The vSphere cluster hosting an NSX Edge cluster must include hosts with identical management, uplink, host TEP, and NSX Edge TEP networks (L2 uniform).
n The management network and management network gateway for the NSX Edge nodes must
be reachable from the NSX host overlay and NSX Edge overlay VLANs.
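The following is an illustrative sketch of the BGP peering called out in the dynamic-routing prerequisite above, written in FRRouting-style syntax. The ASNs, neighbor IPs, and password are assumptions; the actual configuration depends on your ToR switch vendor and the values reserved in your Planning and Preparation Workbook.
router bgp 65001
 ! Peer with the Tier-0 uplink interfaces of the NSX Edge nodes (ASN reserved for the Edge cluster)
 neighbor 172.27.11.2 remote-as 65005
 neighbor 172.27.11.2 password BgpPassword123!
 neighbor 172.27.11.3 remote-as 65005
 neighbor 172.27.11.3 password BgpPassword123!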
Note VMware Cloud Foundation 4.5 and later support deploying an NSX Edge cluster on a
vSphere cluster that is stretched. Edge nodes are placed on ESXi hosts in the first availability
zone (AZ1) during NSX Edge cluster deployment.
SDDC Manager does not enforce rack failure resiliency for NSX Edge clusters. Make sure that the
number of NSX Edge nodes that you add to an NSX Edge cluster, and the vSphere clusters to
which you deploy the NSX Edge nodes, are sufficient to provide NSX Edge routing services in
case of rack failure.
After you create an NSX Edge cluster, you can use SDDC Manager to expand or shrink it by
adding or deleting NSX Edge nodes.
Note If you deploy the NSX Edge cluster with the incorrect settings or need to delete an NSX
Edge cluster for another reason, see KB 78635.
Prerequisites
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
5 Enter the configuration settings for the NSX Edge cluster and click Next.
Setting Description
Edge Cluster Name Enter a name for the NSX Edge cluster.
MTU Enter the MTU for the NSX Edge cluster. The MTU can be 1600-9000.
Edge Cluster Profile Type Select Default or, if your environment requires specific Bidirectional
Forwarding Detection (BFD) configuration, select Custom.
Edge Cluster Profile Name Enter an NSX Edge cluster profile name. (Custom Edge cluster profile only)
BFD Allowed Hop Enter the number of multi-hop Bidirectional Forwarding Detection (BFD)
sessions allowed for the profile. (Custom Edge cluster profile only)
BFD Declare Dead Multiple Enter the number of times the BFD packet is not received before the session is flagged as down. (Custom Edge cluster profile only)
BFD Probe Interval (milliseconds) BFD is a detection protocol used to identify forwarding path failures. Enter a number to set the interval timing for BFD to detect a forwarding path failure. (Custom Edge cluster profile only)
Standby Relocation Threshold (minutes) Enter a standby relocation threshold in minutes. (Custom Edge cluster profile only)
Edge Root Password Enter and confirm the password to be assigned to the root account of the
NSX Edge appliance.
Setting Description
Edge Admin Password Enter and confirm the password to be assigned to the admin account of the
NSX Edge appliance.
Edge Audit Password Enter and confirm the password to be assigned to the audit account of the
NSX Edge appliance.
n At least 12 characters
n No dictionary words
n No palindromes
Setting Description
Edge Form Factor n Small: 4 GB memory, 2 vCPU, 200 GB disk space. The NSX Edge Small
VM appliance size is suitable for lab and proof-of-concept deployments.
n Medium: 8 GB memory, 4 vCPU, 200 GB disk space. The NSX Edge
Medium appliance size is suitable for production environments with load
balancing.
n Large: 32 GB memory, 8 vCPU, 200 GB disk space. The NSX Edge
Large appliance size is suitable for production environments with load
balancing.
n XLarge: 64 GB memory, 16 vCPU, 200 GB disk space. The NSX Edge
Extra Large appliance size is suitable for production environments with
load balancing.
Setting Description
Tier-0 Service High Availability In the active-active mode, traffic is load balanced across all members. In
active-standby mode, all traffic is processed by an elected active member. If
the active member fails, another member is elected to be active.
Workload Management requires Active-Active.
Some services are only supported in Active-Standby: NAT, load balancing,
stateful firewall, and VPN. If you select Active-Standby, use exactly two NSX
Edge nodes in the NSX Edge cluster.
Tier-0 Routing Type Select Static or EBGP to determine the route distribution mechanism for
the tier-0 gateway. If you select Static, you must manually configure the
required static routes in NSX Manager. If you select EBGP, VMware Cloud
Foundation configures eBGP settings to allow dynamic route distribution.
ASN Enter an autonomous system number (ASN) for the NSX Edge cluster. (for
EBGP only)
7 Enter the configuration settings for the first NSX Edge node and click Add Edge Node.
Setting Description
Edge Node Name (FQDN) Enter the FQDN for the NSX Edge node. Each node must have a unique
FQDN.
Note If the vSphere cluster you select already hosts management virtual machines that are connected to the host Management port group, the VM Management Portgroup VLAN and VM Management Portgroup Name settings are not available.
Cluster Type Select L2 Uniform if all hosts in the vSphere cluster have identical
management, uplink, host TEP, and Edge TEP networks.
Select L2 non-uniform and L3 if any of the hosts in the vSphere cluster have
different networks.
First NSX VDS Uplink Click Advanced Cluster Settings to map the first NSX Edge node uplink
network interface to a physical NIC on the host, by specifying the ESXi
uplink. The default is uplink1.
When you create an NSX Edge cluster, SDDC Manager creates two trunked
VLAN port groups. The information you enter here determines the active
uplink on the first VLAN port group. If you enter uplink3, then uplink3 is the
active uplink and the uplink you specify for the second NSX VDS uplink is
the standby uplink.
The uplink must be prepared for overlay use.
Setting Description
Second NSX VDS Uplink Click Advanced Cluster Settings to map the second NSX Edge node uplink
network interface to a physical NIC on the host, by specifying the ESXi
uplink. The default is uplink2.
When you create an NSX Edge cluster, SDDC Manager creates two trunked
VLAN port groups. The information you enter here determines the active
uplink on the second VLAN port group. If you enter uplink4, then uplink4
is the active uplink and the uplink you specify for the first NSX VDS uplink is
the standby uplink.
The uplink must be prepared for overlay use.
Management IP (CIDR) Enter the management IP for the NSX Edge node in CIDR format. Each node
must have a unique management IP.
Management Gateway Enter the IP address for the management network gateway.
VM Management Portgroup VLAN If the VM Management port group exists on the vSphere distributed switch
of the vSphere cluster that you selected to host the Edge node, then the VM
Management port group VLAN is displayed and cannot be edited.
If the VM Management port group does not exist on the vSphere distributed
switch of the vSphere cluster that you selected to host the Edge node, enter
a VLAN ID to create a new VM Management port group or click Use ESXi
Management VMK's VLAN to use the host Management Network VLAN to
create a new VM Management port group.
VM Management Portgroup Name If the VM Management port group exists on the vSphere distributed switch
of the vSphere cluster that you selected to host the Edge node, then the VM
Management port group name is displayed and cannot be edited.
Otherwise, type a name for the new port group.
Edge TEP 1 IP (CIDR) Enter the CIDR for the first NSX Edge TEP. Each node must have a unique
Edge TEP 1 IP.
Edge TEP 2 IP (CIDR) Enter the CIDR for the second NSX Edge TEP. Each node must have a
unique Edge TEP 2 IP. The Edge TEP 2 IP must be different than the Edge
TEP 1 IP.
Edge TEP Gateway Enter the IP address for the NSX Edge TEP gateway.
Edge TEP VLAN Enter the NSX Edge TEP VLAN ID.
First Tier-0 Uplink VLAN Enter the VLAN ID for the first uplink.
This is a link from the NSX Edge node to the first uplink network.
First Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the first uplink. Each node must have unique uplink interface IPs.
Peer IP (CIDR) Enter the CIDR for the first uplink peer. (EBGP only)
Peer ASN Enter the ASN for the first uplink peer. (EBGP only)
BGP Peer Password Enter and confirm the BGP password. (EBGP only).
Second Tier-0 Uplink VLAN Enter the VLAN ID for the second uplink.
This is a link from the NSX Edge node to the second uplink network.
Setting Description
Second Tier-0 Uplink Interface IP (CIDR) Enter the CIDR for the second uplink. Each node must have unique uplink interface IPs. The second uplink interface IP must be different than the first uplink interface IP.
Peer IP (CIDR) Enter the CIDR for the second uplink peer. (EBGP only)
ASN Peer Enter the ASN for the second uplink peer. (EBGP only)
BGP Peer Password Enter and confirm the BGP password. (EBGP only).
8 Click Add More Edge Nodes to enter configuration settings for additional NSX Edge nodes.
A minimum of two NSX Edge nodes is required. NSX Edge cluster creation allows up to 8 NSX
Edge nodes if the Tier-0 Service High Availability is Active-Active and two NSX Edge nodes
per NSX Edge cluster if the Tier-0 Service High Availability is Active-Standby.
Note All Edge nodes in the NSX Edge cluster must use the same VM Management port
group VLAN and name.
9 When you are done adding NSX Edge nodes, click Next.
11 If validation fails, use the Back button to edit your settings and try again.
To edit or delete any of the NSX Edge nodes, click the three vertical dots next to an NSX
Edge node in the table and select an option from the menu.
Example
The following example shows a scenario with sample data. You can use the example to guide
you in creating NSX Edge clusters in your environment. Refer to the Planning and Preparation
Workbook for a complete list of sample values for creating an NSX Edge cluster.
[Figure: Sample NSX Edge cluster topology. Two Edge VMs (Edge VM 1 and Edge VM 2) form an NSX Edge cluster with ASN 65005. An active/active Tier-0 gateway uses ECMP over uplink VLANs to the physical network, a Tier-1 gateway connects to the Tier-0 gateway, and workload VMs attach to segments behind the Tier-1 gateway.]
What to do next
In NSX Manager, you can create segments connected to the NSX Edge cluster's tier-1 gateway.
You can connect workload virtual machines to these segments to provide north-south and east-
west connectivity.
You might want to add NSX Edge nodes to an NSX Edge cluster in the following cases:
n When the Tier-0 Service High Availability is Active-Standby and you require more than two
NSX Edge nodes for services.
Note Only two of the NSX Edge nodes can have uplink interfaces, but you can add more
nodes without uplink interfaces.
n When the Tier-0 Service High Availability is Active-Active and you require more than 8 NSX
Edge nodes for services.
n When you add Supervisor Clusters to a Workload Management workload domain and need
to support additional tier-1 gateways and services.
The available configuration settings for a new NSX Edge node vary based on:
n The Tier-0 Service High Availability setting (Active-Active or Active-Standby) of the NSX
Edge cluster.
n The Tier-0 Routing Type setting (static or EBGP) of the NSX Edge cluster.
n Whether the new NSX Edge node is going to be hosted on the same vSphere cluster as the
existing NSX Edge nodes (in-cluster) or on a different vSphere cluster (cross-cluster).
Prerequisites
n Verify that separate VLANs and subnets are available for the NSX host overlay VLAN and
NSX Edge overlay VLAN. You cannot use DHCP for the NSX Edge overlay VLAN.
n Verify that the NSX host overlay VLAN and NSX Edge overlay VLAN are routed to each
other.
n For dynamic routing, set up two Border Gateway Protocol (BGP) peers on Top of Rack (ToR)
switches with an interface IP, BGP autonomous system number (ASN), and BGP password.
n Reserve a BGP ASN to use for the NSX Edge cluster’s Tier-0 gateway.
n Verify that DNS entries for the NSX Edge nodes are populated in the customer-managed
DNS server.
n The vSphere cluster hosting the NSX Edge nodes must include hosts with identical management, uplink, host TEP, and NSX Edge TEP networks (L2 uniform).
n The vSphere cluster hosting the NSX Edge nodes must have the same pNIC speed for NSX-
enabled VDS uplinks chosen for Edge overlay.
n All NSX Edge nodes in an NSX Edge cluster must use the same set of NSX-enabled VDS
uplinks. These uplinks must be prepared for overlay use.
n The NSX Edge cluster must be hosted on one or more vSphere clusters from the same
workload domain.
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
4 Click the vertical ellipsis menu for the Edge Cluster you want to expand and select Expand
Edge Cluster.
6 Enter and confirm the passwords for the NSX Edge cluster.
8 Enter the configuration settings for the new NSX Edge node and click Add Edge Node.
Setting Description
Edge Node Name (FQDN) Enter the FQDN for the NSX Edge node. Each node must have a unique
FQDN.
Note If the vSphere cluster you select already hosts management virtual machines that are connected to the host Management port group, the VM Management Portgroup VLAN and VM Management Portgroup Name settings are not available.
Cluster Type Select L2 Uniform if all hosts in the vSphere cluster have identical
management, uplink, host TEP, and Edge TEP networks.
Select L2 non-uniform and L3 if any of the hosts in the vSphere cluster have
different networks.
Management IP (CIDR) Enter the management IP for the NSX Edge node in CIDR format. Each node
must have a unique management IP.
Management Gateway Enter the IP address for the management network gateway.
Setting Description
VM Management Portgroup VLAN For in-cluster expansion, the new Edge node uses the same VM
Management port group VLAN as the other Edge nodes in the Edge cluster.
For cross-cluster expansion:
n If the VM Management port group exists on the vSphere distributed
switch of the vSphere cluster that you selected to host the Edge node,
then the VM Management port group VLAN is displayed and cannot be
edited.
n If the VM Management port group does not exist on the vSphere
distributed switch of the vSphere cluster that you selected to host the
Edge node, enter a VLAN ID to create a new VM Management port
group or click Use ESXi Management VMK's VLAN to use the host
Management Network VLAN for the VM Management port group.
VM Management Portgroup Name For in-cluster expansion, the new Edge node uses the same VM
Management port group name as the other Edge nodes in the Edge cluster.
For cross-cluster expansion:
n If the VM Management port group exists on the vSphere distributed
switch of the vSphere cluster that you selected to host the Edge node,
then the VM Management port group name is displayed and cannot be
edited.
n Otherwise, type a name for the port group.
Edge TEP 1 IP (CIDR) Enter the CIDR for the first NSX Edge TEP. Each node must have a unique
Edge TEP 1 IP.
Edge TEP 2 IP (CIDR) Enter the CIDR for the second NSX Edge TEP. Each node must have a
unique Edge TEP 2 IP. The Edge TEP 2 IP must be different than the Edge
TEP 1 IP.
Edge TEP Gateway Enter the IP address for the NSX Edge TEP gateway.
Edge TEP VLAN Enter the NSX Edge TEP VLAN ID.
First NSX VDS Uplink Specify an ESXi uplink to map the first NSX Edge node uplink network
interface to a physical NIC on the host. The default is uplink1.
The information you enter here determines the active uplink on the first
VLAN port group used by the NSX Edge node. If you enter uplink3, then
uplink3 is the active uplink and the uplink you specify for the second NSX
VDS uplink is the standby uplink.
(cross-cluster only)
Note For in-cluster NSX Edge cluster expansion, new NSX Edge nodes use
the same NSX VDS uplinks as the other Edge nodes hosted on the vSphere
cluster.
Setting Description
Second NSX VDS Uplink Specify an ESXi uplink to map the second NSX Edge node uplink network
interface to a physical NIC on the host. The default is uplink2.
The information you enter here determines the active uplink on the second
VLAN port group used by the NSX Edge node. If you enter uplink4, then
uplink4 is the active uplink and the uplink you specify for the first NSX VDS
uplink is the standby uplink.
(cross-cluster only)
Note For in-cluster NSX Edge cluster expansion, new NSX Edge nodes use
the same NSX VDS uplinks as the other Edge nodes hosted on the vSphere
cluster.
Add Tier-0 Uplinks Optional. Click Add Tier-0 Uplinks to add tier-0 uplinks.
(Active-Active only)
First Tier-0 Uplink VLAN Enter the VLAN ID for the first uplink.
This is a link from the NSX Edge node to the first uplink network.
(Active-Active only)
First Tier-0 Uplink Interface IP Enter the CIDR for the first uplink. Each node must have unique uplink
(CIDR) interface IPs.
(Active-Active only)
Peer IP (CIDR) Enter the CIDR for the first uplink peer.
(EBGP only)
Peer ASN Enter the ASN for the first uplink peer.
(EBGP only)
Second Tier-0 Uplink VLAN Enter the VLAN ID for the second uplink.
This is a link from the NSX Edge node to the second uplink network.
(Active-Active only)
Second Tier-0 Uplink Interface Enter the CIDR for the second uplink. Each node must have unique uplink
IP(CIDR) interface IPs. The second uplink interface IP must be different than the first
uplink interface IP.
(Active-Active only)
Peer IP (CIDR) Enter the CIDR for the second uplink peer.
(EBGP only)
ASN Peer Enter the ASN for the second uplink peer.
(EBGP only)
9 Click Add More Edge Nodes to enter configuration settings for additional NSX Edge nodes.
n For an NSX Edge cluster with a Tier-0 Service High Availability setting of Active-Active,
up to 8 of the NSX Edge nodes can have uplink interfaces.
n For an NSX Edge cluster with a Tier-0 Service High Availability setting of Active-Standby,
up to 2 of the NSX Edge nodes can have uplink interfaces.
10 When you are done adding NSX Edge nodes, click Next.
12 If validation fails, use the Back button to edit your settings and try again.
To edit or delete any of the NSX Edge nodes, click the three vertical dots next to an NSX
Edge node in the table and select an option from the menu.
13 If validation succeeds, click Finish to add the NSX Edge node(s) to the NSX Edge cluster.
Prerequisites
n The NSX Edge cluster must be available in the SDDC Manager inventory and must be Active.
n The NSX Edge node must be available in the SDDC Manager inventory.
n The NSX Edge cluster must be hosted on one or more vSphere clusters from the same
workload domain.
n The NSX Edge cluster must contain more than two NSX Edge nodes.
n If the NSX Edge cluster was deployed with a Tier-0 Service High Availability of Active-Active,
the NSX Edge cluster must contain two or more NSX Edge nodes with two or more Tier-0
routers (SR component) after the NSX Edge nodes are removed.
n If the NSX Edge cluster was deployed with a Tier-0 Service High Availability of Active-Standby, you cannot remove NSX Edge nodes that are the active or standby node for the Tier-0 router.
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
4 Click the vertical ellipsis menu for the Edge Cluster you want to shrink and select Shrink Edge Cluster.
7 If validation fails, use the Back button to edit your settings and try again.
Note You cannot remove the active and standby Edge nodes of a Tier-1 router at the same
time. You can remove one and then remove the other after the first operation is complete.
8 If validation succeeds, click Finish to remove the NSX Edge node(s) from the NSX Edge
cluster.
Starting with VMware Cloud Foundation 5.2, you can use SDDC Manager to deploy Avi Load Balancer as a high availability cluster of three VMware Avi Controller instances, each running on a separate VM.
Note Previous versions of VMware Cloud Foundation support Avi Load Balancer, but do not deploy or manage the Avi Controller cluster.
The Avi Controller cluster functions as the control plane and stores and manages all policies
related to services and management. All Avi Controllers are deployed in the management
domain, even when the Avi Load Balancer is deployed in a VI workload domain.
When you deploy Avi Load Balancer in a workload domain, it is associated with the workload
domain's NSX Manager.
Note VMware Cloud Foundation 5.2 does not support deploying Avi Load Balancer on a
workload domain that shares its NSX Manager with another workload domain.
VMware Cloud Foundation does not deploy or manage the Service Engine VMs (SEs) that
function as the data plane. After deploying the Avi Controller cluster, you can use the Avi Load
Balancer UI/API, VMware Aria Automation, or Avi Kubernetes Operator to deploy virtual services
for an application, which creates the required Service Engine virtual machines. Service Engines
(SEs) are deployed in the workload domain in which the Avi Load Balancer is providing load
balancing services. All SEs deployed in a VI workload domain are managed by the Avi Controller
that is part of the Avi Load Balancer deployment that is associated with the corresponding NSX
instance managing the VI workload domain.
n VMware Cloud Foundation does not manage license updates for Avi Load Balancer.
n VMware Cloud Foundation does not manage backups of the Avi Load Balancer configuration database. See the VMware Avi Load Balancer Documentation for information about configuring scheduled and on-demand backups.
n VMware Cloud Foundation does not manage upgrades of the Avi Controller cluster. See the VMware Avi Load Balancer Documentation for information about upgrading.
n The lifecycle of the Avi Service Engines is managed by each Avi Controller Cluster. You
perform updates and upgrades in the Avi Load Balancer web interface, which has checks in
place to ensure that you can only upgrade to supported versions.
n If you upgraded from an earlier version of VMware Cloud Foundation and had deployed Avi
Load Balancer, SDDC Manager will not be aware of or manage that Avi Load Balancer. You
can use SDDC Manager to deploy additional Avi Load Balancers in such an environment.
n In order to use Avi Load Balancer for load balancing services in a vSphere IaaS Control
Plane environment, the Avi Load Balancer must be registered with the NSX Manager. See
Registering an Avi Load Balancer cluster with an NSX Manager instance.
For more information about how to use and manage Avi Load Balancer see:
n The Advanced Load Balancing for VMware Cloud Foundation validated solution
Avi Load Balancer was formerly known as NSX Advanced Load Balancer. The SDDC Manager UI
still refers to NSX Advanced Load Balancer.
You cannot deploy Avi Load Balancer on a workload domain that shares its NSX Manager with
another workload domain.
Prerequisites
Download the install bundle for a supported version of NSX Advanced Load Balancer. See
Downloading VMware Cloud Foundation Upgrade Bundles.
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
4 Select the NSX Advanced Load Balancer version and click Next.
Make sure that the management domain has enough resources for the selected size.
6 Enter the settings for the NSX Advanced Load Balancer Controller cluster and click Next.
Cluster VIP: Enter the Avi Load Balancer Controller cluster IP address. The Avi Load Balancer Controller cluster IP address is a single IP address shared by the Avi Controllers within the cluster. It is the address to which the web interface, CLI commands, and REST API calls are directed. As a best practice, log in to the cluster IP address instead of the IP addresses of the individual Avi Controller nodes.
Note When creating a service account for the NSX Advanced Load Balancer Controller cluster, VMware Cloud Foundation 5.2 combines the Avi Load Balancer VIP host name and the NSX Manager VIP host name to create the account, svc-<alb hostname>-<nsx hostname>. The total length cannot exceed 32 characters. VCF 5.2.1 automatically truncates the service account name to avoid deployment failures caused by the account name length.
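The following is a minimal PowerShell sketch, not part of the product, that derives the service account name the way this note describes and flags names that exceed the 32-character limit. The host names are hypothetical placeholders.
# Hypothetical host names; substitute the values planned for your deployment.
$albHostname = "xint-alb01"
$nsxHostname = "sfo-w01-nsx01"
# SDDC Manager combines the two host names into svc-<alb hostname>-<nsx hostname>.
$serviceAccount = "svc-$albHostname-$nsxHostname"
if ($serviceAccount.Length -gt 32) {
    Write-Warning "Service account name '$serviceAccount' is $($serviceAccount.Length) characters. Names longer than 32 characters cause deployment failures on VCF 5.2; VCF 5.2.1 truncates the name automatically."
} else {
    Write-Output "Service account name: $serviceAccount ($($serviceAccount.Length) characters)"
}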
What to do next
After the Avi Load Balancer Controller cluster deploys successfully, you can access the web
interface from the Services tab for the workload domain by clicking the NSX Advanced Load
Balancer link.
You can manage the Avi Load Balancer Controller cluster administrator password and certificate
using the SDDC Manager UI.
Prerequisites
Procedure
2 In the Workload Domains page, click a domain name in the Domain column.
You can create overlay-backed NSX segments or VLAN-backed NSX segments. Both options
create two NSX segments (Region-A and X-Region) on the NSX Edge cluster deployed in the
default management vSphere cluster. Those NSX segments are used when you deploy the
VMware Aria Suite products. Region-A segments are local instance NSX segments and X-Region
segments are cross-instance NSX segments.
Important You cannot create AVNs if the NSX for the management domain is part of an NSX
Federation.
In an overlay-backed segment, traffic between two VMs on different hosts but attached to the
same overlay segment have their layer-2 traffic carried by a tunnel between the hosts. NSX
instantiates and maintains this IP tunnel without the need for any segment-specific configuration
in the physical infrastructure. As a result, the virtual network infrastructure is decoupled from
the physical network infrastructure. That is, you can create segments dynamically without any
configuration of the physical network infrastructure.
This procedure describes creating overlay-backed NSX segments. If you want to create VLAN-
backed NSX segments instead, see Deploy VLAN-Backed NSX Segments.
Prerequisites
Create an NSX Edge cluster for Application Virtual Networks, using the recommended settings, in
the default management vSphere cluster. See Deploy an NSX Edge Cluster.
Procedure
6 Enter information for each of the NSX segments (Region-A and X-Region):
Option Description
Name Enter a name for the NSX segment. For example, Mgmt-RegionA01.
If validation does not succeed, verify and update the information you entered for the NSX
segments and click Validate Settings again.
This procedure describes creating VLAN-backed NSX segments. If you want to create overlay-
backed NSX segments instead, see Deploy Overlay-Backed NSX Segments.
Prerequisites
Create an NSX Edge cluster for Application Virtual Networks, using the recommended settings, in
the default management vSphere cluster. See Deploy an NSX Edge Cluster.
Procedure
6 Enter information for each of the NSX segments (Region-A and X-Region):
Option Description
Name Enter a name for the NSX segment. For example, Mgmt-RegionA01.
If validation does not succeed, verify and update the information you entered for the NSX
segments and click Validate Settings again.
When enabled on a vSphere cluster, vSphere IaaS Control Plane provides the capability to run
Kubernetes workloads directly on ESXi hosts and to create upstream Kubernetes clusters within
dedicated resource pools. vSphere IaaS Control Plane can also be enabled on the management
domain default cluster.
Note Starting with vSphere 8.0 Update 3, vSphere with Tanzu was renamed to vSphere IaaS
Control Plane.
You validate the underlying infrastructure for vSphere IaaS Control Plane from the SDDC
Manager UI and then complete the deployment in the vSphere Client. The SDDC Manager UI
refers to the vSphere IaaS Control Plane functionality as Kubernetes - Workload Management.
The Developer Ready Infrastructure for VMware Cloud Foundation validated solution provides
design, implementation, and operational guidance for a workload domain that runs vSphere with
Tanzu workloads in the Software-Defined Data Center (SDDC).
For more information about vSphere IaaS Control Plane, see What Is vSphere IaaS Control Plane?.
Prerequisites
Note If you deployed VMware Cloud Foundation with a consolidated architecture, you can
enable Workload Management on the management domain.
n A Workload Management-ready NSX Edge cluster must be deployed on the workload domain.
You must select Workload Management on the Use Case page of the Add Edge Cluster
wizard. See step 6 in Deploy an NSX Edge Cluster.
n All hosts in the vSphere cluster for which you enable Workload Management must be
licensed for vSphere IaaS Control Plane.
n Workload Management requires a vSphere cluster with a minimum of three ESXi hosts.
n In order to use Avi Load Balancer for load balancing services in a vSphere IaaS Control
Plane environment, the Avi Load Balancer must be registered with the NSX Manager. See
Registering an Avi Load Balancer cluster with an NSX Manager instance.
Procedure
3 Review the Workload Management prerequisites, click Select All, and click Begin.
4 Select the workload domain associated with the vSphere cluster where you want to enable
Workload Management.
The Workload Domain drop-down menu displays all Workload Management ready workload
domains, including the management domain.
vSphere clusters in the selected workload domain that are compatible with Workload
Management are displayed in the Compatible section. Incompatible clusters are displayed in
the Incompatible section, along with the reason for the incompatibility. If you want to get an
incompatible cluster to a usable state, you can exit the Workload Management deployment
wizard while you resolve the issue.
5 From the list of compatible clusters on the workload domain, select the cluster where you
want to enable Workload Management and click Next.
6 On the Validation page, wait for validation to complete successfully and click Next.
n vCenter Server validation (vCenter Server credentials, vSphere cluster object, and
version)
7 On the Review page, review your selections and click Complete in vSphere.
What to do next
Follow the deployment wizard within the vSphere Client to complete the Workload Management
deployment and configuration steps.
Prerequisites
You must have added a VMware Tanzu license key to the Cloud Foundation license inventory.
See Add a Component License Key in the SDDC Manager UI.
Procedure
2 Click the dots to the left of the cluster for which you want to update the license and click
Update Workload Management license.
After the license update processing is completed, the Workload Management page is
displayed. The task panel displays the licensing task and its status.
VMware Aria Suite Lifecycle in VMware Cloud Foundation mode introduces the following
features:
n Automatic load balancer configuration. Load balancer preparation and configuration are no
longer a prerequisite when you use VMware Aria Suite Lifecycle to deploy or perform a
cluster expansion on Workspace ONE Access, VMware Aria Operations, or VMware Aria
Automation. Load balancer preparation and configuration take place as part of the deploy or
expand operation.
n Automatic infrastructure selection in the VMware Aria Suite Lifecycle deployment wizards.
When you deploy a VMware Aria Suite product through VMware Aria Suite Lifecycle,
infrastructure objects such as clusters and networks are pre-populated. They are fixed and
cannot be changed to ensure alignment with the VMware Cloud Foundation architecture.
n Cluster deployment for a new environment. You can deploy VMware Aria Operations for
Logs, VMware Aria Operations, or VMware Aria Automation in clusters. You can deploy
Workspace ONE Access either as a cluster or a single node. If you deploy Workspace ONE
Access as a single node, you can expand it to a cluster later.
n Consistent Bill Of Materials (BOM). VMware Aria Suite Lifecycle in VMware Cloud Foundation
mode only displays product versions that are compatible with VMware Cloud Foundation to
ensure product interoperability.
n Inventory synchronization between VMware Aria Suite Lifecycle and SDDC Manager. VMware
Aria Suite Lifecycle can detect changes made to VMware Aria Suite products and update
its inventory through inventory synchronization. When VMware Cloud Foundation mode is
enabled in VMware Aria Suite Lifecycle, inventory synchronization in VMware Aria Suite
Lifecycle also updates SDDC Manager’s inventory to get in sync with the current state of
the system.
n Product versions. VMware Cloud Foundation supports flexible VMware Aria Suite upgrades.
You can upgrade VMware Aria Suite products as new versions become available in VMware
Aria Suite Lifecycle. VMware Aria Suite Lifecycle will only allow upgrades to compatible and
supported versions of VMware Aria Suite products.
n Resource pool and advanced properties. The resources in the Resource Pools under the
Infrastructure Details are blocked by the VMware Aria Suite Lifecycle UI, so that the VMware
Cloud Foundation topology does not change. Similarly, the Advanced Properties are also
blocked for all products except for Remote Collectors. VMware Aria Suite Lifecycle also
auto-populates infrastructure and network properties by calling VMware Cloud Foundation
deployment API.
n Watermark.
By default, VMware Cloud Foundation uses NSX to create NSX segments and deploys VMware
Aria Suite Lifecycle and the VMware Aria Suite products to these NSX segments. Starting with
VMware Cloud Foundation 4.3, NSX segments are no longer configured during the management
domain bring-up process, but instead are configured using the SDDC Manager UI. The new
process offers the choice of using either overlay-backed or VLAN-backed segments. See
Chapter 16 Deploying Application Virtual Networks in VMware Cloud Foundation.
When VMware Aria Suite Lifecycle runs in VMware Cloud Foundation mode, the integration ensures awareness between the two components. You launch the deployment of VMware Aria Suite products from the SDDC Manager UI and are redirected to the VMware Aria Suite Lifecycle UI, where you complete the deployment process.
Prerequisites
n Download the VMware Software Install Bundle for VMware Aria Suite Lifecycle from the
VMware Depot to the local bundle repository. See Downloading VMware Cloud Foundation
Upgrade Bundles.
n Allocate an IP address for the VMware Aria Suite Lifecycle virtual appliance on the cross-
instance NSX segment and prepare both forward (A) and reverse (PTR) DNS records.
n Allocate an IP address for the NSX standalone Tier-1 Gateway on the cross-instance NSX
segment. This address is used for the service interface of the standalone NSX Tier 1 Gateway
created during the deployment. The Tier 1 Gateway is used for load-balancing of specific
VMware Aria Suite products and Workspace ONE Access.
n Verify the Prerequisite Checklist sheet in the Planning and Preparation Workbook.
Procedure
2 Click Deploy.
4 On the Network Settings page, review the settings and click Next.
5 On the Virtual Appliance Settings page, enter the settings and click Next.
Virtual Appliance FQDN: The FQDN for the VMware Aria Suite Lifecycle virtual appliance.
NSX Tier 1 Gateway IP Address: A free IP address within the cross-instance virtual network segment.
System Administrator: Create and confirm the password for the VMware Aria Suite Lifecycle administrator account, vcfadmin@local. The password created is the credential that allows SDDC Manager to connect to VMware Aria Suite Lifecycle.
SSH Root Account: Create and confirm a password for the VMware Aria Suite Lifecycle virtual appliance root account.
6 On the Review Summary page, review the installation configuration settings and click Finish.
The VMware Aria Suite page displays the following message: Deployment in progress.
If the deployment fails, this page displays a deployment status of Deployment failed. In
this case, you can click Restart Task or Rollback.
7 (Optional) To view details about the individual deployment tasks, in the Tasks panel at the
bottom, click each task.
Procedure
2 On the Workload Domain page, from the table, in the domain column click the management
domain.
4 From the table, select the check box for the VMware Aria Suite Lifecycle resource type, and
click Generate CSRs.
5 On the Details page, enter the following settings and click Next.
Key Size: Select the key size (2048 bit, 3072 bit, or 4096 bit) from the drop-down menu.
Organization Name: Type the name under which your company is known. The listed organization must be the legal registrant of the domain name in the certificate request.
State: Type the full name (do not abbreviate) of the state, province, region, or territory where your company is legally registered.
6 On the Subject Alternative Name page, leave the default SAN and click Next.
8 After the successful return of the operation, click Generate signed certificates.
9 In the Generate Certificates dialog box, from the Select Certificate Authority drop-down
menu, select Microsoft.
You add the cross-instance data center, and the associated management domain vCenter Server
for the deployment of the global components, such as the clustered Workspace ONE Access.
Procedure
1 In a web browser, log in to VMware Aria Suite Lifecycle with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
4 Click Add datacenter, enter the values for the global data center, and click Save.
Setting Value
5 Add the management domain vCenter Server to the global data center.
a On the Datacenters page, expand the global data center and click Add vCenter.
b Enter the management domain vCenter Server information and click Validate.
Setting Value
7 In the navigation pane, click Requests and verify that the state of the vCenter data collection
request is Completed.
Prerequisites
n Download the installation binary directly from VMware Aria Suite Lifecycle. See "Configure
Product Binaries" in the VMware Aria Suite Lifecycle Installation, Upgrade, and Management
Guide for the version of VMware Aria Suite Lifecycle listed in the VMware Cloud Foundation
BOM.
n Allocate IP addresses and prepare both forward (A) and reverse (PTR) DNS records (see the DNS verification sketch after this prerequisite list):
n For a standard Workspace ONE Access instance, one IP address from the cross-instance NSX segment.
n For a clustered Workspace ONE Access instance, five IP addresses from the cross-instance NSX segment:
n Three IP addresses for the clustered Workspace ONE Access instance.
n One IP address for the embedded Postgres database for the Workspace ONE Access instance.
n One IP address for the NSX external load balancer virtual server for the clustered Workspace ONE Access instance.
n Verify the Prerequisite Checklist sheet in the Planning and Preparation Workbook.
n Download the CertGenVVS tool and generate the signed certificate for the Workspace ONE
Access instance. See KB 85527.
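Before deploying, you can confirm that the forward (A) and reverse (PTR) records resolve as expected. The following is a minimal sketch using the Windows Resolve-DnsName cmdlet; the FQDNs and IP addresses are hypothetical placeholders for the values in your Planning and Preparation Workbook.
# Hypothetical FQDN-to-IP pairs; replace with the values from your workbook.
$records = @{
    "xint-wsa01.rainpole.io"    = "10.11.10.60"   # NSX load balancer virtual server
    "xint-wsa01a.rainpole.io"   = "10.11.10.61"   # cluster node 1
    "xint-wsa01b.rainpole.io"   = "10.11.10.62"   # cluster node 2
    "xint-wsa01c.rainpole.io"   = "10.11.10.63"   # cluster node 3
    "xint-wsa01-db.rainpole.io" = "10.11.10.64"   # embedded Postgres database
}
foreach ($entry in $records.GetEnumerator()) {
    $forward = (Resolve-DnsName -Name $entry.Key -Type A -ErrorAction SilentlyContinue).IPAddress
    $reverse = (Resolve-DnsName -Name $entry.Value -Type PTR -ErrorAction SilentlyContinue).NameHost
    [PSCustomObject]@{
        FQDN       = $entry.Key
        ExpectedIP = $entry.Value
        ForwardOK  = ($forward -contains $entry.Value)
        ReverseOK  = ($reverse -eq $entry.Key)
    }
}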
This procedure uses the PowerShell Module for VMware Validated Solutions to generate the
required certificates from a Microsoft Active Directory Certificate Services. However, the module
also supports generating certificate signing requests (CSRs) for third party certificate authorities
for import to the VMware Aria Suite Lifecycle locker.
Prerequisites
n Install the PowerShell module for VMware Validated Solutions together with the supporting
modules to request an SSL certificate from your Microsoft Certificate Authority.
n Verify that you have OpenSSL 3.0 or later installed on the system that will run the PowerShell
module. The OpenSSL Wiki has a list of third-party pre-compiled binaries for Microsoft
Windows.
Procedure
1 Generate an SSL certificate using the PowerShell module for VMware Validated Solutions.
a Start PowerShell.
b Replace the sample values in the variables below and run the commands in the
PowerShell console.
# Certificate common name and subject alternative names (sample values; replace with your own).
$commonName = "xint-idm01.rainpole.io"
$subjectAltNames = "xint-idm01.rainpole.io, xint-idm01a.rainpole.io, xint-idm01b.rainpole.io, xint-idm01c.rainpole.io"
$encryptionKeySize = 2048
$certificateExpiryDays = 730
# Certificate subject organization details.
$orgName = "rainpole"
$orgUnitName = "Platform Engineering"
$orgLocalityName = "San Francisco"
$orgStateName = "California"
$orgCountryCode = "US"
# Microsoft Active Directory Certificate Services (issuing CA) connection details.
$caType = "msca"
$caFqdn = "rpl-ad01.rainpole.io"
$caUsername = "Administrator"
$caPassword = "VMw@re1!"
$caTemplate = "VMware"
# Output locations for the CSR, private key, signed certificate, and root CA chain.
$outputPath = ".\certificates\"
$csrFilePath = Join-Path $outputPath "$commonName.csr"
$keyFilePath = Join-Path $outputPath "$commonName.key"
$crtFilePath = Join-Path $outputPath "$commonName.crt"
$rootCaFilePath = Join-Path $outputPath "$caFqdn-rootCa.pem"
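The PowerShell module for VMware Validated Solutions consumes these variables to build the certificate signing request and submit it to the Microsoft Certificate Authority. As an illustration only, and not the module's actual implementation, generating an equivalent private key and CSR directly with OpenSSL (assumed to be version 3.0 or later and available on the PATH) would look roughly like this:
# Illustration only: equivalent key and CSR generation using OpenSSL directly.
New-Item -ItemType Directory -Path $outputPath -Force | Out-Null
# Build the subjectAltName extension from the comma-separated list above.
$sanList = (($subjectAltNames -split ",\s*") | ForEach-Object { "DNS:$_" }) -join ","
& openssl req -new -nodes `
    -newkey "rsa:$encryptionKeySize" `
    -keyout $keyFilePath `
    -out $csrFilePath `
    -subj "/C=$orgCountryCode/ST=$orgStateName/L=$orgLocalityName/O=$orgName/OU=$orgUnitName/CN=$commonName" `
    -addext "subjectAltName=$sanList"
# The resulting CSR can then be submitted to the issuing CA ($caFqdn) against the $caTemplate template.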
2 Add the generated SSL certificate to the VMware Aria Suite Lifecycle locker.
e On the Import certificate page, enter a name for the Workspace ONE Access certificate
according to your VMware Cloud Foundation Planning and Preparation Workbook.
f Click Browse file, navigate to the Workspace ONE Access certificate file (.pem), and click
Open.
You add the following passwords for the corresponding local administrative accounts:
n VMware Aria Suite Lifecycle global environment default administrator password
n Workspace ONE Access administrator password
n Workspace ONE Access configuration administrator password
n Workspace ONE Access root user password (also used for the Workspace ONE Access appliance sshuser)
Note You do not need to provide a user name when adding passwords. You can leave the User
Name field blank when configuring settings.
Procedure
1 In a web browser, log in to VMware Aria Suite Lifecycle with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
5 On the Add password page, configure the settings and click Add.
Procedure
1 In a web browser, log in to VMware Aria Suite Lifecycle with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
4 On the Create environment page, configure the settings and click Next.
Setting Value
5 On the Select product page, select the check box for VMware Identity Manager, configure
these values, and click Next.
Setting Value
6 On the Accept license agreements page, scroll to the bottom and accept the license
agreement, and then click Next.
7 On the Certificate page, from the Select certificate drop-down menu, select the Workspace
One Access certificate, and click Next.
8 On the Infrastructure page, verify and accept the default settings, and click Next.
9 On the Network page, verify and accept the default settings, and click Next.
10 On the Products page, configure the deployment properties of Workspace ONE Access and
click Next.
Setting Value
12 On the Manual validations page, select the I took care of the manual steps above and am
ready to proceed check box and click Run precheck.
13 Review the validation report, remediate any errors, and click Re-run precheck.
14 Wait for all prechecks to complete with Passed messages and click Next.
15 On the Summary page, review the configuration details. To back up the deployment
configuration, click Export configuration.
17 Monitor the steps of the deployment graph until all stages become Completed.
Procedure
1 In a web browser, log in to VMware Aria Suite Lifecycle with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
4 On the Create environment page, configure the settings and click Next.
Setting Value
5 On the Select product page, select the check box for VMware Identity Manager, configure
these values, and click Next.
Setting Value
6 On the Accept license agreements page, scroll to the bottom and accept the license
agreement, and then click Next.
7 On the Certificate page, from the Select certificate drop-down menu, select the Clustered
Workspace One Certificate, and click Next.
8 On the Infrastructure page, verify and accept the default settings, and click Next.
9 On the Network page, verify and accept the default settings, and click Next.
10 On the Products page, configure the deployment properties of clustered Workspace ONE
Access and click Next.
Setting Value
b In the Cluster Virtual IP section, click Add Load Balancer and configure its settings.
Load Balancer IP: Use the IP address from your VMware Cloud Foundation Planning and Preparation Workbook.
Load Balancer FQDN: Use the FQDN from your VMware Cloud Foundation Planning and Preparation Workbook.
For each of the three cluster nodes (vidm-primary, vidm-secondary-1, and vidm-secondary-2), configure the following settings:
VM Name: Enter a VM name for the node.
FQDN: Enter the FQDN for the node.
IP address: Enter the IP address for the node.
e For each node, click advanced configuration and click Select Root Password.
12 On the Manual validations page, select the I took care of the manual steps above and am
ready to proceed check box and click Run precheck.
13 Review the validation report, remediate any errors, and click Re-run precheck.
14 Wait for all prechecks to complete with Passed messages and click Next.
15 On the Summary page, review the configuration details. To back up the deployment
configuration, click Export configuration.
17 Monitor the steps of the deployment graph until all stages become Completed.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
2 In the Hosts and Clusters inventory, expand the management domain vCenter Server and
data center.
4 Create the anti-affinity rule for the clustered Workspace ONE Access virtual machines.
Setting Value
Name: <management-domain-name>-anti-affinity-rule-wsa
Members:
n vidm-primary_VM
n vidm-secondary-1_VM
n vidm-secondary-2_VM
5 Create a virtual machine group for the clustered Workspace ONE Access nodes.
Setting Value
Type: VM Group
Members:
n vidm-primary_VM
n vidm-secondary-1_VM
n vidm-secondary-2_VM
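As an alternative to the vSphere Client steps above, the same rule and group can be created with PowerCLI. This is a minimal sketch with hypothetical cluster and VM names; it assumes the VMware.PowerCLI module and an existing Connect-VIServer session to the management domain vCenter Server.
# Hypothetical names; adjust to match your management domain and node VM names.
$cluster = Get-Cluster -Name "sfo-m01-cl01"
$wsaVms  = Get-VM -Name "vidm-primary", "vidm-secondary-1", "vidm-secondary-2"
# Anti-affinity (separate VMs) rule so the three nodes run on different ESXi hosts.
New-DrsRule -Cluster $cluster -Name "sfo-m01-anti-affinity-rule-wsa" -KeepTogether $false -VM $wsaVms
# VM group containing the clustered Workspace ONE Access nodes.
New-DrsClusterGroup -Cluster $cluster -Name "sfo-m01-vm-group-wsa" -VM $wsaVms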
Procedure
1 In a web browser, log in to the Workspace ONE Access instance with the admin user by
using the appliance configuration interface (https://<wsa_node_fqdn>:8443/cfg/login).
4 If you deployed a cluster, repeat this procedure for the remaining clustered Workspace ONE
Access nodes.
Procedure
1 Log in to the cross-region Workspace ONE Access instance by using a Secure Shell (SSH)
client.
vi /etc/resolv.conf
4 Add entries for domain and search to the end of the file and save the file. For example:
domain rainpole.io
search rainpole.io sfo.rainpole.io
5 If you deployed a clustered Workspace ONE Access instance, repeat this procedure for the
remaining nodes in the cluster.
Procedure
1 In a web browser, log in to Workspace ONE Access by using the administration interface to
the System Domain with configadmin user (https://<wsa_fqdn>/admin).
3 Click the Directories tab, and from the Add directory drop-down menu, select Add Active
Directory over LDAP/IWA.
4 On the Add directory page, configure the following settings, click Test connection and click
Save and next.
Setting Value
This Directory requires all connections to use STARTTLS (Optional): If you want to secure communication between Workspace ONE Access and Active Directory, select this option and paste the Root CA certificate in the SSL Certificate box.
Bind user password: Enter the password for the Bind user. For example: svc-wsa-ad_password.
5 On the Select the domains page, review the domain name and click Next.
6 On the Map user attributes page, review the attribute mappings and click Next.
7 On the Select the groups (users) you want to sync page, enter the
distinguished name for the folder containing your groups (For example OU=Security
Groups,DC=sfo,DC=rainpole,DC=io) and click Select.
8 For each Group DN you want to include, select the group to use by Workspace ONE Access
for each of the roles, and click Save then Next.
Directory Admin
ReadOnly Admin
Content Admin
Content Developers
9 On the Select the Users you would like to sync page, enter the distinguished name for the
folder containing your users (e.g. OU=Users,DC=sfo,DC=rainpole,DC=io) and click Next.
10 On the Review page, click Edit, from the Sync frequency drop-down menu, select Every 15
minutes, and click Save.
This procedure is only applicable if you deployed a clustered Workspace ONE Access instance. It does not apply to a standard Workspace ONE Access instance.
Procedure
1 In a web browser, log in to the clustered Workspace ONE Access instance by using
the administration interface to the System Domain with configadmin user (https://
<wsa_cluster_fqdn>/admin).
5 On the WorkspaceIDP__1 details page, under Connector(s) from the Add a connector drop-
down menu, select vidm-secondary-1_VM, configure the settings, and click Add connector.
Setting Value
Connector vidm-secondary-1_VM
Bind to AD Checked
7 In the IdP Hostname text box, enter the FQDN of the NSX load balancer virtual server for
Workspace ONE Access cluster.
8 Click Save.
You assign the following administrator roles to the corresponding user groups.
Procedure
1 In a web browser, log in to Workspace ONE Access by using the administration interface to
the System Domain with configadmin user (https://<wsa_fqdn>/admin).
b In the Users / User Groups search box, enter the name of the Active Directory group you
want to assign the role to, select the group, and click Save.
c Repeat this step to configure the Directory Admin and the ReadOnly Admin roles.
You assign the following administrative roles to corresponding Active Directory groups.
VMware Aria Suite Lifecycle Role Example Active Directory Group Name
Procedure
1 In a web browser, log in to VMware Aria Suite Lifecycle with the vcfadmin@local user by
using the user interface (https://<vrslcm_fqdn>).
3 In the navigation pane, click User management and click Add user / group.
4 On the Select users / groups page, in the search box, enter the name of the group you want
to assign the role to, select the Active Directory group, and click Next.
5 On the Select roles page, select the VCF Role role, and click Next.
7 Repeat this procedure to assign roles to the Content Release Manager and Content
Developer user groups.
Important If you plan to deploy VMware Aria Suite components, you must deploy Application
Virtual Networks before you configure NSX Federation. See Chapter 16 Deploying Application
Virtual Networks in VMware Cloud Foundation.
NSX Federation is supported between VCF and non-VCF deployments. If you choose to federate
NSX between VCF and non-VCF deployments, you are responsible for the deployment and
lifecycle of the NSX Global Managers, as well as maintaining version interoperability between
VCF-owned NSX Local Managers, non-VCF NSX Local Managers, and the NSX Global Manager.
n Password Management for NSX Global Manager Cluster in VMware Cloud Foundation
n Backup and Restore of NSX Global Manager Cluster in VMware Cloud Foundation
Global Manager: a system similar to NSX Manager that federates multiple Local Managers.
Local Manager: an NSX Manager system in charge of network and security services for a VMware
Cloud Foundation instance.
Cross-instance: the object spans more than one instance. You do not directly configure the span
of a segment. A segment has the same span as the gateway it is attached to.
Tunnel End Point (TEP): the IP address of a transport node (Edge node or Host) used for Geneve
encapsulation within an instance.
Remote Tunnel End Points (RTEP): the IP address of a transport node (Edge node only) used for
Geneve encapsulation across instances.
standalone tier-1 gateway: Configured in the Local Manager and used for services such as the Load Balancer. Owner: Local Manager. Span: single VMware Cloud Foundation instance.
local-instance tier-1 gateway: Configured in the Global Manager at a single location, this is a global tier-1 gateway used for segments that exist within a single VMware Cloud Foundation instance. Owner: Global Manager. Span: single VMware Cloud Foundation instance.
cross-instance tier-1 gateway: Configured in the Global Manager, this is a global tier-1 gateway used for segments that exist across multiple VMware Cloud Foundation instances. Owner: Global Manager. Span: multiple VMware Cloud Foundation instances.
See VMware Configuration Maximums for your version of NSX for information about the
maximum number of supported federated NSX Managers and other NSX federation maximums.
Note VI workload domains that share an NSX Manager are considered a single location.
Some tasks described in this section are to be performed on the first NSX instance while others
need to be performed on each NSX instance that is being federated. See the table below for
more information.
Enable high availability for NSX Federation Control Plane on one additional instance:
1 Create Global Manager Clusters for VMware Cloud Foundation
2 Replacing Global Manager Cluster Certificates in VMware Cloud Foundation
Each additional instance:
1 Prepare Local Manager for NSX Federation in VMware Cloud Foundation
2 Add Location to Global Manager
3 Stretching Segments between VMware Cloud Foundation Instances:
a Delete Existing Tier-0 Gateways in Additional Instances
b Connect Additional VMware Cloud Foundation Instances to Cross-Instance Tier-0 Gateway
c Connect Local Tier-1 Gateway to Cross-Instance Tier-0 Gateway
d Add Additional Instance as Locations to the Cross-Instance Tier-1 Gateway
Procedure
3 Create Anti-Affinity Rule for Global Manager Cluster in VMware Cloud Foundation
Create an anti-affinity rule to ensure that the Global Manager nodes run on different ESXi
hosts. If an ESXi host is unavailable, the Global Manager nodes on the other hosts continue
to provide support for the NSX management and control planes.
What to do next
Procedure
1 Download the NSX OVF file from the VMware download portal.
5 Select Local file, click Upload files, and navigate to the OVA file.
6 Click Next.
7 Enter a name and a location for the NSX Manager VM, and click Next.
The name you enter appears in the vSphere and vCenter Server inventory.
8 Select the compute resource on which to deploy the NSX Manager appliance page and click
Next.
9 Review and verify the OVF template details and click Next.
The Description panel on the right side of the wizard shows the details of the selected configuration. You can also refer to VMware Configuration Maximums to ensure that you choose the correct size for the scale of your environment.
n Click Next.
Note The virtual disk format is determined by the selected VM storage policy when using a
vSAN datastore.
13 Select the management network as the destination network and click Next.
The following steps are all located in the Customize Template section of the Deploy OVF
Template wizard.
14 In the Application section, enter the system root, CLI admin, and audit passwords for the NSX Manager. The root and admin credentials are mandatory fields. The passwords must meet complexity requirements, including:
n At least 12 characters
16 In the Network Properties section, enter the hostname of the NSX Manager.
18 Enter the default gateway, management network IPv4, and management network netmask.
19 In the DNS section, enter the DNS Server list and Domain Search list.
20 In the Services Configuration section, enter the NTP Server list and enable SSH.
21 Verify that all your custom OVF template specification is accurate and click Finish to initiate
the deployment.
Right-click the Global Manager VM and, from the Actions menu, select Power > Power on.
Procedure
1 SSH into the first NSX Global Manager node using the admin user account.
2 Run the following command to retrieve the Global Manager cluster ID.
4 Run the following command to retrieve the thumbprint of the Global Manager API certificate.
6 Log in to the second Global Manager node and run the following command to join this node
to the cluster:
where cluster_ID is the value from step 3 and certificate_thumbprint is the value from step 5.
A warning message displays: Data on this node will be lost. Are you sure? (yes/
no).
The joining and cluster stabilizing process might take from 10 to 15 minutes.
Verify that the status for every cluster service group is UP before making any other cluster
changes.
a Log in to the Global Manager web interface and select Configuration > Global Manager
Appliances.
b Verify that the Cluster status is green and that the cluster node is Available.
Procedure
1 In a web browser, log in to the management domain or VI workload domain vCenter Server at
https://vcenter_server_fqdn/ui.
4 Select the Global Manager cluster and click the Configure tab.
Option Description
Members Click Add, select the three Global Manager nodes, and
click OK.
Procedure
3 Click Set Virtual IP and enter the VIP address for the cluster. Ensure that VIP is part of the
same subnet as the other management nodes.
4 Click Save.
From a browser, log in to the Global Manager using the virtual IP address assigned to the
cluster at https://gm_vip_fqdn/.
Procedure
1 In a web browser, log in to Local Manager cluster for the management domain or VI workload
domain at https://lm_vip_fqdn/).
a In the navigation pane, select IP Address Pools and click Add IP address pool.
b Enter a name.
d In the Set Subnets dialog box, click Add subnet > IP Ranges.
g Click Save.
c Under Global Fabric Settings, Click Edit for Remote Tunnel Endpoint.
Procedure
1 In a web browser, log in to Global Manager cluster for the management or VI workload
domain at https://gm_vip_fqdn/.
3 Click Make Active and enter a name for the active Global Manager.
4 Click Save.
Procedure
b From the vCenter UI, open the web console of one of the NSX Managers and log in as the admin user.
c Run the command start service ssh to enable SSH on the NSX Manager.
d Use a Secure Shell (SSH) client to log in to the same NSX Manager as the admin user.
e Run the command get certificate cluster thumbprint to retrieve the Local
Manager cluster VIP thumbprint.
g Run the stop service ssh command to deactivate SSH on the NSX Manager.
b Select System > Location Manager and click Add On-Prem Location.
c In the Add New Location dialog box, enter the location details.
Option Description
Username and Password Provide the admin user's credentials for the NSX
Manager at the location.
d Click Save
a On the Location Manager page, in the Locations section, click Networking under the
location you are adding then click Configure.
b On the Configure Edge Nodes for Stretch Networking page, click Select All
c In the Remote Tunnel Endpoint Configuration pane enter the following details.
Option Value
d Click Save.
a Select the Global Manager context from the drop-down menu.
Note You may need to refresh your browser or log out and log in to the Global Manager to see the drop-down menu.
d Verify that you have a recent backup and click Proceed to import.
e In the Preparing for import dialog box, click Next and then click Import.
Local Manager objects imported into the Global Manager are owned by the Global Manager
and appear in the Local Manager with a GM icon. You can modify these objects only from the
Global Manager.
Procedure
1 In a web browser, log in to Global Manager for the management or VI workload domain at
https://gm_vip_fqdn/.
Tier-1 Gateway Name Enter a name for the new tier-1 gateway.
5 Click Save.
b Enable all available sources, click Save, and click Close editing.
Procedure
4 On the Segments tab, click the vertical ellipsis for the cross-instance_nsx_segment and click Edit.
5 Change the Connected Gateway from instance_tier1 to cross-instance_tier1, click Save, and then click Close editing.
Procedure
b On the Tier-1 Gateways tab, click the vertical ellipsis for the additional_instance_tier1_gateway and click Edit.
c Under Linked Tier-0 gateway, click the X to disconnect the additional_instance_tier0_gateway, click Save, and click Close editing.
5 On the Tier-0 Gateway page, click the vertical ellipsis for the additional_instance_tier0_gateway and click Delete.
6 Click Delete.
Procedure
c On the Tier-0 Gateway page, click the vertical ellipsis for the cross-instance_tier0 gateway and click Edit.
Setting Value
Edge Cluster Select the Edge cluster name of the instance being
added.
e Click Save.
c Enter a name for the interface and select the instance location.
d Set the type to External and enter the IP address for the interface.
e Select the segment that the interface is connected to and the Edge node corresponding
to the instance.
You can enable BFD if the network supports it and is configured for BFD.
b Enter the IP address for the neighbor and select the instance location.
d Click Timers & Password and set the Hold Down Time to 12 and Keep Alive Time to 4.
e Enter the BGP neighbor password, click Save, and then click Close.
a Expand Route Re-Distribution and next to the location you are adding, click Set.
d In the Set route redistribution dialog box, select all listed sources and click Apply.
e Click Add to finish editing the default route redistribution and click Apply.
f Click Save
Procedure
4 On the Tier-1 Gateway page, click the vertical ellipsis menu for the this_instance_tier1_gateway and click Edit.
5 Change the Connected Gateway to cross_instance_tier0_gateway.
7 Under Locations, delete all locations except the location of the instance you are working with.
Procedure
4 On the Tier-1 Gateway page, click the vertical ellipsis for the cross-instance_tier1 gateway and click Edit.
Setting Value
Edge Cluster: Select the NSX Edge cluster of this instance.
Prerequisites
Create the standby Global Manager cluster. See Create Global Manager Clusters for VMware
Cloud Foundation.
Procedure
b From the vCenter UI, open the web console of one of the NSX Managers and log in as the admin user.
c Run the command start service ssh to enable SSH on the NSX Manager.
d Use a Secure Shell (SSH) client to log in to the same NSX Manager as the admin user.
e Run the command get certificate cluster thumbprint to retrieve the Global
Manager cluster thumbprint.
g Run the stop service ssh command to deactivate SSH on the NSX Manager.
d Enter the location name, FQDN, user name and password, and the SHA-256 thumbprint that you retrieved earlier.
Prerequisites
Procedure
c In the Import CA Certificate dialog box, enter a name for the root CA certificate.
d For Certificate Contents, select the root CA certificate you created in step 2c and click
Import.
3 Import certificates for the Global Manager nodes and the load balanced virtual server
address.
c In the Certificate Contents, browse to the previously created certificate file with the
extension chain.pem and select the file.
d In the Private Key, browse to the previously created private key with the extension .key,
select the file, and click Import.
Procedure
4 Replace the default certificate on the first Global Manager node with the CA-signed
certificate.
a Start the Postman application in your web browser and log in.
Setting Value
Setting Value
Key Content-Type
e In the request pane at the top, send the following HTTP request.
Setting Value
After the Global Manager sends a response, a 200 OK status is displayed on the Body
tab.
c Right-click the node and select Actions > Power > Restart guest OS.
Table 19-1. URLs for Replacing the Global Manager Node Certificates
gm_node2_fqdn: https://gm_node2_fqdn/api/v1/node/services/http?action=apply_certificate&certificate_id=gm_vip_fqdn_certificate_ID
gm_node3_fqdn: https://gm_node3_fqdn/api/v1/node/services/http?action=apply_certificate&certificate_id=gm_fqdn_certificate_ID
gm_vip_fqdn: https://gm_vip_fqdn/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=gm_vip_fqdn_certificate_ID
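If you prefer not to use Postman, the same request can be sent from PowerShell 7 with Invoke-RestMethod. This is a minimal sketch under the assumption of basic authentication with the admin account; the node FQDN and certificate ID are placeholders for the values from the table above.
# Placeholders; use the URL and certificate ID for the node you are updating.
$gmNode        = "gm_node2_fqdn"
$certificateId = "gm_vip_fqdn_certificate_ID"
$credential    = Get-Credential -UserName "admin" -Message "NSX Global Manager admin password"
$uri = "https://$gmNode/api/v1/node/services/http?action=apply_certificate&certificate_id=$certificateId"
# A 200 OK response indicates that the node accepted the certificate.
Invoke-RestMethod -Method Post -Uri $uri -Authentication Basic -Credential $credential -SkipCertificateCheck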
Procedure
3 Replace the default certificate for the second Global Manager node with the CA-signed
certificate by using the first Global Manager node as a source.
a Start the Postman application in your web browser and log in.
Setting Value
Key Content-Type
Setting Value
After the NSX Manager appliance responds, the Body tab displays a 200 OK status.
4 To upload the CA-signed certificate on the third Global Manager node, repeat steps 2 to step
4 with appropriate values.
c Right-click the second and third Global Manager nodes and click Actions > Power >
Restart guest OS.
b For each node, navigate to System > Global Manager Appliances > View Details and
confirm that the status is REPO_SYNC = SUCCESS.
a Start the Postman application in your web browser and log in.
Setting Value
Setting Value
Key Content-Type
Setting Value
After the NSX Global Manager sends a response, a 200 OK status is displayed on the Body
tab.
Procedure
c Run the command to retrieve the SHA-256 thumbprint of the virtual IP for the NSX
Manager cluster certificate.
c Under Locations, select the Local Manager instance, and click Actions.
d Click Edit Settings and update NSX Local Manager Certificate Thumbprint.
f Wait for the Sync Status to display success and verify that all Local Manager nodes
appear.
4 Under Locations, update the Local Manager certificate thumbprint for all the instances.
The Global Manager cluster stores the configured state of the segments. If the Global Manager
appliances become unavailable, the network traffic in the data plane is intact but you can make
no configuration changes.
Procedure
6 The protocol text box is already filled in. SFTP is the only supported protocol.
7 In the Directory Path text box, enter the absolute directory path where the backups will be
stored.
8 Enter the user name and password required to log in to the backup file server.
The first time you configure a file server, you must provide a password. Subsequently, if you
reconfigure the file server, and the server IP or FQDN, port, and user name are the same, you
do not need to enter the password again.
9 Leave the SSH Fingerprint blank and accept the fingerprint provided by the server after you
click Save in a later step.
10 Enter a passphrase.
Note You will need this passphrase to restore a backup. If you forget the passphrase, you
cannot restore any backups.
You can schedule recurring backups or trigger backups for configuration changes.
b Click Weekly and set the days and time of the backup, or click Interval and set the interval
between backups.
c Enabling the Detect NSX configuration change option triggers an unscheduled full configuration backup when any change in the user configuration is detected; runtime and non-configuration related changes do not trigger a backup. For Global Manager, this setting triggers a backup if any changes in the database are detected, such as the addition or removal of a Local Manager, a Tier-0 gateway, or a DFW policy.
d You can specify a time interval for detecting database configuration changes. The valid range is 5 minutes to 1,440 minutes (24 hours). This option can potentially generate a large number of backups. Use it with caution.
e Click Save.
What to do next
After you configure a backup file server, you can click Backup Now to manually start a backup
at any time. Automatic backups run as scheduled. You see a progress bar of your in-progress
backup.
Do not change the configuration of the NSX Global Manager cluster while the restore process is
in progress.
Prerequisites
n Verify that you have the login credentials for the backup file server.
n Verify that you have the SSH fingerprint of the backup file server. Only SHA256 hashed
ECDSA (256 bit) host key is accepted as a fingerprint.
Procedure
1 If any nodes in the appliance cluster that you are restoring are online, power them off.
n If the backup listing for the backup you are restoring contains an IP address, you must
deploy the new Global Manager node with the same IP address. Do not configure the
node to publish its FQDN.
n If the backup listing for the backup you are restoring contains an FQDN, you must
configure the new appliance node with this FQDN and publish the FQDN. Only lowercase
FQDN is supported for backup and restore.
4 Make the Global Manager active. You can restore a backup only on an active Global Manager.
c On the Location Manager page, click Make Active, enter a name for the Global Manager,
and click Save.
5 On the main navigation bar, click System > Backup & Restore and then click Edit.
9 In the Destination Directory text box, enter the absolute directory path where the backups
are stored.
10 Enter the passphrase that was used to encrypt the backup data.
11 Leave the SSH Fingerprint blank and accept the fingerprint provided by the server after you
click Save in a later step.
14 After the restored manager node is up and functional, deploy additional nodes to form an NSX Global Manager cluster.
The default management cluster must be stretched before a VI workload domain cluster can be
stretched. This ensures that the NSX control plane and management VMs (vCenter, NSX, SDDC
Manager) remain accessible if the stretched cluster in the second availability zone goes down.
Note Starting with VMware Cloud Foundation 5.2.1.1 you can stretch a cluster that uses
vSAN ESA. Earlier versions of VMware Cloud Foundation only support stretching vSAN OSA
clusters.
n You cannot stretch a cluster that shares a vSAN Storage Policy with any other clusters.
n Planned maintenance
You can perform a planned maintenance on an availability zone without any downtime and
then migrate the applications after the maintenance is completed.
n Automated recovery
Stretching a cluster automatically initiates VM restart and recovery, and has a low recovery
time for the majority of unplanned failures.
n Disaster avoidance
With a stretched cluster, you can prevent service outages before an impending disaster.
This release of VMware Cloud Foundation does not support deleting or unstretching a cluster.
Availability Zones
An availability zone is a collection of infrastructure components. Each availability zone runs on its
own physically distinct, independent infrastructure, and is engineered to be highly reliable. Each
zone should have independent power, cooling, network, and security.
Additionally, these zones should be physically separate so that disasters affect only one zone.
The physical distance between availability zones must be short enough to offer low, single-digit millisecond latency (less than 5 ms) and large bandwidth (10 Gbps) between the zones.
Availability zones can either be two distinct data centers in a metro distance, or two safety or fire
sectors (data halls) in the same large-scale data center.
Regions
Regions are in two distinct locations - for example, region A can be in San Francisco and region
B in Los Angeles (LAX). The distance between regions can be rather large. The latency between
regions must be less than 150 ms.
Note If VLAN is stretched between AZ1 and AZ2, the Layer 3 network must also be stretched
between the two AZs.
VM Management network: VLAN-backed, MTU 1500.
Component Requirement
Layer 3 gateway availability: For VLANs that are stretched between availability zones, configure a data center-provided method to fail over the Layer 3 gateway between availability zones, for example, VRRP or HSRP.
DHCP availability For VLANs that are stretched between availability zones,
provide high availability for the DHCP server so that a
failover operation of a single availability zone will not
impact DHCP availability.
BGP routing Each availability zone data center must have its own
Autonomous System Number (ASN).
Ingress and egress traffic:
n For VLANs that are stretched between availability zones, traffic flows in and out of a single zone. Local egress is not supported.
n For VLANs that are not stretched between availability zones, traffic flows in and out of the zone where the VLAN is located.
n For NSX virtual network segments that are stretched between regions, traffic flows in and out of a single availability zone. Local egress is not supported.
Latency:
vSphere
n Less than 150 ms latency RTT for vCenter Server connectivity.
n Less than 150 ms latency RTT for vMotion connectivity.
n Less than 5 ms latency RTT for vSAN host connectivity.
vSAN
n Less than 200 ms latency RTT for up to 10 hosts per site.
n Less than 100 ms latency RTT for 11-15 hosts per site.
NSX Managers
n Less than 10 ms latency RTT between NSX Managers.
n Less than 150 ms latency RTT between NSX Managers and transport nodes.
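A rough way to sanity-check the round-trip latency between availability zones before stretching is an ICMP test from a management host in one zone to a host in the other. The following is a minimal sketch using PowerShell 7; the target FQDN is a hypothetical placeholder, and ICMP RTT is only an approximation of the latency the vSAN and vMotion networks will see.
# Hypothetical target in the second availability zone.
$target  = "esxi-az2-01.rainpole.io"
$samples = Test-Connection -TargetName $target -Count 20
$avgRtt  = ($samples | Measure-Object -Property Latency -Average).Average
"Average RTT to {0}: {1} ms (vSAN data nodes require less than 5 ms RTT)" -f $target, [math]::Round($avgRtt, 2)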
You deploy the vSAN witness host using an appliance instead of using a dedicated physical ESXi
host as a witness host. The witness host does not run virtual machines and must run the same
version of ESXi as the ESXi hosts in the stretched cluster. It must also meet latency and Round
Trip Time (RTT) requirements.
There are separate vSAN witness appliances for vSAN OSA and vSAN ESA. You must deploy the
witness appliance that matches the cluster type that you are stretching.
See the Physical Network Requirements for Multiple Availability Zone table within Stretched
Cluster Requirements.
For more information, see vSAN Witness Design for VMware Cloud Foundation.
Prerequisites
Download the VMware vSAN Witness Appliance .ova file from the Broadcom Support Portal.
n For stretching a vSAN OSA cluster download the appliance for vSAN OSA.
n For stretching a vSAN ESA cluster download the appliance for vSAN ESA.
Procedure
5 On the Select an OVF template page, select Local file, click Upload files, browse to the
location of the vSAN witness host OVA file, and click Next.
6 On the Select a name and folder page, enter a name for the virtual machine and click Next.
8 On the Review details page, review the settings and click Next.
9 On the License agreements page, accept the license agreement and click Next.
12 On the Select networks page, select a portgroup for the witness and management network,
and click Next.
13 On the Customize template page, enter the root password for the witness and click Next.
14 On the Ready to complete page, click Finish and wait for the process to complete.
a In the inventory panel, navigate to vCenter Server > Datacenter > Cluster.
b Right-click the vSAN witness host and from the Actions menu, select Power > Power on.
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
a Right-click the vSAN witness host and click Open remote console.
c Select Set static IPv4 address and network configuration and press the Space bar.
d Enter IPv4 Address, Subnet Mask and Default Gateway and press Enter.
f Select Use the following DNS Server address and hostname and press the Space bar.
g Enter Primary DNS Server, Alternate DNS Server and Hostname and press Enter.
Procedure
1 Use the vSphere Client to log in to the vCenter Server containing the cluster that you want to
stretch.
Important You must add the vSAN Witness Host to the datacenter. Do not add it to a folder.
4 On the Name and location page, enter the Fully Qualified Domain Name (FQDN) of the vSAN
Witness Host and click Next.
5 On the Connection settings page, enter administrator credentials and click Next.
6 On the Host summary page, review the summary of the host details and click Next.
7 On the Host lifecycle page, the check box Manage host with an image is selected by default.
n If you want to manage the host with an image, leave the check box selected and click
Next.
n If you do not want to manage the host with an image, deselect the check box and click
Next.
8 If you manage the host with an image, on the Image page, set up the desired image and click
Next.
9 On the Assign license page, assign an existing license and click Next.
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
2 Select the vSAN witness host and click the Configure tab.
a In the System section, click Time configuration and click the Edit button.
Procedure
1 In the inventory panel of the vCenter Server Client, select vCenter Server > Datacenter.
2 Select the vSAN witness host and click the Configure tab.
3 Remove the dedicated witness traffic VMkernel adapter on the vSAN Witness host.
b Select the kernel adapter vmk1 with secondaryPg as Network label and click Remove.
4 Remove the virtual machine network port group on the vSAN witness host.
c Click the vertical ellipsis and from the drop-down menu, select Remove.
f In the VM Network pane, click the vertical ellipsis and from the drop-down menu, select
Remove.
5 Enable witness traffic on the VMkernel adapter for the management network of the vSAN
witness host.
a On the VMkernel adapters page, select the vmk0 adapter and click Edit.
b In the vmk0 - edit settings dialog box, click Port properties, select the vSAN check box,
and click OK.
When you stretch a cluster, VMware Cloud Foundation modifies the site disaster tolerance
setting for the storage policy associated with the datastore of that cluster from None - standard cluster
to Site mirroring - stretched cluster. This affects all VMs that use the default datastore policy in that
cluster. If you do not want to change the site disaster tolerance setting for specific VMs, apply a
different storage policy to those VMs before stretching the cluster.
This example use case has two availability zones in two buildings in an office campus - AZ1 and
AZ2. Each availability zone has its own power supply and network. The management domain is
on AZ1 and contains the default cluster, SDDC-Cluster1. This cluster contains four ESXi hosts.
vSAN network VLAN ID=1623
MTU=9000
Network=172.16.23.0
netmask 255.255.255.0
gateway 172.16.23.253
IP range=172.16.23.11 - 172.16.23.59
vMotion network VLAN ID=1622
MTU=9000
Network=172.16.22.0
netmask 255.255.255.0
gateway 172.16.22.253
IP range=172.16.22.11 - 172.16.22.59
There are four ESXi hosts in AZ2 that are not in the VMware Cloud Foundation inventory yet.
We will stretch the default cluster SDDC-Cluster1 in the management domain from AZ1 to AZ2.
The management cluster SDDC-Cluster1 is stretched across AZ1 (Host 1 through Host 4) and AZ2 (Host 5 through Host 8). Layer 3 routing connects the vSAN and vMotion networks between the AZ1 and AZ2 hosts, and the vSAN networks of both availability zones to the witness host. The example uses the following networks:
AZ1: vMotion VLAN 1612 (172.16.12.0/24, gateway 172.16.12.253), NSX-T Host Overlay VLAN 1614 (172.16.14.0/24, gateway 172.16.14.253), and vSAN VLAN 1613 (172.16.13.0/24, gateway 172.16.13.253).
AZ2: vMotion VLAN 1622 (172.16.22.0/24, gateway 172.16.22.253), NSX-T Host Overlay VLAN 1624 (172.16.24.0/24, gateway 172.16.24.253), and vSAN VLAN 1623 (172.16.23.0/24, gateway 172.16.23.253).
To stretch a cluster for VMware Cloud Foundation on Dell VxRail, perform the following steps:
Prerequisites
n Verify that you have completed the Planning and Preparation Workbook with the
management domain or VI workload domain deployment option included.
n Verify that your environment meets the requirements listed in the Prerequisite Checklist sheet
in the Planning and Preparation Workbook.
n Ensure that you have enough hosts such that there is an equal number of hosts on each
availability zone. This is to ensure that there are sufficient resources in case an availability
zone goes down completely.
n Deploy and configure a vSAN witness host. See Deploy and Configure vSAN Witness Host.
n If you are stretching a cluster in a VI workload domain, the default management vSphere
cluster must have been stretched.
n Download the stretch cluster script from https://community.broadcom.com/vmware-code/viewdocument/vcf-on-vxrail-stretch-cluster-7.
Note Starting with VMware Cloud Foundation 5.2.1.1 you can stretch a cluster that uses
vSAN ESA. Earlier versions of VMware Cloud Foundation only support stretching vSAN OSA
clusters.
n The cluster shares a vSAN Storage Policy with any other clusters.
Procedure
2 Using SSH, log in to the SDDC Manager appliance with the user name vcf and the password
you specified in the deployment parameter workbook.
3 Run the script with -h option for details about the script options.
python initiate_stretch_cluster_vxrail_<version>.py -h
4 Run the following command to prepare the cluster to be stretched. The command creates
affinity rules for the VMs to run on the preferred site:
Enter the SSO user name and password when prompted to do so.
Once the workflow is triggered, track the task status in the SDDC Manager UI. If the task fails,
debug and fix the issue and retry the task from the SDDC Manager UI. Do not run the script
again.
5 Use the VxRail vCenter plug-in to add the additional hosts in Availability Zone 2 to the cluster
by performing the VxRail Manager cluster expansion work flow.
n vSAN gateway IP for the preferred (primary) and non-preferred (secondary) site
n vSAN CIDR for the preferred (primary) and non-preferred (secondary) site
Once the workflow is triggered, the task is tracked in the SDDC Manager UI. If the task fails,
debug and fix the issue and retry from SDDC Manager UI. Do not run the script again.
8 Monitor the progress of the AZ2 hosts being added to the cluster.
9 Validate that stretched cluster operations are working correctly by logging in to the vSphere
Web Client.
1 On the home page, click Host and Clusters and then select the stretched cluster.
3 Click Retest.
1 On the home page, click Policies and Profiles > VM Storage Policies > vSAN Default
Storage Policies.
2 Select the policy associated with the vCenter Server for the stretched cluster and click
Check Compliance.
3 Click VM Compliance and check the Compliance Status column for each VM.
Procedure
1 In a web browser, log in to NSX Manager for the management or workload domain to be
stretched at https://nsx_manager_fqdn/login.jsp?local=true.
4 Select the gateway and from the ellipsis menu, click Edit.
a Expand the Routing section and in the IP prefix list section, click Set.
b In the Set IP prefix list dialog box, click Add IP prefix list.
c Enter Any as the prefix name and under Prefixes, click Set.
d In the Set prefixes dialog box, click Add Prefix and configure the following settings.
Setting Value
Network any
Action Permit
6 Repeat step 5 to create the default route IP prefix set with the following configuration.
Setting Value
Network 0.0.0.0/0
Action Permit
Procedure
3 Select the gateway, and from the ellipsis menu, click Edit.
a Expand the Routing section and in the Route maps section, click Set.
b In the Set route maps dialog box, click Add route map.
e On the Set match criteria dialog box, click Add match criteria and configure the following
settings.
Local Preference 80 90
5 Repeat step 4 to create a route map for outgoing traffic from availability zone 2 with the
following configuration.
Setting Value
Type IP Prefix
Members Any
Action Permit
You configure two BGP neighbors with route filters for the uplink interfaces in availability zone 2.
For both BGP neighbors in availability zone 2, the hold down time is 12 and Maximum Routes is left empty (Table 20-4. Route Filters for BGP Neighbors for Availability Zone 2).
Procedure
3 Select the gateway and from the ellipsis menu, click Edit.
b In the Set BGP neighbors dialog box, click Add BGP neighbor and configure the following
settings.
Setting Value
IP address ip_bgp_neighbor1
BFD Deactivated
Remote AS asn_bgp_neighbor1
Hold downtime 12
Password bgp_password
d In the Set route filter dialog box, click Add route filter and configure the following
settings.
Setting Value
Enabled Activated
In Filter rm-in-az2
Maximum Routes -
By default, when you stretch a cluster, the vSAN-tagged VMkernel adapter is used to carry traffic
destined for the vSAN witness host. With witness traffic separation, you can use a separately
tagged VMkernel adapter instead of extending the vSAN data network to the witness host. This
feature allows for a more flexible network configuration by allowing for separate networks for
node-to-node and node-to-witness communication.
Prerequisites
You must have a stretched cluster before you can configure it for witness traffic separation.
Procedure
3 Right-click the vSphere distributed switch for the cluster and select Distributed Port Group >
New Distributed Port Group.
4 Enter a name for the port group for the first availability zone and click Next.
9 On the Teaming and failover page, modify the failover order of the uplinks to match the
existing failover order of the management traffic and click Next.
12 On the Ready to Complete page, review your selections and click Finish.
Procedure
1 In a web browser, log in to the first ESXi host in the stretched cluster using the VMware Host
Client.
2 In the navigation pane, click Manage and click the Services tab.
4 Open an SSH connection to the first ESXi host in the stretched cluster.
5 Log in as root.
8 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
9 Repeat these steps for each ESXi host in the stretched cluster.
Procedure
3 Right-click the witness distributed port group for the first availability zone, for example,
AZ1_WTS_PG, and select Add VMkernel Adapters.
4 Click + Attached Hosts, select the availability zone 1 hosts from the list, and click OK.
5 Click Next.
7 Select Use static IPv4 settings and enter the IP addresses and the subnet mask to use for the
witness traffic separation network.
8 Click Next.
10 Repeat these steps for the witness distributed port group for the second availability zone.
Procedure
3 For each host in the stretched cluster, click Configure > Networking > VMkernel adapters to
determine which VMkernel adapter to use for witness traffic. For example, vmk5.
4 In a web browser, log in to the first ESXi host in the stretched cluster using the VMware Host
Client.
5 In the navigation pane, click Manage and click the Services tab.
For example:
10 Verify that the ESXi host can access the witness host:
Replace <vmkernel_adapter> with the VMkernel adapter configured for witness traffic, for
example vmk5. Replace <witness_host_ip_address> with the witness host IP address.
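A connectivity check of this kind typically uses the standard ESXi vmkping utility; assuming that tool, the command is similar to the following:
vmkping -I <vmkernel_adapter> <witness_host_ip_address>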
11 In the VMware Host Client, select the TSM-SSH service for the ESXi host and click Stop.
Prerequisites
Procedure
1 Use the VxRail vCenter plug-in to add the additional hosts in availability zone 1 or availability
zone 2 to the cluster by performing the VxRail Manager cluster expansion work flow.
2 Log in to SDDC Manager and run the script to trigger the workflow to import the newly
added hosts in the SDDC Manager inventory.
In the script, provide the root credentials for each host and specify which fault domain the
host should be added to.
3 Using SSH, log in to the SDDC Manager VM with the username vcf and the password you
specified in the deployment parameter workbook.
n vSAN gateway IP for the preferred (primary) and non-preferred (secondary) site
n vSAN CIDR for the preferred (primary) and non-preferred (secondary) site
6 Once the workflow is triggered, track the task status in the SDDC Manager UI.
If the task fails, debug and fix the issue and retry from SDDC Manager UI. Do not run the
script again.
What to do next
If you add hosts to a stretched cluster configured for witness traffic separation, perform the
following tasks for the added hosts:
You use the built-in monitoring capabilities for these typical scenarios.
Scenario Examples
Are the systems online? A host or other component shows a failed or unhealthy status.
Why did a storage drive fail? Hardware-centric views spanning inventory, configuration, usage, and event history to provide for diagnosis and resolution.
Is the infrastructure meeting tenant service level agreements (SLAs)? Analysis of system and device-level metrics to identify causes and resolutions.
At what future time will the systems get overloaded? Trend analysis of detailed system and device-level metrics, with summarized periodic reporting.
What person performed which action and when? History of secured user actions, with periodic reporting. Workflow task history of actions performed in the system.
In addition to the most recent tasks, you can view and search for all tasks by clicking View All
Tasks at the bottom of the Recent Tasks widget. This opens the Tasks panel.
Note For more information about controlling the widgets that appear on the Dashboard page of
SDDC Manager UI, see Tour of the SDDC Manager User Interface.
n Search tasks by clicking the filter icon in the Task column header and entering a search string.
n Filter tasks by status by clicking the filter icon in Status column. Select by category All,
Failed, Successful, Running, or Pending.
Note Each category also displays the number of tasks with that status.
n Clear all filters by clicking Reset Filter at the top of the Tasks panel.
Note You can also sort the table by the contents of the Status and Last Occurrence columns.
n If a task is in a Failed state, you can also attempt to restart it by clicking Restart Task.
n If a task is in a Failed state, click on the icon next to the Failed status to view a detailed report
on the cause.
Note You can filter subtasks in the same way you filter tasks.
Note You can also sort the table by the contents of the Status and Last Occurrence columns.
sddc-manager-ui-activity.log /var/log/vmware/vcf/sddc-manager-ui-app
domainmanager-activity.log /var/log/vmware/vcf/domainmanager
operationsmanager-activity.log /var/log/vmware/vcf/operationsmanager
lcm-activity.log /var/log/vmware/vcf/lcm
vcf-commonsvcs-activity.log /var/log/vmware/vcf/commonsvcs
{
"timestamp":"", "username":"", "clientIP":"", "userAgent":"", "api":"", "httpMethod":"",
"httpStatus" :"", "operation" :"", "remoteIP" :""
}
n username: The username of the system from which the API request is triggered. For example:
"administrator@vsphere.local".
n timestamp: Date and time of the operation performed in the UTC format "YYYY-MM-
DD'T'HH:MM:SS.SSSXXX". For example: "2022-01-19T16:59:01.9192".
n clientIP: The IP address of the user's system. For example: "10.0.0.253".
n userAgent: The user’s system information such as the web browser name, web browser
version, operating system name, and operating system architecture type. For example:
"Mozilla/5.0 (Windows NT 6.3; Win 64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/97.0.4692.71 Safari/537.36".
n api: The API invoked to perform the operation. For example: "/domainmanager/v1/vra/
domains".
n httpStatus: The response code received after invoking the API. For example: 200.
n operation: The operation or activity that was performed. For example: "Gets VMware Aria
Automation integration status for workload domains".
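Putting these fields together, an audit log entry assembled from the example values above looks similar to the following (the httpMethod and remoteIP values are assumed for illustration):
{
"timestamp":"2022-01-19T16:59:01.9192", "username":"administrator@vsphere.local", "clientIP":"10.0.0.253", "userAgent":"Mozilla/5.0 (Windows NT 6.3; Win 64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36", "api":"/domainmanager/v1/vra/domains", "httpMethod":"GET", "httpStatus":"200", "operation":"Gets VMware Aria Automation integration status for workload domains", "remoteIP":"10.0.0.253"
}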
The log history is stored for 30 days. The maximum file size of the log retention file is set to 100
MB.
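To inspect these activity logs directly on the SDDC Manager appliance, you can tail the relevant file from the locations listed above. For example:
tail -f /var/log/vmware/vcf/domainmanager/domainmanager-activity.log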
Log Analysis
You can perform log aggregation and analysis by integrating VMware Aria Operations for Logs
with VMware Cloud Foundation. For more information, see Implementation of Intelligent Logging
and Analytics for VMware Cloud Foundation.
When you initially deploy VMware Cloud Foundation, you complete the deployment parameter
workbook to provide the system with the information required for bring-up. This includes up to
two DNS servers and up to two NTP servers. You can reconfigure these settings at a later date,
using the SDDC Manager UI.
SDDC Manager uses DNS servers to provide name resolution for the components in the system.
When you update the DNS server configuration, SDDC Manager performs DNS configuration
updates for the following components:
n SDDC Manager
n vCenter Servers
n ESXi hosts
n NSX Managers
n VxRail Manager
If the update fails, SDDC Manager rolls back the DNS settings for the failed component. Fix the
underlying issue and retry the update starting with the failed component.
Note There is no rollback for VMware Aria Suite Lifecycle. Check the logs, resolve any issues,
and retry the update.
Updating the DNS server configuration can take some time to complete, depending on the size
of your environment. Schedule DNS updates at a time that minimizes the impact to the system
users.
Prerequisites
n Verify that both forward and reverse DNS resolution are functional for each VMware Cloud
Foundation component using the updated DNS server information.
n Verify that the new DNS server is reachable from each of the VMware Cloud Foundation
components.
n Verify all VMware Cloud Foundation components are reachable from SDDC Manager.
n Verify that all VMware Cloud Foundation components are in an Active state.
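For example, you can use nslookup against the new DNS server to verify forward and reverse resolution for a component before making the change (host name and server IP are assumed for illustration):
nslookup sddc-manager.vrack.vsphere.local <new_dns_server_ip>
nslookup <sddc_manager_ip_address> <new_dns_server_ip>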
Procedure
c Expand the Edit DNS configuration section, update the Primary DNS server and
Alternative DNS server, and click Save.
SDDC Manager uses NTP servers to synchronize time between the components in the system.
You must have at least one NTP server. When you update the NTP server configuration, SDDC
Manager performs NTP configuration updates for the following components:
n SDDC Manager
n vCenter Servers
n ESXi hosts
n NSX Managers
n VxRail Manager
If the update fails, SDDC Manager rolls back the NTP settings for the failed component. Fix the
underlying issue and retry the update starting with the failed component.
Note There is no rollback for the VMware Aria Suite Lifecycle. Check the logs, resolve any
issues, and retry the update.
Updating the NTP server configuration can take some time to complete, depending on the size
of your environment. Schedule NTP updates at a time that minimizes the impact to the system
users.
Prerequisites
n Verify the new NTP server is reachable from the VMware Cloud Foundation components.
n Verify the time skew between the new NTP servers and the VMware Cloud Foundation
components is less than 5 minutes.
n Verify all VMware Cloud Foundation components are reachable from SDDC Manager.
Procedure
c Expand the Edit NTP configuration section, update the NTP server, and click Save.
To run the SoS utility, SSH in to the SDDC Manager appliance using the vcf user account. For
basic operations, enter the following command:
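sudo /opt/vmware/sddc-support/sos --option-name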
To list the available command options, use the --help long option or the -h short option.
Note You can specify options in the conventional GNU/POSIX syntax, using -- for the long
option and - for the short option.
For privileged operations, enter su to switch to the root user, and navigate to the /opt/vmware/
sddc-support directory and type ./sos followed by the options required for your desired
operation.
For information about collecting log files using the SoS utility, see Collect Logs for Your VMware
Cloud Foundation System.
Option Description
--short Display detailed health results only for failures and warnings.
--domain-name DOMAINNAME Specify the name of the workload domain on which to perform the SoS operation.
To run the operation on all workload domains, specify --domain-name ALL.
Note If you omit the --domain-name flag and workload domain name, the SoS
operation is performed only on the management domain.
You can combine --domain-name with --clusternames to further limit the scope of an
operation. This can be useful in a scaled environment with a large number of ESXi
hosts.
--clusternames CLUSTERNAMES Specify the vSphere cluster names associated with a workload domain for which you want to collect ESXi and Workload Management (WCP) logs.
Enter a comma-separated list of vSphere clusters. For example, --clusternames
cluster1, cluster2.
Note If you specify --domain-name ALL then the --clusternames option is ignored.
--skip-known-host-check Skips the SSL thumbprint check for hosts in the known hosts file.
--include-free-hosts Collect logs for free ESXi hosts, in addition to in-use ESXi hosts.
--include-precheck-report This option runs LCM upgrade prechecks and includes the LCM upgrade prechecks run
report in SoS health check operations.
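For example, to run a health check that is limited to specific clusters in a workload domain using the --domain-name and --clusternames options described above (domain and cluster names assumed for illustration):
./sos --health-check --domain-name sfo-w01 --clusternames cluster1,cluster2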
Option Description
--get-vcf-summary Returns information about your VMware Cloud Foundation system, including
CEIP, workload domains, vSphere clusters, ESXi hosts, licensing, network pools, SDDC
Manager, and VCF services.
--get-vcf-tasks-summary Returns information about VMware Cloud Foundation tasks, including the time the task
was created and the status of the task.
--get-vcf-services-summary Returns information about SDDC Manager uptime and when VMware Cloud Foundation services (for example, LCM) started and stopped.
./sos --option-name
Note For Fix-It-Up options, if you do not specify a workload domain, the command affects only
the management domain.
Option Description
--enable-ssh-esxi Applies SSH on all ESXi nodes in the specified workload domains.
n To enable SSH on ESXi nodes in a specific workload domain, include the flag
--domain-name DOMAINNAME.
n To enable SSH on ESXi nodes in all workload domains, include the flag --domain-
name ALL.
--disable-ssh-esxi Deactivates SSH on all ESXi nodes in the specified workload domains.
n To deactivate SSH on ESXi nodes in a specific workload domain, include the flag
--domain-name DOMAINNAME.
n To deactivate SSH on ESXi nodes in all workload domains, include the flag --
domain-name ALL.
n To enable SSH on vCenter Servers in all workload domains, include the flag --
domain-name ALL.
--enable-lockdown-esxi Applies normal lockdown mode on all ESXi nodes in the specified workload domains.
n To enable lockdown on ESXi nodes in a specific workload domain, include the flag
--domain-name DOMAINNAME.
n To enable lockdown on ESXi nodes in all workload domains, include the flag
--domain-name ALL.
--disable-lockdown-esxi Deactivates normal lockdown mode on ESXi nodes in the specified workload
domains.
n To deactivate lockdown on ESXi nodes in a specific workload domain, include the
flag --domain-name DOMAINNAME.
n To deactivate lockdown on ESXi nodes in all workload domains, include the flag
--domain-name ALL.
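For example, to enable SSH on the ESXi nodes of a single workload domain with the --enable-ssh-esxi option described above (domain name assumed for illustration):
./sos --enable-ssh-esxi --domain-name sfo-w01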
Option Description
--ondemand-service ONDEMANDSERVICE Execute commands on ESXi hosts, vCenter Servers, or SDDC Manager entities for a given workload domain. Specify the workload domain using --domain-name DOMAINNAME.
Replace ONDEMANDSERVICE with the path to a .yml input file. (Sample file available at: /opt/vmware/sddc-support/ondemand_command_sample.yml).
--ondemand-service JSON file path Include this flag to execute commands in the JSON format on all ESXi hosts in a workload domain. For example, /opt/vmware/sddc-support/<JSON file name>
A green status indicates that the health is normal, yellow provides a warning that attention might
be required, and red (critical) indicates that the component needs immediate attention.
Option Description
--connectivity-health Performs connectivity checks and validations for SDDC resources (NSX Managers,
ESXi hosts, vCenter Servers, and so on). This check performs a ping status check, SSH
connectivity status check, and API connectivity check for SDDC resources.
--services-health Performs a services health check to confirm whether services within the SDDC
Manager (like Lifecycle Management Server) and vCenter Server are running.
--compute-health Performs a compute health check, including ESXi host licenses, disk storage, disk
partitions, and health status.
--storage-health Performs a check on the vSAN disk health of the ESXi hosts and vSphere clusters.
Can be combined with --run-vsan-checks. For example:
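./sos --storage-health --run-vsan-checks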
--run-vsan-checks This option cannot be run on its own and must be combined with --health-check or
--storage-health.
Runs a VM creation test to verify the vSAN cluster health. Running the test creates a
virtual machine on each host in the vSAN cluster. The test creates a VM and deletes
it. If the VM creation and deletion tasks are successful, assume that the vSAN cluster
components are working as expected and the cluster is functional.
Note You must not conduct the proactive test in a production environment as it
creates network traffic and impacts the vSAN workload.
--ntp-health Verifies whether the time on the components is synchronized with the NTP server in
the SDDC Manager appliance. It also ensures that the hardware and software time
stamp of ESXi hosts are within 5 minutes of the SDDC Manager appliance.
--general-health Checks ESXi for error dumps and gets NSX Manager and cluster status.
--certificate-health Verifies that the component certificates are valid and when they are expiring.
n GREEN: Certificate expires in more than 30 days.
n YELLOW: Certificate expires in 15-30 days.
n RED: Certificate expires in less than 15 days.
--get-inventory-info Returns inventory details for the VMware Cloud Foundation components, such as
vCenter Server, NSX, SDDC Manager, and ESXi hosts. Optionally, add the flag --
domain-name ALL to return details for all workload domains.
--password-health Checks the status of passwords across VMware Cloud Foundation components. It
lists components with passwords managed by VCF, the date a password was last
changed, the password expiration date, and the number of days until expiration.
n GREEN: Password expires in more than 15 days.
n YELLOW: Password expires in 5-15 days.
n RED: Password expires in less than 5 days.
--hardware-compatibility-report Validates ESXi hosts and vSAN devices and exports the compatibility report.
--version-health This operation checks the version of BOM components (vCenter Server, NSX, ESXi,
and SDDC Manager). It compares the SDDC Manager inventory, the actual installed
BOM component version, and the BOM component versions to detect any drift.
--json-output-dir JSONDIR Outputs the results of any health check as a JSON file to the specified directory,
JSONDIR.
./sos --password-health
n Check the DNS health for the workload domain named sfo-w01:
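Assuming the utility exposes a --dns-health check alongside the other health options (the flag name is assumed here), the command is similar to:
./sos --dns-health --domain-name sfo-w01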
Use these options when retrieving support logs from your environment's various components.
n If you run the SoS utility from SDDC Manager without specifying any component-specific
options, the SoS tool collects SDDC Manager, API, and VMware Cloud Foundation summary
logs. To collect all logs, use the --collect-all-logs option.
Note SoS log collection may time out after 60 minutes, which could be an issue with large
workload domains. If the SoS utility does time out, collect component-specific logs or limit log
collection to specific clusters using the options described below.
n If you run the SoS utility from Cloud Builder without specifying any component-specific
options, the SoS tool collects SDDC Manager, API, and Cloud Builder logs.
n To collect logs for a specific component, run the utility with the appropriate options.
For example, the --domain-name option is important. If omitted, the SoS operation is
performed only on the management domain. See SoS Utility Options.
After running the SoS utility, you can examine the resulting logs to troubleshoot issues, or
provide to VMware Technical Support if requested. VMware Technical Support might request
these logs to help resolve technical issues when you have submitted a support request. The
diagnostic information collected using the SoS utility includes logs for the various VMware
software components and software products deployed in your VMware Cloud Foundation
environment.
Option Description
--sddc-manager-logs Collects logs from the SDDC Manager only. sddc<timestamp>.tgz contains logs from the
SDDC Manager file system's etc, tmp, usr, and var partitions.
--psc-logs Collects logs from the Platform Services Controller instances only.
--nsx-logs Collects logs from the NSX Manager and NSX Edge instances only.
--no-clean-old-logs Use this option to prevent the utility from removing any output from a previous collection
run.
By default, before writing the output to the directory, the utility deletes the prior run's
output files that might be present. If you want to retain the older output files, specify this
option.
--api-logs Collects output from REST endpoints for SDDC Manager inventory and LCM.
--rvc-logs Collects logs from the Ruby vSphere Console (RVC) only. RVC is an interface for ESXi and
vCenter.
Note If the Bash shell is not enabled in vCenter Server, RVC log collection will be skipped.
Note RVC logs are not collected by default with ./sos log collection. You must enable RVC
to collect RVC logs.
--collect-all-logs Collects logs for all components, except Workload Management and system debug logs. By
default, logs are collected for the management domain components.
To collect logs for all workload domains, specify --domain-name ALL.
To collect logs for a specific workload domain, specify --domain-name domain_name.
--domain-name DOMAINNAME Specify the name of the workload domain on which the SoS operation is to be performed.
To run the operation on all domains, specify --domain-name ALL.
Note If you omit the --domain-name flag and domain name, the SoS operation is
performed only on the management domain.
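For example, to collect logs for all components of a specific workload domain using the options described above (domain name assumed for illustration):
sudo /opt/vmware/sddc-support/sos --collect-all-logs --domain-name sfo-w01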
Procedure
1 Using SSH, log in to the SDDC Manager appliance as the vcf user.
2 To collect the logs, run the SoS utility without specifying any component-specific options.
sudo /opt/vmware/sddc-support/sos
Note By default, before writing the output to the directory, the utility deletes the prior run's
output files that might be present. If you want to retain the older output files, specify the
--no-clean-old-logs option.
If you do not specify the --log-dir option, the utility writes the output to the /var/log/
vmware/vcf/sddc-support directory in the SDDC Manager appliance.
Results
The utility collects the log files from the various software components in all of the racks and
writes the output to the directory named in the --log-dir option. Inside that directory, the utility
generates output in a specific directory structure.
Example
What to do next
The SoS utility writes the component log files into an output directory structure within the file
system of the SDDC Manager instance in which the command is initiated, for example:
Log Collection completed successfully for : [HEALTH-CHECK, SDDC-MANAGER, NSX_MANAGER, API-LOGS, ESX,
VMS_SCREENSHOT, VCENTER-SERVER, VCF-SUMMARY]
File Description
esx-FQDN.tgz Diagnostic information from running the vm-support command on the ESXi host.
An example file is esx-esxi-1.vrack.vsphere.local.tgz.
SmartInfo-FQDN.txt S.M.A.R.T. status of the ESXi host's hard drive (Self-Monitoring, Analysis, and Reporting Technology). An example file is SmartInfo-esxi-1.vrack.vsphere.local.txt.
vsan-health-FQDN.txt VMware vSAN cluster health information from running the standard command python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc on the ESXi host. An example file is vsan-health-esxi-1.vrack.vsphere.local.txt.
The number of files in this directory depends on the number of NSX Manager and NSX Edge
instances that are deployed in the rack. In a given rack, each management domain has a cluster
of three NSX Managers. The first VI workload domain has an additional cluster of three NSX
Managers. Subsequent VI workload domains can deploy their own NSX Manager cluster, or use
the same cluster as an existing VI workload domain. NSX Edge instances are optional.
File Description
VMware-NSX-Manager-tech-support-nsxmanagerIPaddr.tar.gz Standard NSX Manager compressed support bundle, generated using the NSX API POST https://nsxmanagerIPaddr/api/1.0/appliance-management/techsupportlogs/NSX, where nsxmanagerIPaddr is the IP address of the NSX Manager instance.
An example is VMware-NSX-Manager-tech-support-10.0.0.8.tar.gz.
VMware-NSX-Edge-tech-support-nsxmanagerIPaddr-edgeId.tgz Standard NSX Edge support bundle, generated using the NSX API to query the NSX Edge support logs: GET https://nsxmanagerIPaddr/api/4.0/edges/edgeId/techsupportlogs, where nsxmanagerIPaddr is the IP address of the NSX Manager instance and edgeId identifies the NSX Edge instance.
Note This information is only collected if NSX Edges are deployed.
An example is VMware-NSX-Edge-tech-support-10.0.0.7-edge-1.log.gz.
vc Directory Contents
In each rack-specific directory, the vc directory contains the diagnostic information files collected
for the vCenter Server instances deployed in that rack.
The number of files in this directory depends on the number of vCenter Server instances that are
deployed in the rack. In a given rack, each management domain has one vCenter Server instance,
and any VI workload domains in the rack each have one vCenter Server instance.
File Description
vc-vcsaFQDN-vm- Standard vCenter Server support bundle downloaded from the vCenter Server Appliance
support.tgz instance having a fully qualified domain name vcsaFQDN. The support bundle is obtained from
the instance using the standard vc-support.sh command.
Before you can add users and groups to VMware Cloud Foundation, you must configure an
identity provider that has access to user and group data. VMware Cloud Foundation supports the
following identity providers:
n vCenter Single Sign-On is vCenter Server's built-in identity provider. By default, it uses the
system domain (for example, vsphere.local) as its identity source. You can add Active
Directory over LDAP and OpenLDAP as identity sources for vCenter Single Sign-On.
n You can also use any of the following external identity providers instead of vCenter Single
Sign-On:
n Microsoft ADFS
n Okta
n Microsoft Entra ID
Once you have configured an identity provider, you can add users and groups, and assign
roles to determine what tasks they can perform from the SDDC Manager UI and VMware Cloud
Foundation API.
Note SDDC Manager only manages users and groups for the management SSO domain. If you
created isolated VI workload domains that use different SSO domains, you must use the vSphere
Client to manage users and groups for those SSO domains. Use the vSphere Client to connect to
the VI workload domain's vCenter Server and then click Administration > Single Sign On.
In addition to user accounts, VMware Cloud Foundation includes the following accounts:
n Automation accounts for accessing VMware Cloud Foundation APIs. You can use these
accounts in automation scripts.
n Local account for accessing VMware Cloud Foundation APIs when vCenter Server is down.
n Service accounts are automatically created by VMware Cloud Foundation for inter-product
interaction. These are for system use only.
By default, VMware Cloud Foundation uses vCenter Single Sign-On as its identity provider and
the system domain (for example, vsphere.local) as its identity source. You can add Active
Directory over LDAP and OpenLDAP as identity sources for vCenter Single Sign-On. See Add
Active Directory over LDAP or OpenLDAP as an Identity Source for VMware Cloud Foundation.
You can also configure VMware Cloud Foundation to use Microsoft ADFS, Okta, or Microsoft
Entra ID as an external identity provider, instead of using vCenter Single Sign-On:
You can use identity sources to attach one or more domains to vCenter Single Sign-On. A
domain is a repository for users and groups that the vCenter Single Sign-On server can use for
user authentication with VMware Cloud Foundation. By default, vCenter Single Sign-On includes
the system domain (for example, vsphere.local) as an identity source. You can add Active
Directory over LDAP or an OpenLDAP directory service as identity sources.
Procedure
4 Click Next.
Table 24-1. Active Directory over LDAP and OpenLDAP Server Settings
Option Description
Base Distinguished Name for Users Base Distinguished Name for users. Enter the DN
from which to start user searches. For example,
cn=Users,dc=myCorp,dc=com.
Base Distinguished Name for Groups The Base Distinguished Name for groups. Enter the
DN from which to start group searches. For example,
cn=Groups,dc=myCorp,dc=com.
Table 24-1. Active Directory over LDAP and OpenLDAP Server Settings (continued)
Option Description
Primary Server URL Primary domain controller LDAP server for the domain.
You can use either the host name or the IP address.
Use the format ldap://
hostname_or_IPaddress:port or ldaps://
hostname_or_IPaddress:port. The port is typically
389 for LDAP connections and 636 for LDAPS
connections. For Active Directory multi-domain
controller deployments, the port is typically 3268 for
LDAP and 3269 for LDAPS.
A certificate that establishes trust for the LDAPS
endpoint of the Active Directory server is required
when you use ldaps:// in the primary or the
secondary LDAP URL.
Certificates (for LDAPS) If you want to use LDAPS with your Active Directory
LDAP Server or OpenLDAP Server identity source, click
Browse to select a certificate. To export the root CA
certificate from Active Directory, consult the Microsoft
documentation.
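For example, a Primary Server URL for an LDAPS connection to a single domain controller might look like the following (host name assumed for illustration):
ldaps://dc01.myCorp.com:636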
What to do next
After you successfully add an identity source, you can add users and groups from the domain.
See Add a User or Group to VMware Cloud Foundation.
You can only add one external identity provider to VMware Cloud Foundation.
Prerequisites
n Microsoft ADFS for Windows Server 2016 or later must already be deployed.
n You have created a vCenter Server administrators group in Microsoft ADFS that contains the
users you want to grant vCenter Server administrator privileges to.
For more information about configuring Microsoft ADFS, see the Microsoft documentation.
n vCenter Server must be able to connect to the Microsoft ADFS discovery endpoint, and
the authorization, token, logout, JWKS, and any other endpoints advertised in the discovery
endpoint metadata.
Procedure
5 Click Next.
7 If your Microsoft ADFS server certificate is signed by a publicly trusted Certificate Authority,
click Next. If you are using a self-signed certificate, add the Microsoft ADFS root CA
certificate to the Trusted Root Certificates Store.
a Click Browse.
c Click Next.
You will need them when you create the Microsoft ADFS Application Group in the next step.
To establish a relying party trust between vCenter Server and an identity provider, you must
establish the identifying information and a shared secret between them. In Microsoft ADFS,
you do so by creating an OpenID Connect configuration known as an Application Group,
which consists of a Server application and a Web API. The two components specify the
information that vCenter Server uses to trust and communicate with the Microsoft ADFS
server. To enable OpenID Connect in Microsoft ADFS, see the VMware knowledge base
article at https://kb.vmware.com/s/article/78029.
Note the following when you create the Microsoft ADFS Application Group.
n You need the two Redirect URIs from the previous step.
n Copy the following information to a file or write it down for use when configuring the
identity provider in the next step.
n Client Identifier
n Shared Secret
Use the information you gathered in the previous step and enter the following:
n Client Identifier
n Shared Secret
11 Enter user and group information for the Active Directory over LDAP connection to search
for users and groups.
vCenter Server derives the AD domain to use for authorization and permissions from the
Base Distinguished Name for users. You can add permissions on vSphere objects only for
users and groups from this AD domain. Users or groups from AD child domains or other
domains in the AD forest are not supported by vCenter Server Identity Provider Federation.
Option Description
Base Distinguished Name for Users Base Distinguished Name for users.
Base Distinguished Name for Groups The base Distinguished Name for groups.
User Name ID of a user in the domain who has a minimum of read-only access to Base DN for users and groups.
Password Password of the user who has a minimum of read-only access to Base DN for users and groups.
Option Description
Primary Server URL Primary domain controller LDAP server for the domain.
Use the format ldap://hostname:port or ldaps://hostname:port. The
port is typically 389 for LDAP connections and 636 for LDAPS connections.
For Active Directory multi-domain controller deployments, the port is
typically 3268 for LDAP and 3269 for LDAPS.
A certificate that establishes trust for the LDAPS endpoint of the Active
Directory server is required when you use ldaps:// in the primary or
secondary LDAP URL.
Secondary Server URL Address of a secondary domain controller LDAP server that is used for
failover.
Certificates (for LDAPS) If you want to use LDAPS, click Browse to select a certificate.
What to do next
After you successfully add Microsoft ADFS as an external identity provider, you can add
users and groups to VMware Cloud Foundation. See Add a User or Group to VMware Cloud
Foundation.
Configuring identity federation with Okta involves performing tasks in the Okta Admin Console
and the SDDC Manager UI. After the users and groups are synced, you can assign permissions in
SDDC Manager, vCenter Server, and NSX Manager.
3 Update the Okta OpenID Connect application with the Redirect URI from SDDC Manager.
5 Assign Permissions for Okta Users and Groups in SDDC Manager, vCenter Server, and NSX
Manager.
Note If you created isolated VI workload domains that use different SSO domains, you must use
the vSphere Client to configure Okta as the identity provider for those SSO domains. When you
configure Okta as the identity provider for an isolated workload domain in the vSphere Client,
NSX Manager is automatically registered as a relying party. This means that once an Okta user
with the necessary permissions has logged in to the isolated VI workload domain vCenter Server,
they can directly access the VI workload domain's NSX Manager from the SDDC Manager UI
without having to log in again.
Prerequisites
Integrate Active Directory (AD) with Okta. See Manage your Active Directory integration in the
Okta documentation for more information.
Note This is not required if you do not want to integrate with AD or have previously integrated
AD and Okta.
Procedure
1 Log in to the Okta Admin console and follow the Okta documentation, Create OIDC app
integrations, to create an OpenID Connect application.
When creating the OpenID Connect application in the Create a new app integration wizard:
n Enter an appropriate name for the OpenID Connect application, for example, Okta-VCF-
app.
n In General Settings, leave Authorization Code checked, and check Refresh Token and
Resource Owner Password.
n For now, ignore Sign-in redirect URIs and Sign-out redirect URIs. (You will input these
values later.)
n When selecting how to control access, you can select Skip group assignment for now if
you want.
2 After the OpenID Connect application is created, generate the Client Secret.
b In Client Credentials, click Edit and for Client Authentication check Client Secret.
c For Proof Key for Code Exchange (PKCE), uncheck Require PKCE as additional
verification.
d Click Save.
e Copy both the Client ID and Client Secret and save them for use in creating the Okta
identity provider in SDDC Manager.
Note SDDC Manager uses the terms Client Identifier and Shared Secret.
a Select the Assignments tab and select Assign to Groups from the Assign drop-down.
To view the users that have been assigned, click People under Filters on the Assignment
page.
You can only add one external identity provider to VMware Cloud Foundation.
This procedure configures Okta as the identity provider for the management domain vCenter
Server. The VMware Identity Services information endpoint is replicated to all other vCenter
Server nodes that are part of the management domain vCenter Server enhanced linked mode
(ELM) group. This means that when a user logs into and is authorized by the management
domain vCenter Server, the user is also authorized on any VI workload domain vCenter Server
that is part of the same ELM group. If the user logs in to a VI workload domain vCenter Server
first, the same holds true.
Note The Okta configuration information and user/group information is not replicated between
vCenter Server nodes in enhanced linked mode. Do not use the vSphere Client to configure Okta
as the identity provider for any VI workload domain vCenter Server that is part of the ELM group.
Prerequisites
Okta requirements:
n You are a customer of Okta and have a dedicated domain space. For example: https://your-company.okta.com.
n To perform OIDC logins and manage user and group permissions, you must create the
following Okta applications.
n An Okta native application with OpenID Connect as the sign-on method. The native
application must include the grant types of authorization code, refresh token, and
resource owner password.
n A System for Cross-domain Identity Management (SCIM) 2.0 application with an OAuth
2.0 Bearer Token to perform user and group synchronization between the Okta server
and the vCenter Server.
n vCenter Server must be able to connect to the Okta discovery endpoint, and the
authorization, token, JWKS, and any other endpoints advertised in the discovery endpoint
metadata.
n Okta must also be able to connect with vCenter Server to send user and group data for the
SCIM provisioning.
Networking requirements:
n If your network is not publicly available, you must create a network tunnel between your
vCenter Server system and your Okta server, then use the appropriate publicly accessible
URL as the SCIM 2.0 Base Uri.
Note If you added vCenter group memberships for any remote AD/LDAP users or groups,
vCenter Server attempts to prepare these memberships so that they are compatible with the
new identity provider configuration. This preparation process happens automatically at service
startup, but it must complete in order to continue with Okta configuration. Click Run Prechecks to
check the status of this process before proceeding.
Procedure
5 Click Next.
7 Click Run Prechecks to ensure that the system is ready to change identity providers.
If the precheck finds errors, click View Details and take steps to resolve the errors as
indicated.
n Directory Name: Name of the local directory to create on vCenter Server that stores the
users and groups pushed from Okta. For example, vcenter-okta-directory.
n Domain Name(s): Enter the Okta domain names that contain the Okta users and groups
you want to synchronize with vCenter Server.
After you enter your Okta domain name, click the Plus icon (+) to add it. If you enter
multiple domain names, specify the default domain.
9 Click Next.
n Redirect URIs: Filled in automatically. You give the redirect URI to your Okta administrator
for use in creating the OpenID Connect application.
n Client Identifier: Obtained when you created the OpenID Connect application in Okta.
(Okta refers to Client Identifier as the Client ID.)
n Shared Secret: Obtained when you created the OpenID Connect application in Okta.
(Okta refers to Shared Secret as the Client Secret.)
See https://developer.okta.com/docs/reference/api/oidc/#well-known-openid-configuration for more information.
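For example, if your Okta domain is https://your-company.okta.com, the OpenID Connect well-known configuration endpoint is typically:
https://your-company.okta.com/.well-known/openid-configuration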
11 Click Next.
Update the Okta OpenID Connect application with the Redirect URI from SDDC
Manager
After you create the Okta identity provider configuration in the SDDC Manager UI, update the
Okta OpenID Connect application with the Redirect URI from SDDC Manager.
Prerequisites
4 In the OpenID Connect section, copy and save the Redirect URI.
Procedure
2 In the General Settings screen for the OpenID Connect application created, click Edit.
3 In the Sign-in redirect URIs text box, paste the copied Redirect URI from SDDC Manager.
4 Click Save.
Create a SCIM 2.0 Application for Using Okta with VMware Cloud Foundation
Creating a SCIM 2.0 application for Okta enables you to specify which Active Directory users and
groups to push to vCenter Server.
Prerequisites
Copy the Tenant URL and Secret Token from the SDDC Manager UI.
4 In the User Provisioning section, click Generate and then copy and save the Secret Token and
Tenant URL.
Procedure
2 Browse the app catalog for SCIM 2.0 Test App (OAuth Bearer Token), and click Add
Integration.
Note The word "Test" is of Okta's choosing. The SCIM application you create using this
"Test" template is of production quality.
3 Use the following settings when creating the SCIM 2.0 application:
n Enter an appropriate name for the SCIM 2.0 application, such as VCF SCIM 2.0 app.
n In the General settings · Required page, leave Automatically log in when user lands on
login page checked.
n Password reveal: Leave Allow users to securely see their password selected.
4 Assign users and groups to the SCIM 2.0 application to push from your Active Directory to
vCenter Server:
a In the Okta SCIM 2.0 application, under Provisioning, click Configure API integration.
c Enter the SCIM 2.0 Base Url and OAuth Bearer Token.
SDDC Manager calls the SCIM 2.0 Base Url the "Tenant URL," and the OAuth Bearer
Token the "Secret Token."
Note If you have a network tunnel between the vCenter Server system and the Okta
server, then use the appropriate publicly accessible URL as the Base Url.
f Click Save.
5 Provision users.
a Click the Provisioning tab and select To App, then click Edit.
d Click Save.
6 Make assignments.
a Click the Assignments tab and select Assign to Groups from the Assign drop-down.
g Under Filters, select People and Groups to view the users and groups assigned.
7 Click the Push Groups tab and select an option from the Push Groups drop-down menu.
n Find groups by rule: Select this option to create a search rule that pushes matching
groups to the app.
Note Unless you uncheck the Push group memberships immediately check box, the
selected membership is pushed immediately, and the Push Status shows Active. For more
information, see Enable Group Push in the Okta documentation.
Procedure
a In the SDDC Manager UI, click Administration > Single Sign On.
c Select one or more users or group by clicking the check box next to the user or group.
You can either search for a user or group by name, or filter by user type or domain.
Note Okta users and groups appear in the domain(s) that you specified when you
configured Okta as the identity provider in the SDDC Manager UI.
b Select Administration and click Global Permissions in the Access Control area.
c Click Add.
d From the Domain drop-down menu, select the domain for the user or group.
i Click OK.
c On the User Role Assignment tab, click Add Role for OpenID Connect User.
d Select vcenter-idp-federation from the drop-down menu and then enter text to search
for and select an Okta user or group.
g Select Enterprise Admin from the drop-down menu and click Add.
h Click Apply.
i Click Save.
Configuring identity federation with Microsoft Entra ID involves performing tasks in the Microsoft
Entra Admin Console and the SDDC Manager UI. After the users and groups are synced, you can
assign permissions in SDDC Manager, vCenter Server, and NSX Manager.
1 Create an OpenID Connect application for VMware Cloud Foundation in Microsoft Entra ID.
2 Configure Microsoft Entra ID as the Identity Provider in the SDDC Manager UI.
3 Update the Microsoft Entra ID OpenID Connect application with the Redirect URI from SDDC
Manager.
5 Assign Permissions for Microsoft Entra ID Users and Groups in SDDC Manager, vCenter
Server, and NSX Manager.
Note If you created isolated VI workload domains that use different SSO domains, you must
use the vSphere Client to configure Microsoft Entra ID as the identity provider for those SSO
domains. When you configure Microsoft Entra ID as the identity provider for an isolated workload
domain in the vSphere Client, NSX Manager is automatically registered as a relying party. This
means that once a Microsoft Entra ID user with the necessary permissions has logged in to the
isolated VI workload domain vCenter Server, they can directly access the VI workload domain's
NSX Manager from the SDDC Manager UI without having to log in again.
Prerequisites
Integrate Active Directory (AD) with Microsoft Entra ID. See the Microsoft documentation for
more information.
Note This is not required if you do not want to integrate with AD or have previously integrated
AD and Microsoft Entra ID.
Procedure
1 Log in to the Microsoft Entra Admin console and follow the Microsoft documentation to
create an OpenID Connect application.
When creating the OpenID Connect application in the Create a new app integration wizard:
n Select Home > Azure AD Directory > App Registration > New Registration.
n Enter an appropriate name for the OpenID Connect application, for example, EntraID-
vCenter-app.
n Set Redirect URI as Web. There is no need to enter a redirect URI; this can be filled in
later.
2 After the OpenID Connect application is created, generate the Client Secret.
a Click Certificates & secrets > Client secrets > New client secret.
b Enter a description for the client secret and select the validity in Expiry drop-down menu.
c Click Add.
d Once a secret is generated, copy the content under Value and save it for use in creating
the Microsoft Entra ID identity provider in SDDC Manager.
Note SDDC Manager uses the term Shared Secret for the Client Secret.
a Click Overview.
Note SDDC Manager uses the term Client Identifier for the Client ID.
4 Click Overview > Endpoints and copy the value for the OpenID Connect metadata document.
5 Click Manage > Authentication, scroll to the Advanced settings section, slide the toggle to
Yes for Enable the following mobile and desktop flows and click Save.
6 Click Manage > API permissions and click Grant admin consent for
<tenant_organization_name>. For example, Grant admin consent for vcenter auth services.
What to do next
Configure Microsoft Entra ID as the identity provider in the SDDC Manager UI using the Client
Secret, Client ID, and OpenID Connect information you copied.
You can only add one external identity provider to VMware Cloud Foundation.
This procedure configures Microsoft Entra ID as the identity provider for the management
domain vCenter Server. The VMware Identity Services information endpoint is replicated to all
other vCenter Server nodes that are part of the management domain vCenter Server enhanced
linked mode (ELM) group. This means that when a user logs into and is authorized by the
management domain vCenter Server, the user is also authorized on any VI workload domain
vCenter Server that is part of the same ELM group. If the user logs in to a VI workload domain
vCenter Server first, the same holds true.
Note The Microsoft Entra ID configuration information and user/group information is not
replicated between vCenter Server nodes in enhanced linked mode. Do not use the vSphere
Client to configure Microsoft Entra ID as the identity provider for any VI workload domain
vCenter Server that is part of the ELM group.
Prerequisites
n To perform OIDC logins and manage user and group permissions, you must create the
following Microsoft Entra ID applications.
n A Microsoft Entra ID native application with OpenID Connect as the sign-on method. The
native application must include the grant types of authorization code, refresh token, and
resource owner password.
n A System for Cross-domain Identity Management (SCIM) 2.0 application with an OAuth
2.0 Bearer Token to perform user and group synchronization between the Microsoft
Entra ID server and the vCenter Server.
Networking requirements:
n If your network is not publicly available, you must create a network tunnel between your
vCenter Server system and your Microsoft Entra ID server, then use the appropriate publicly
accessible URL as the SCIM 2.0 Tenant URL.
Note If you added vCenter group memberships for any remote AD/LDAP users or groups,
vCenter Server attempts to prepare these memberships so that they are compatible with the
new identity provider configuration. This preparation process happens automatically at service
startup, but it must complete in order to continue with Microsoft Entra ID configuration. Click Run
Prechecks to check the status of this process before proceeding.
Procedure
5 Click Next.
7 Click Run Prechecks to ensure that the system is ready to change identity providers.
If the precheck finds errors, click View Details and take steps to resolve the errors as
indicated.
n Directory Name: Name of the local directory to create on vCenter Server that stores
the users and groups pushed from Microsoft Entra ID. For example, vcenter-entra-
directory.
n Domain Name(s): Enter the domain names that contain the Microsoft Entra ID users and
groups you want to synchronize with vCenter Server.
After you enter a domain name, click the Plus icon (+) to add it. If you enter multiple
domain names, specify the default domain.
9 Click Next.
n Redirect URIs: Filled in automatically. You give the redirect URI to your Microsoft Entra ID
administrator for use in creating the OpenID Connect application.
n Client Identifier: Obtained when you created the OpenID Connect application in Microsoft
Entra ID. (Microsoft Entra ID refers to Client Identifier as the Client ID.)
n Shared Secret: Obtained when you created the OpenID Connect application in Microsoft
Entra ID. (Microsoft Entra ID refers to Shared Secret as the Client Secret.)
n OpenID Address: Obtained when you created the OpenID Connect application in
Microsoft Entra ID. (Microsoft Entra ID refers to OpenID Address as the OpenID Connect
metadata document).
11 Click Next.
Update the Microsoft Entra ID OpenID Connect application with the Redirect URI
from SDDC Manager
After you create the Microsoft Entra ID identity provider configuration in the SDDC Manager
UI, update the Microsoft Entra ID OpenID Connect application with the Redirect URI from SDDC
Manager.
Prerequisites
4 In the OpenID Connect section, copy and save the Redirect URI.
Procedure
2 In the App Registrations screen for your OpenID Connect application, click Authentication.
4 In the Redirect URIs text box, paste the copied Redirect URI from SDDC Manager.
5 Click Configure.
Create a SCIM 2.0 Application for Using Microsoft Entra ID with VMware Cloud
Foundation
Creating a SCIM 2.0 application for Microsoft Entra ID enables you to specify which Active
Directory users and groups to push to vCenter Server.
If your vCenter Server accepts inbound traffic, follow the procedure below to create a SCIM 2.0
application. If your vCenter Server does not accept inbound traffic, see the Microsoft Entra ID
documentation for alternative methods.
Prerequisites
Copy the Tenant URL and Secret Token from the SDDC Manager UI.
4 In the User Provisioning section, click Generate and then copy and save the Secret Token and
Tenant URL.
You will use this information to configure the Provisioning settings below.
Procedure
3 Search for "VMware Identity Service" and select it in the search results.
4 Enter an appropriate name for the SCIM 2.0 application, for example, VCF SCIM 2.0 app.
5 Click Create.
6 After the SCIM 2.0 application is created, click Manage > Provisioning and specify the
Provisioning settings.
b Enter the Tenant URL and Secret Token that you copied from the SDDC Manager UI and
click Test Connection.
Note If you have a network tunnel between the vCenter Server system and the Microsoft
Entra ID server, then use the appropriate publicly accessible URL as the Tenant URL.
c Click Save.
d Expand the Mappings section and click Provision Azure Active Directory Users.
f On the Edit Attribute screen, update the settings and then click OK.
Option Description
Item(Split([userPrincipalName], "@"), 1)
h On the Edit Attribute screen, update the settings and then click OK.
Option Description
Item(Split([userPrincipalName], "@"), 2)
i Click Save.
7 Provision users.
d Click Assign.
Procedure
a In the SDDC Manager UI, click Administration > Single Sign On.
c Select one or more users or groups by clicking the check box next to each user or group.
You can either search for a user or group by name, or filter by user type or domain.
Note Microsoft Entra ID users and groups appear in the domain(s) that you specified
when you configured Microsoft Entra ID as the identity provider in the SDDC Manager UI.
b Select Administration and click Global Permissions in the Access Control area.
c Click Add.
d From the Domain drop-down menu, select the domain for the user or group.
i Click OK.
c On the User Role Assignment tab, click Add Role for OpenID Connect User.
d Select vcenter-idp-federation from the drop-down menu and then enter text to search
for and select a Microsoft Entra ID user or group.
g Select Enterprise Admin from the drop-down menu and click Add.
h Click Apply.
i Click Save.
SDDC Manager UI displays user and group information based on the configured identity provider
and identity sources. See Configuring the Identity Provider for VMware Cloud Foundation.
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
3 Select one or more users or groups by clicking the check box next to each user or group.
You can either search for a user or group by name, or filter by user type or domain.
Role Description
ADMIN This role has access to all the functionality of the UI and API.
VIEWER This role can only view the SDDC Manager. User management and password
management are hidden from this role.
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
2 Click the vertical ellipsis (three dots) next to a user or group name and click Remove.
3 Click Delete.
Procedure
For more information about roles, see Chapter 24 Managing Users and Groups in VMware
Cloud Foundation.
You can also download the response by clicking the download icon to the right of
LocalUser (admin@local).
4 If the local account is not configured, perform the following tasks to configure the local
account:
n Minimum length: 12
n At least one lowercase letter, one uppercase letter, a number, and one of the
following special characters ! % @ $ ^ # ? *
Note You must remember the password that you created because it cannot be
retrieved. Local account passwords are used in password rotation.
Procedure
For more about roles, see Chapter 24 Managing Users and Groups in VMware Cloud
Foundation.
4 Create a service account with the ADMIN role and get the service account's API key.
[
{
"name": "service_account",
"type": "SERVICE",
"role":
{
"id": "317cb292-802f-ca6a-e57e-3ac2b707fe34"
}
}
]
c Click Execute.
c Click TokenCreationSpec.
{
"apiKey": "qsfqnYgyxXQ892Jk90HXyuEMgE3SgfTS"
}
e Click Execute.
f In the Response, click TokenPair and RefreshToken and save the access and refresh
tokens.
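For reference, a minimal sketch of obtaining a token with the service account's API key from a command line, assuming curl is available on your host machine; the placeholder FQDN and key are illustrative, and the request body matches the TokenCreationSpec shown above:

# Request a token pair with the service account API key.
curl -k -X POST "https://<sddc_manager_fqdn>/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"apiKey": "<service_account_api_key>"}'
# The response contains an access token and a refresh token; save both.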
You entered passwords for your VMware Cloud Foundation system as part of the bring-
up procedure. You can rotate and update some of these passwords using the password
management functionality in the SDDC Manager UI, including:
n Accounts used for service consoles, such as the ESXi root account.
Note SDDC Manager manages passwords for all SSO administrator accounts, even if you
created isolated VI workload domains that use different SSO domains than the management
domain.
n Service accounts that are automatically generated during bring-up, host commissioning, and
workload creation.
Service accounts have a limited set of privileges and are created for communication between
products. Passwords for service accounts are randomly generated by SDDC Manager. You
cannot manually set a password for service accounts. To update the credentials of service
accounts, you can rotate the passwords.
To provide optimal security and proactively prevent any passwords from expiring, you must
rotate passwords every 80 days.
Note Do not change the passwords for system accounts and the
administrator@vsphere.local account outside SDDC Manager. This can break your VMware
Cloud Foundation system.
You can also use the VMware Cloud Foundation API to look up and manage credentials. In the
SDDC Manager UI, click Developer Center > API Explorer and browse to the APIs for managing
credentials.
Starting with VMware Cloud Foundation 5.2.1, you can also manage passwords using the vSphere
Client.
You can also click Security > Password Management in the navigation pane to view password
expiration information. For example:
For an expired password, you must update the password outside of VMware Cloud Foundation
and then remediate the password using the SDDC Manager UI or the VMware Cloud Foundation
API. See Remediate Passwords .
Note Password expiration information in the SDDC Manager UI is updated once a day. To get
real-time information, use the VMware Cloud Foundation API.
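As a hedged example, a credential lookup with the VMware Cloud Foundation API might look like the following, assuming curl is available and that you already obtained an access token; the fields returned, including password expiration details, depend on your VMware Cloud Foundation version:

# List the credentials that SDDC Manager manages.
curl -k -X GET "https://<sddc_manager_fqdn>/v1/credentials" \
  -H "Authorization: Bearer <access_token>"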
n Rotate Passwords
n Remediate Passwords
Rotate Passwords
As a security measure, you can rotate passwords for the components in your VMware Cloud
Foundation instance. The process of password rotation generates randomized passwords for
the selected accounts. You can rotate passwords manually or set up auto-rotation for accounts
managed by SDDC Manager.
n VxRail Manager
n ESXi
n vCenter Server
Note Auto-rotate is automatically enabled for vCenter Server service accounts. It may take
up to 24 hours to configure the service account auto-rotate policy for a newly deployed
vCenter Server.
n NSX Manager
n VMware Avi Load Balancer (formerly known as NSX Advanced Load Balancer)
Note For Workspace ONE Access passwords, the password rotation method varies
depending on the user account. See the table below for details.
Table 25-1. Password Rotation Details for Workspace ONE Access User Accounts
Table columns: Workspace ONE Access User Account | VMware Aria Suite Lifecycle Locker Entry | Password Rotation Method | Password Rotation Scope
n 20 characters in length
n At least one uppercase letter, a number, and one of the following special characters: ! @ # $
^ *
If you changed the vCenter Server password length using the vSphere Client or the ESXi
password length using the VMware Host Client, rotating the password for those components
from SDDC Manager generates a password that complies with the password length that you
specified.
To update the SDDC Manager root, super user, and API passwords, see Updating SDDC Manager
Passwords.
Prerequisites
n Verify that there are no currently failed workflows in SDDC Manager. To check for failed
workflows, click Dashboard in the navigation pane and expand the Tasks pane at the bottom
of the page.
n Verify that no active workflows are running or are scheduled to run during the brief time
period that the password rotation process is running. It is recommended that you schedule
password rotation for a time when you expect to have no running workflows.
n Only a user with the ADMIN role can perform this task.
Procedure
2 Select one or more accounts and click one of the following operations.
n Rotate Now
n Schedule Rotation
You can set the password rotation interval (30 days, 60 days, or 90 days). You can also
deactivate the schedule.
A message appears at the top of the page showing the progress of the operation. The Tasks
panel also shows detailed status for the password rotation operation. To view sub-tasks, click
the task name. As each of these tasks is run, the status is updated. If the task fails, you can
click Retry.
Results
n Minimum length: 15
n Maximum length: 20
n At least one lowercase letter, one uppercase letter, a number, and one of the following
special characters: ! @ # $ ^ *
n Must not be a dictionary word
n Must not be a palindrome
Prerequisites
n Verify that there are no currently failed workflows in your VMware Cloud Foundation system.
To check for failed workflows, click Dashboard in the navigation pane and expand the Tasks
pane at the bottom of the page.
n Verify that no active workflows are running or are scheduled to run during the manual
password update.
n Only a user with the ADMIN role can perform this task. For more information about roles, see
Chapter 24 Managing Users and Groups in VMware Cloud Foundation.
Procedure
2 Select the account whose password you want to update, click the vertical ellipsis (three dots),
and click Update Password.
4 Click Update.
A message appears at the top of the page showing the progress of the operation. The Tasks
panel also shows detailed status of the password update operation. To view sub-tasks, click
the task name.
Results
Remediate Passwords
When an error occurs, for example after a password expires, you must manually reset the
password in the component product. After you reset the password in a component, you must
remediate the password in SDDC Manager to update the password in the SDDC Manager
database and the dependent VMware Cloud Foundation workflows.
To resolve any errors that might have occurred during password rotation or update, you must
use password remediation. Password remediation syncs the password of the account stored in
the SDDC Manager with the updated password in the component.
Note You can remediate the password for only one account at a time.
Although the individual VMware Cloud Foundation components support different password
requirements, you must set passwords following a common set of requirements across all
components.
Prerequisites
n Verify that your VMware Cloud Foundation system contains no failed workflows. To check for failed
workflows, click Dashboard in the navigation pane and expand the Tasks pane at the bottom
of the page.
n Verify that no workflows are running or are scheduled to run while you remediate the
password.
n Only a user with the ADMIN role can perform this task. For more information about roles, see
Chapter 24 Managing Users and Groups in VMware Cloud Foundation.
Procedure
2 Select the account whose password you want to remediate, click the vertical ellipsis (three
dots), and click Remediate Password.
The Remediate Password dialog box appears. This dialog box displays the entity name,
account type, credential type, and user name, in case you must confirm you have selected
the correct account.
3 Enter and confirm the password that was set manually on the component.
4 Click Remediate.
A message appears at the top of the page showing the progress of the operation. The Task
panel also shows detailed status of the password remediation operation. To view subtasks,
you can click the task name.
Results
Prerequisites
Only a user with the ADMIN role can perform this task.
Procedure
1 SSH in to the SDDC Manager appliance using the vcf user account.
Note Although the password management CLI commands are located in /usr/bin, you can
run them from any directory.
lookup_passwords
You must enter the user name and password for a user with the ADMIN role.
4 (Optional) Save the command output to a secure location with encryption so that you can
access it later and use it to log in to the accounts as needed.
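A minimal sketch of steps 2 through 4, assuming the lookup_passwords prompts are read from the terminal so that standard output can be redirected, and using openssl as one possible encryption tool (substitute any approved tool in your environment):

# Run the lookup and capture the output (you are prompted for an ADMIN user name and password).
lookup_passwords > /tmp/vcf-credentials.txt
# Encrypt the output before moving it to a secure location, then remove the plain-text copy.
openssl enc -aes-256-cbc -salt -pbkdf2 -in /tmp/vcf-credentials.txt -out /tmp/vcf-credentials.enc
rm -f /tmp/vcf-credentials.txt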
Procedure
Option Description
Results
n At least 15 characters
Procedure
For more information about roles, see Chapter 24 Managing Users and Groups in VMware
Cloud Foundation.
6 In the Value box, type the new and old passwords and click Execute.
A response of Status: 204, No Content indicates that the password was successfully
updated.
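Outside the API Explorer, the equivalent call might look like the following sketch; the endpoint and field names are assumptions based on the VMware Cloud Foundation API for the admin@local account and should be verified in the API Explorer for your release:

# Update the admin@local password; a 204 No Content response indicates success.
curl -k -X PATCH "https://<sddc_manager_fqdn>/v1/users/local/admin" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <access_token>" \
  -d '{"oldPassword": "<current_password>", "newPassword": "<new_password>"}'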
n Must include:
n a number
n *{}[]()/\'"`~,;:.<>
Procedure
1 In a web browser, log in to the management domain vCenter Server using the vSphere Client
(https://<vcenter_server_fqdn>/ui).
2 In the VMs and Templates inventory, expand the management domain vCenter Server and
the management virtual machines folder.
3 Right-click the SDDC Manager virtual machine, and select Open Remote Console.
4 Click within the console window and press Enter on the Login menu item.
5 Type root as the user name and enter the current password for the root user.
7 When prompted for a new password, enter a different password than the previous one and
click Enter.
You can back up and restore SDDC Manager with an image-based or a file-based solution. File-based
backup is recommended for customers who are comfortable with configuring backups
using APIs and are not using composable servers.
For a file-based backup of SDDC Manager VM, the state of the VM is exported to a file that
is stored in a domain different than the one where the product is running. You can configure
a backup schedule for the SDDC Manager VM and enable task-based (state-change driven)
backups. When task-based backups are enabled, a backup is triggered after each SDDC Manager
task (such as workload domain and host operations or password rotation).
You can also define a backup retention policy to comply with your company's retention policy.
For more information, see the VMware Cloud Foundation on Dell VxRail API Reference Guide.
By default, NSX Manager file-based backups are taken on the SFTP server that is built into SDDC
Manager. It is recommended that you configure an external SFTP server as a backup location for
the following reasons:
n An external SFTP server is a prerequisite for restoring SDDC Manager file-based backups.
n Using an external SFTP server provides better protection against failures because it
decouples NSX backups from SDDC Manager backups.
This section of the documentation provides instructions on backing up and restoring SDDC
Manager, and on configuring the built-in automation of NSX backups. For information on backing
up and restoring a full-stack SDDC, see VMware Validated Design Backup and Restore.
Prerequisites
n Only a user with the ADMIN role can perform this task. See Chapter 24 Managing Users and
Groups in VMware Cloud Foundation.
n The external SFTP server must support a 256-bit length ECDSA SSH public key.
n The external SFTP server must support a 2048-bit length RSA SSH public key.
n You will need the SHA256 fingerprint of the RSA key of the SFTP server.
n Host Key algorithms: At least one of rsa-sha2-512 or rsa-sha2-256 and one of ecdsa-sha2-
nistp256, ecdsa-sha2-nistp384, or ecdsa-sha2-nistp521.
Procedure
2 On the Backup page, click the Site Settings tab and then click Register External.
To obtain the SSH fingerprint of the target system so that you can verify it, connect to the SDDC Manager
appliance over SSH and run the following command:
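For example, one way to obtain the SHA256 fingerprint of the SFTP server's RSA host key, assuming ssh-keyscan and ssh-keygen are available on the appliance:

# Print the SHA256 fingerprint of the SFTP server's RSA host key.
ssh-keygen -lf <(ssh-keyscan -t rsa <sftp_server_fqdn> 2>/dev/null)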
Setting Value
Port 22
Backup Directory The directory on the SFTP server where backups are
saved.
For example: /backups/.
4 In the Confirm your changes to backup settings dialog box, click Confirm.
To ensure that all management components are backed up correctly, you must create a series
of backup jobs that capture the state of a set of related components at a common point in
time. With some components, simultaneous backups of the component nodes ensure that you
can restore the component to a state where the nodes are logically consistent with each other and
eliminate the necessity for further logical integrity remediation of the component.
Note
n You must monitor the space utilization on the SFTP server to ensure that you have sufficient
storage space to accommodate all backups taken within the retention period.
n Do not make any changes to the /opt/vmware/vcf directory on the SDDC Manager VM. If
this directory contains any large files, backups may fail.
Prerequisites
Verify that you have an SFTP server on the network to serve as a target of the file-based
backups.
Only a user with the Admin role can perform this task.
Procedure
4 On the Backup Schedule page, enter the settings and click Save.
Setting Value
Setting Value
Results
The status and the start time of the backup are displayed in the UI. You have set the SDDC
Manager backup schedule to run daily at 04:02 AM and after each change of state.
If the backup is unsuccessful, verify that the SFTP server is available and able to provide its SSH
fingerprint:
n SSH to the SDDC Manager appliance and run the following command as the root user:
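For example, assuming the standard sftp client is available on the appliance and the port configured in the backup site settings (22 by default):

# Test connectivity to the SFTP backup target.
sftp -P 22 <sftp_user>@<sftp_server_fqdn>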
Enter the SFTP user password when prompted. The following message indicates a successful
connection:
Procedure
4 In the Create backup schedule dialog box, enter these values and click Create.
Setting Value
Results
Prerequisites
n In the vSphere Client, for each vSphere cluster that is managed by the vCenter Server, note
the current vSphere DRS Automation Level setting and then change the setting to Manual.
After the vCenter Server upgrade is complete, you can change the vSphere DRS Automation
Level setting back to its original value. See KB 87631 for information about using VMware
PowerCLI to change the vSphere DRS Automation Level.
Procedure
4 If you already have a backup schedule set up, select Use backup location and user name
from backup schedule and click Start.
5 If you do not already have a backup schedule, enter the following information and click Start.
Setting Value
What to do next
In order to restore vCenter Server, you will need the VMware vCenter Server Appliance ISO file
that matches the version you backed up.
n Identify the required vCenter Server version. In the vCenter Server Management Interface,
click Summary in the left navigation pane to see the vCenter Server version and build
number.
n Download the VMware vCenter Server Appliance ISO file for that version from the Broadcom
Support Portal.
You can use the exported file to create multiple copies of the vSphere Distributed Switch
configuration on an existing deployment, or overwrite the settings of existing vSphere
Distributed Switch instances and port groups.
You must back up the configuration of a vSphere Distributed Switch immediately after each
change to the configuration of that switch.
Procedure
4 Expand the Management Networks folder, right-click the distributed switch, and select
Settings > Export configuration.
5 In the Export configuration dialog box, select Distributed switch and all port groups.
6 In the Description text box enter the date and time of export, and click OK.
7 Copy the backup zip file to a secure location from where you can retrieve the file and use it if
a failure of the appliance occurs.
Use this guidance as appropriate based on the exact nature of the failure encountered within
your environment. Sometimes, you can recover localized logical failures by restoring individual
components. In more severe cases, such as a complete and irretrievable hardware failure,
to restore the operational status of your SDDC, you must perform a complex set of manual
deployments and restore sequences. In failure scenarios where there is a risk of data loss, where
data loss has already occurred, or where the failure is catastrophic, contact Broadcom Support
to review your recovery plan before taking any steps to remediate the situation.
Prerequisites
n Verify that you have a valid file-based backup of the failed SDDC Manager instance.
To be valid, the backup must be of the same version as the version of the SDDC Manager
appliance on which you plan to restore the instance.
n SFTP Server IP
n Encryption Password
Procedure
What to do next
The backup file contains sensitive data about your VMware Cloud Foundation instance, including
passwords in plain text. As a best practice, you must control access to the decrypted files and
securely delete them after you complete the restore operation.
Prerequisites
Verify that your host machine with access to the SDDC has OpenSSL installed.
Note The procedures have been written based on the host machine being a Linux-based
operating system.
Procedure
1 Identify the backup file for the restore and download it from the SFTP server to your host
machine.
2 On your host machine, open a terminal and run the following command to extract the content
of the backup file.
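As a hedged sketch, assuming the backup file was encrypted with AES-256-CBC as is typical for SDDC Manager file-based backups, the extraction might look like this; you are prompted for the encryption password that was set in the backup configuration:

# Decrypt and extract the backup archive into the current directory (includes metadata.json).
OPENSSL_FIPS=1 openssl enc -d -aes-256-cbc -md sha256 -in <backup_file>.tar.gz | tar -xz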
4 In the extracted folder, locate and open the metadata.json file in a text editor.
6 In a web browser, paste the URL and download the OVA file.
8 Locate the entityType BACKUP value and record the backup password.
Procedure
1 In a web browser, log in to management domain vCenter Server by using the vSphere Client
(https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, browse to the
location of the SDDC Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, in the Virtual machine name text box, enter a virtual
machine name, and click Next.
8 On the Review details page, review the settings and click Next.
9 On the License agreements page, accept the license agreement and click Next.
10 On the Select storage page, select the vSAN datastore and click Next.
The datastore must match the vsan_datastore value in the metadata.json file that you
downloaded during the preparation for the restore.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group and click Next.
The distributed port group must match the port_group value in the metadata.json file that
you downloaded during the preparation for the restore.
12 On the Customize template page, enter the following values and click Next.
Setting Description
Enter root user password You can use the original root user password or a new
password.
Enter login (vcf) user password You can use the original vcf user password or a new
password.
Enter basic auth user password You can use the original admin user password or a new
password.
Enter backup (backup) user password The backup password that you saved during the
preparation for the restore. This password can be
changed later if desired.
Enter Local user password You can use the original Local user password or a new
password.
Domain search path The domain search path(s) for the appliance.
13 On the Ready to complete page, click Finish and wait for the process to complete.
14 When the SDDC Manager appliance deployment completes, expand the management folder.
15 Right-click the SDDC Manager appliance and select Snapshots > Take Snapshot.
16 Right-click the SDDC Manager appliance, select Power > Power On.
17 On the host machine, copy the encrypted backup file to the /tmp folder on the newly
deployed SDDC Manager appliance by running the following command. When prompted,
enter the vcf_user_password.
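A minimal example of the copy, with placeholder names, assuming scp is available on the host machine:

# Copy the encrypted backup file to /tmp on the new SDDC Manager appliance.
scp /path/to/<backup_file>.tar.gz vcf@<sddc_manager_fqdn>:/tmp/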
18 On the host machine, obtain the authentication token from the SDDC Manager appliance in
order to be able to execute the restore process by running the following command:
19 On the host machine with access to the SDDC Manager, open a terminal and run the
command to start the restore process.
21 Monitor the restore task by using the following command until the status becomes
Successful.
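A hedged sketch of steps 18 through 21, assuming curl is available on the host machine; the /v1/tokens and /v1/restores/tasks endpoints and the restore specification fields shown here are assumptions based on the VMware Cloud Foundation API and should be verified against the API reference for your release:

# Step 18: obtain a token with the admin@local credentials.
curl -k -X POST "https://<sddc_manager_fqdn>/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username": "admin@local", "password": "<admin_local_password>"}'
# Step 19: start the restore, referencing the backup file copied to /tmp and its encryption password.
curl -k -X POST "https://<sddc_manager_fqdn>/v1/restores/tasks" \
  -H "Content-Type: application/json" -H "Authorization: Bearer <access_token>" \
  -d '{"elements": [{"resourceType": "SDDC_MANAGER"}], "backupFile": "/tmp/<backup_file>.tar.gz", "encryption": {"passphrase": "<encryption_password>"}}'
# Step 21: monitor the restore task (the ID is returned by the previous call) until the status is Successful.
curl -k "https://<sddc_manager_fqdn>/v1/restores/tasks/<restore_task_id>" \
  -H "Authorization: Bearer <access_token>"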
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
4 Manually delete the snapshot created in Restore SDDC Manager from a File-Based Backup.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Prerequisites
n Verify that you have a valid file-based backup of the failed vCenter Server instance.
To be valid, the backup must be of the version of the vCenter Server Appliance on which you
plan to restore the instance.
n SFTP Server IP
n Encryption Password
Procedure
Prerequisites
Because the Management domain vCenter Server might be unavailable to authenticate the login,
you use the SDDC Manager API via the shell to retrieve this information.
Procedure
3 For each vCenter Server instance, record the values of these settings.
Setting Value
Setting Value
version version_number-build_number
4 Verify that the vCenter Server version retrieved from SDDC Manager is the same as the
version associated with the backup file that you plan to restore.
Before you can query the SDDC Manager API, you must obtain an API access token by using the
admin@local account.
Prerequisites
Note If SDDC Manager is not operational, you can retrieve the required vCenter Server root,
vCenter Single Sign-On administrator, and ESXi root credentials from the file-based backup of
SDDC Manager. See Prepare for Restoring SDDC Manager.
Procedure
1 Log in to your host machine with access to the SDDC and open a terminal.
a Run the command to obtain an access token by using the admin@local credentials.
a Run the following command to retrieve the vCenter Server root credentials.
Setting Value
username root
password vcenter_server_root_password
a Run the following command to retrieve the vCenter Single Sign-On administrator
credentials.
Setting Value
username administrator@vsphere.local
password vsphere_admin_password
5 If you plan to restore the management domain vCenter Server, retrieve the credentials for a
healthy management domain ESXi host.
a Run the following command to retrieve the credentials for a management domain ESXi
host.
username root
password esxi_root_password
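A consolidated sketch of the credential lookups above, assuming curl and jq are available on the host machine; the FQDNs and passwords are placeholders:

# Obtain an access token with the admin@local credentials.
TOKEN=$(curl -sk -X POST "https://<sddc_manager_fqdn>/v1/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username": "admin@local", "password": "<admin_local_password>"}' | jq -r '.accessToken')
# Retrieve the credentials for a specific resource, for example the failed vCenter Server.
curl -sk "https://<sddc_manager_fqdn>/v1/credentials?resourceName=<vcenter_server_fqdn>" \
  -H "Authorization: Bearer $TOKEN" | jq .
# Repeat with the FQDN of a healthy management domain ESXi host to retrieve its root credentials.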
You deploy a new vCenter Server appliance and perform a file-based restore. If you are restoring
the management domain vCenter Server, you deploy the new appliance on a healthy ESXi host
in the management domain vSAN cluster. If you are restoring the VI workload domain vCenter
Server, you deploy the new appliance on the management domain vCenter Server.
Prerequisites
n Download the vCenter Server ISO file for the version of the failed instance. See Retrieve the
vCenter Server Deployment Details.
n If you are recovering the VI workload domain vCenter Server, verify that the management
vCenter Server is available.
Procedure
1 Mount the vCenter Server ISO image to your host machine with access to the SDDC and run
the UI installer for your operating system.
2 Click Restore.
b On the End user license agreement page, select the I accept the terms of the license
agreement check box and click Next.
c On the Enter backup details page, enter these values and click Next.
Password vsphere-service-account-password
d On the Review backup information page, review the backup details, record the vCenter
Server configuration information, and click Next.
You use the vCenter Server configuration information at a later step to determine the
deployment size for the new vCenter Server appliance.
e On the vCenter Server deployment target page, enter the values by using the
information that you retrieved during the preparation for the restore, and click Next.
ESXi host or vCenter Server name: FQDN of the first ESXi host (when restoring the management
domain vCenter Server) or FQDN of the management vCenter Server (when restoring a VI workload domain vCenter Server)
f In the Certificate warning dialog box, click Yes to accept the host certificate.
g On the Set up a target vCenter Server VM page, enter the values by using the
information that you retrieved during the preparation for the restore, and click Next.
Setting Value
h On the Select deployment size page, select the deployment size that corresponds with
the vCenter Server configuration information from Step 3.d and click Next.
Refer to the vSphere documentation to map the CPU count recorded in Step 3.d to a vCenter
Server deployment size.
i On the Select datastore page, select these values, and click Next.
Setting Value
j On the Configure network settings page, enter the values by using the information that
you retrieved during the preparation for the restore, and click Next.
Setting Value
IP version IPV4
IP assignment static
k On the Ready to complete stage 1 page, review the restore settings and click Finish.
b On the Backup details page, in the Encryption password text box, enter the encryption
password of the SFTP server and click Next.
c On the Single Sign-On configuration page, enter these values and click Next.
Setting Value
d On the Ready to complete page, review the restore details and click Finish.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
4 Right-click the appliance of the restored vCenter Server instance and select Move to folder.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
2 In the inventory, click the management domain vCenter Server inventory, click the Summary
tab, and verify that there are no unexpected vCenter Server alerts.
3 Click the Linked vCenter Server systems tab and verify that the list contains all other vCenter
Server instances in the vCenter Single Sign-On domain.
4 Log in to the recovered vCenter Server instance by using a Secure Shell (SSH) client.
cd /usr/lib/vmware-vmdir/bin
a Run the command to list the current replication partners of the vCenter Server instance
with the current replication status between the nodes.
b Verify that for each partner, the vdcrepadmin command output contains Host
available: Yes, Status available: Yes, and Partner is 0 changes behind.
c If you observe significant differences, because the resyncing might take some time, wait
five minutes and repeat this step.
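A sketch of the command referenced in substep a above, based on the vdcrepadmin usage that the verification criteria imply; the password is the vCenter Single Sign-On administrator password:

# List replication partners and their status from the vmdir bin directory.
./vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w '<vsphere_admin_password>'
# For each partner, expect: Host available: Yes, Status available: Yes, Partner is 0 changes behind.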
Procedure
a Click the workload domain name and click the Updates/Patches tab.
b Click Precheck.
c Click View status to review the precheck result for the vCenter Server instance and verify
that the status is Succeeded.
This procedure restores only the vSphere Distributed Switch configuration of a vCenter Server
instance.
The restore operation changes the settings on the vSphere Distributed Switch back to the
settings saved in the configuration file. The operation overwrites the current settings of the
vSphere Distributed Switch and its port groups. The operation does not delete existing port
groups that are not a part of the configuration file.
The vSphere Distributed Switch configuration is part of the vCenter Server backup. If you want to
restore the entire vCenter Server instance, see Restore vCenter Server.
Procedure
1 In a web browser, log in to the vCenter Server by using the vSphere Client (https://
<vcenter_server_fqdn>/ui).
4 Expand the Management networks folder, right-click the distributed switch and select
Settings > Restore configuration.
5 On the Restore switch configuration page, click Browse, navigate to the location of the
configuration file for the distributed switch, and click Open.
6 Select the Restore distributed switch and all port groups radio-button and click Next.
7 On the Ready to complete page, review the changes and click Finish.
9 Review the switch configuration to verify that it is as you expect after the restore.
Prerequisites
n Verify that you have a valid file-based backup of the failed NSX Manager instance.
n SFTP Server IP
n Encryption Password
Procedure
5 Update or Recreate the VM Anti-Affinity Rule for the NSX Manager Cluster Nodes
During the NSX Manager bring-up process, SDDC Manager creates a VM anti-affinity rule to
prevent the VMs of the NSX Manager cluster from running on the same ESXi host. If you
redeployed all NSX Manager cluster nodes, you must recreate this rule. If you redeployed
one or two nodes of the cluster, you must add the new VMs to the existing rule.
Procedure
2 Retrieve the Credentials for Restoring NSX Manager from SDDC Manager
Before restoring a failed NSX Manager instance, you must retrieve the NSX Manager root
and admin credentials from the SDDC Manager inventory.
Procedure
4 Under Current versions, in the NSX panel, locate and record the NSX upgrade coordinator
value.
5 Verify that the NSX version retrieved from SDDC Manager is the same as the version
associated with the backup file that you plan to restore.
Retrieve the Credentials for Restoring NSX Manager from SDDC Manager
Before restoring a failed NSX Manager instance, you must retrieve the NSX Manager root and
admin credentials from the SDDC Manager inventory.
Before you can query the SDDC Manager API, you must obtain an API access token by using an
API service account.
Procedure
1 Log in to your host machine with access to the SDDC and open a terminal.
a Run the command to obtain an access token by using the admin@local account
credentials.
a Run the command to retrieve the NSX Manager root and admin credentials.
The command returns the NSX Manager root and admin credentials.
b Record the NSX Manager root and admin credentials for the instance you are restoring.
Important This procedure is not applicable in use cases when there are operational NSX
Manager cluster nodes.
n If two of the three NSX Manager nodes in the NSX Manager cluster are in a failed state,
you begin the restore process by deactivating the cluster. See Deactivate the NSX Manager
Cluster.
n If only one of the three NSX Manager nodes in the NSX Manager cluster is in a failed state,
you directly restore the failed node to the cluster. See Restore an NSX Manager Node to an
Existing NSX Manager Cluster.
Procedure
2 Restore the First Node in a Failed NSX Manager Cluster from a File-Based Backup
You restore the file-based backup of the first NSX Manager cluster node to the newly
deployed NSX Manager instance.
Prerequisites
n Download the NSX Manager OVA file for the version of the failed NSX Manager cluster. See
Retrieve the NSX Manager Version from SDDC Manager.
n Verify that the backup file that you plan to restore is associated with the version of the failed
NSX Manager cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, navigate to the
location of the NSX Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, enter the VM name and click Next.
9 On the Configuration page, select the appropriate size and click Next.
For the management domain, select Medium and for workload domains, select Large unless
you changed these defaults during deployment.
10 On the Select storage page, select the vSAN datastore, and click Next.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group, and click Next.
12 On the Customize template page, enter these values and click Next.
Default IPv4 gateway Enter the default gateway for the appliance.
Management network IPv4 address Enter the IP Address for the appliance.
Management network netmask Enter the subnet mask for the appliance.
DNS server list Enter the DNS servers for the appliance.
NTP server list Enter the NTP server for the appliance.
13 On the Ready to complete page, review the deployment details and click Finish.
Restore the First Node in a Failed NSX Manager Cluster from a File-Based Backup
You restore the file-based backup of the first NSX Manager cluster node to the newly deployed
NSX Manager instance.
Procedure
1 In a web browser, log in to the NSX Manager node for the domain by using the user interface
(https://<nsx_manager_node_fqdn>/login.jsp?local=true)
3 In the left navigation pane, under Lifecycle management, click Backup and restore.
5 In the Backup configuration dialog box, enter these values, and click Save.
Setting Value
Protocol SFTP
Port 22
Password service_account_password
6 Under Backup history, select the target backup, and click Restore.
7 During the restore, when prompted, reject adding NSX Manager nodes by clicking I
understand and Resume.
Results
A progress bar displays the status of the restore operation with the current step of the process.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM name of the newly deployed first NSX Manager cluster node, click Launch Web
Console, and log in by using administrator credentials.
Setting Value
Password nsx_admin_password
Important This procedure is not applicable in use cases when there are two operational NSX
Manager cluster nodes.
If only one of the three NSX Manager nodes in the NSX Manager cluster is in a failed state, after
you prepared for the restore, you directly restore the failed node to the cluster. See Restore an
NSX Manager Node to an Existing NSX Manager Cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of the operational NSX Manager node in the cluster, click Launch Web Console,
and log in by using administrator credentials.
Setting Value
Password nsx_admin_password
deactivate cluster
6 On the Are you sure you want to remove all other nodes from this cluster? (yes/no) prompt,
enter yes.
What to do next
Power off and delete the two failed NSX Manager nodes from inventory.
Procedure
1 Detach the Failed NSX Manager Node from the NSX Manager Cluster
Before you recover a failed NSX Manager node, you must detach the failed node from the
NSX Manager cluster.
3 Join the New NSX Manager Node to the NSX Manager Cluster
You join the newly deployed NSX Manager node to the cluster by using the virtual machine
web console from the vSphere Client.
Detach the Failed NSX Manager Node from the NSX Manager Cluster
Before you recover a failed NSX Manager node, you must detach the failed node from the NSX
Manager cluster.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of an operational NSX Manager node in the cluster, click Launch Web Console,
and log in by using administrator credentials.
Setting Value
Password nsx_admin_password
6 Run the command to detach the failed node from the cluster.
7 When the detaching process finishes, run the command to view the cluster status.
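As a sketch of steps 6 and 7 on the NSX Manager command line, where the node UUID comes from the get cluster status output:

get cluster status
detach node <failed_node_uuid>
get cluster status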
Prerequisites
Download the NSX Manager OVA file for the version of the failed NSX Manager instance. See
Retrieve the NSX Manager Version from SDDC Manager.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
5 On the Select an OVF template page, select Local file, click Upload files, navigate to the
location of the NSX Manager OVA file, click Open, and click Next.
6 On the Select a name and folder page, in the Virtual machine name text box, enter VM name
of the failed node, and click Next.
10 On the Select storage page, select the vSAN datastore, and click Next.
11 On the Select networks page, from the Destination network drop-down menu, select the
management network distributed port group, and click Next.
12 On the Customize template page, enter these values and click Next.
Setting Value
Hostname failed_node_FQDN
Default IPv4 gateway Enter the default gateway for the appliance.
Management network netmask Enter the subnet mask for the appliance.
DNS server list Enter the DNS servers for the appliance.
NTP servers list Enter the NTP services for the appliance.
13 On the Ready to complete page, review the deployment details and click Finish.
Join the New NSX Manager Node to the NSX Manager Cluster
You join the newly deployed NSX Manager node to the cluster by using the virtual machine web
console from the vSphere Client.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Click the VM of an operational NSX Manager node in the cluster, click Launch web console,
and log in by using administrator credentials.
Setting Value
Password nsx_admin_password
8 In the vSphere Client, click the VM of the newly deployed NSX Manager node, click Launch
Web console, and log in by using administrator credentials.
Setting Value
Password nsx_admin_password
9 Run the command to join the new NSX Manager node to the cluster.
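A sketch of the join sequence: run get cluster config and get certificate api thumbprint on the operational node in the earlier steps, then run join on the new node in step 9. The exact parameter order should be confirmed against the NSX CLI reference for your version:

get cluster config
get certificate api thumbprint
join <operational_node_ip> cluster-id <cluster_id> username admin password <nsx_admin_password> thumbprint <api_thumbprint>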
To view the state of the NSX Manager cluster, you log in to the NSX Manager for the particular
domain.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
4 Verify that the Cluster status is green and Stable and that each cluster node is Available.
To view the certificate of the failed NSX Manager cluster node, you log in to the NSX Manager for
the domain.
For a workload domain NSX Manager cluster, the URL is https://<FQDN of workload domain NSX Manager>/login.jsp?local=true.
This procedure is an example for restoring the certificate of a management domain NSX Manager
cluster node.
Procedure
1 In a Web browser, log in to the NSX Manager cluster for the management domain.
Setting Value
Password nsx_admin_password
4 Locate and copy the ID of the certificate that was issued by CA to the node that you are
restoring.
5 Run the command to install the CA-signed certificate on the new NSX Manager node.
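A hedged example of applying the recorded certificate ID to the new node's HTTP service through the NSX API; the FQDN and admin credentials are placeholders:

# Apply the CA-signed certificate to the new NSX Manager node.
curl -k -X POST -u 'admin:<nsx_admin_password>' \
  "https://<new_nsx_manager_node_fqdn>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate_id>"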
What to do next
Important If assigning the certificate fails because the certificate revocation list (CRL) verification
fails, see https://kb.vmware.com/kb/78794. If you disable the CRL checking to assign the
certificate, after assigning the certificate, you must re-enable the CRL checking.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
3 In the inventory expand vCenter Server > Datacenter > NSX Folder.
4 Right click the new NSX Manager VM and select Guest OS > Restart.
To view the system status of the NSX Manager cluster, you log in to the NSX Manager for the
particular domain.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
4 If the host transport nodes are in a Pending state, run Configure NSX on these nodes to
refresh the UI.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Update or Recreate the VM Anti-Affinity Rule for the NSX Manager Cluster
Nodes
During the NSX Manager bring-up process, SDDC Manager creates a VM anti-affinity rule to
prevent the VMs of the NSX Manager cluster from running on the same ESXi host. If you
redeployed all NSX Manager cluster nodes, you must recreate this rule. If you redeployed one or
two nodes of the cluster, you must add the new VMs to the existing rule.
Procedure
1 In a web browser, log in to the management domain vCenter Server by using the vSphere
Client (https://<vcenter_server_fqdn>/ui).
n If you redeployed one or two nodes of the cluster, add the new VMs to the existing rule.
b Click Add VM/Host rule member, select the new NSX Manager cluster nodes, and
click Add.
n If you redeployed all NSX Manager cluster nodes, click Add VM/Host rule, enter these
values to create the rule, and click OK.
Setting Value
Procedure
a Run the command to view the details about the VMware Cloud Foundation system.
3 Run the command to collect the log files from the restore of the NSX Manager cluster.
What to do next
Refresh the SSH keys that are stored in the SDDC Manager inventory. See VMware Cloud
Foundation SDDC Manager Recovery Scripts (79004).
Procedure
2 Replace the Failed NSX Edge Node with a Temporary NSX Edge Node
You deploy a temporary NSX Edge node in the domain, add it to the NSX Edge cluster, and
then delete the failed NSX Edge node.
3 Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After you have replaced and deleted the failed NSX Edge node, to return the NSX Edge cluster
to its original state, you redeploy the failed node, add it to the NSX Edge cluster, and then delete
the temporary NSX Edge node.
Procedure
1 Retrieve the NSX Edge Node Deployment Details from NSX Manager Cluster
Before restoring a failed NSX Edge node, you must retrieve its deployment details from the
NSX Manager cluster.
Retrieve the NSX Edge Node Deployment Details from NSX Manager Cluster
Before restoring a failed NSX Edge node, you must retrieve its deployment details from the NSX
Manager cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
10 Click the name of the NSX Edge node that you plan to replace and record the following
values.
n Name
n Management IP
n Transport Zones
n Edge Cluster
n Uplink Profile
n IP Assignment
Procedure
1 In the SDDC Manager user interface, from the navigation pane click Developer center.
4 In the resourceName text box, enter the FQDN of the failed NSX Edge node, and click
Execute.
You use the SDDC Manager user interface to retrieve the ID of the vSphere cluster for the
workload domain.
Procedure
1 In the SDDC Manager user interface, from the navigation pane click Developer center.
3 Expand APIs for managing clusters, click GET /v1/clusters, and click Execute.
5 Record the ID of the cluster for the workload domain cluster ID.
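The same lookup can be scripted against the API, as a sketch assuming curl and jq on the host machine and an existing access token:

# List clusters and record the ID of the cluster that belongs to the workload domain.
curl -sk "https://<sddc_manager_fqdn>/v1/clusters" \
  -H "Authorization: Bearer <access_token>" | jq -r '.elements[] | "\(.id)  \(.name)"'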
Replace the Failed NSX Edge Node with a Temporary NSX Edge Node
You deploy a temporary NSX Edge node in the domain, add it to the NSX Edge cluster, and then
delete the failed NSX Edge node.
Procedure
2 Replace the Failed NSX Edge Node with the Temporary NSX Edge Node
You add the temporary NSX Edge node to the NSX Edge cluster by replacing the failed NSX
Edge node.
3 Delete the Failed NSX Edge Node from the NSX Manager Cluster
After replacing the failed NSX Edge node with the temporary NSX Edge node in the NSX
Edge cluster, you delete the failed node.
Prerequisites
Allocate the FQDN and IP address for the temporary NSX Edge node for the domain of the failed
node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
6 On the Name and description page, enter these values and click Next.
Setting Value
7 On the Credentials page, enter these values and the passwords recorded in the earlier steps
and then click Next.
Setting Value
8 On the Configure deployment page, select the following and click Next.
Setting Value
9 On the Configure node settings page, enter these values and click Next.
Setting Value
IP Assignment Static
Setting Value
10 On the Configure NSX page, enter these values which are already recorded and click Finish.
Setting Value
Teaming policy switch mapping Enter the values for Uplink1 and Uplink2.
Replace the Failed NSX Edge Node with the Temporary NSX Edge Node
You add the temporary NSX Edge node to the NSX Edge cluster by replacing the failed NSX
Edge node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
7 From the Replace drop down menu, select the Failed edge node and from the with drop
down menu, select the Temporary edge node and then click Save.
Delete the Failed NSX Edge Node from the NSX Manager Cluster
After replacing the failed NSX Edge node with the temporary NSX Edge node in the NSX Edge
cluster, you delete the failed node.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
5 Select the check-box for the failed NSX Edge node and click Delete.
You validate the state of the temporary NSX Edge node and the second NSX Edge node in the
cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
Setting Value
Node status Up
Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After you have replaced and deleted the failed NSX Edge node, to return the NSX Edge cluster to
its original state, you redeploy the failed node, add it to the NSX Edge cluster, and then delete the
temporary NSX Edge node.
Procedure
2 Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After deploying the new NSX Edge node with the same configuration as the failed NSX Edge
node, you replace the temporary NSX Edge node with the redeployed failed node in the
NSX Edge cluster.
4 Update or Recreate the VM Anti-Affinity Rule for the NSX Edge Cluster Nodes
During the NSX Edge deployment process, SDDC Manager creates a VM anti-affinity rule
to prevent the nodes of the NSX Edge cluster from running on the same ESXi host. If you
redeployed the two NSX Edge cluster nodes, you must recreate this rule. If you redeployed
one node of the cluster, you must add the new VM to the existing rule.
To return the NSX Edge cluster to the original state, you must use the FQDN and IP address of
the failed NSX Edge node that you deleted. This procedure ensures that the inventory in SDDC
Manager is accurate.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
6 On the Name and description page, enter these values and click Next.
Setting Value
7 On the Credentials page, enter these values which are recorded earlier and click Next.
Setting Value
8 On the Configure deployment page, select these values and click Next.
Setting Value
9 On the Configure Node Settings page, enter these values and click Next.
Setting Value
IP assignment Static
10 On the Configure NSX page, enter these values which are recorded earlier and click Finish.
Setting Value
Teaming policy switch mapping Enter the values for Uplink1 and Uplink2.
Replace the Temporary NSX Edge Node with the Redeployed NSX Edge Node
After deploying the new NSX Edge node with the same configuration as the failed NSX Edge
node, you replace the temporary NSX Edge node with the redeployed failed node in the NSX
Edge cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
7 From the Replace drop down menu, select the temporary node and from the with drop down
menu, select the new node and then click Save.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
3 In the left pane, under Configuration, click Fabric > Nodes.
5 Select the check-box for the temporary NSX Edge node and click Delete.
Update or Recreate the VM Anti-Affinity Rule for the NSX Edge Cluster Nodes
During the NSX Edge deployment process, SDDC Manager creates a VM anti-affinity rule to
prevent the nodes of the NSX Edge cluster from running on the same ESXi host. If you
redeployed the two NSX Edge cluster nodes, you must recreate this rule. If you redeployed
one node of the cluster, you must add the new VM to the existing rule.
Procedure
1 In a web browser, log in to the domain vCenter Server by using the vSphere Client (https://
<vcenter_server_fqdn>/ui).
n If you redeployed one of the nodes in the NSX Edge cluster, add the new VM to the
existing rule.
b Click Add VM/Host rule member, select the new NSX Edge cluster node, and click
Add.
n If you redeployed the two nodes in the NSX Edge cluster, click Add VM/Host rule, enter
these values to create the rule, and click OK.
Setting Value
You validate the state of the redeployed NSX Edge node and the second NSX Edge node in the
cluster.
Procedure
1 In a web browser, log in to the NSX Manager cluster for the domain by using the user
interface (https://<nsx_manager_cluster_fqdn>/login.jsp?local=true)
Setting Value
Node status Up
Backup software that is compatible with vSphere Storage APIs - Data Protection connects to the
vCenter Server instances in the management domain to perform backups. In the event of a failure,
the backup software connects to those same vCenter Server instances to restore the VMs. If the
management domain is lost, the vCenter Server instances are no longer available and must be restored
first. Choosing backup software that supports Direct Restore to an ESXi host allows you to restore
the vCenter Server instances.
Connect your backup solution to the management domain vCenter Server and configure it. To
reduce the backup time and storage cost, use incremental backups in addition to full backups.
Quiesced backups are enabled for VMware Aria Suite Lifecycle and Workspace ONE Access.
Note Review the VMware Interoperability Matrix to verify compatibility and upgradability before
planning and starting an upgrade.
You can perform a sequential or skip-level upgrade to VMware Cloud Foundation 5.2.x on Dell
VxRail from VMware Cloud Foundation 4.5 or later. If your environment is at a version earlier than
4.5, you must upgrade the management domain and all VI workload domains to VMware Cloud
Foundation 4.5 or later and then upgrade to VMware Cloud Foundation 5.2.x.
Warning Clusters with vSphere with Tanzu enabled may require a specific upgrade sequence. See
KB 88962 for more information.
The first step is to download the bundles for each VMware Cloud Foundation on Dell VxRail
component that requires an upgrade. After all of the bundles are available in SDDC Manager,
upgrade the management domain and then your VI workload domains.
n SDDC Manager only - You have updated SDDC Manager to 5.2, but none of the other BOM
components.
n Split BOM - Management domain or VI Workload Domain is only partially updated to VMware
Cloud Foundation 5.2.
n Mixed 4.5.x/5.x BOM - Some workload domains (Management or VI) have been completely
upgraded to VMware Cloud Foundation 5.2 and at least one VI Workload Domain is at the
Source 4.5.x BOM version.
n Mixed 5.x BOM - Some workload domains (Management or VI) have been completely
upgraded to VMware Cloud Foundation 5.2 and at least one VI Workload Domain is at the
Source 5.0 or 5.1 BOM version.
When a VMware Cloud Foundation instance is at the Source BOM or Target BOM, the features available within SDDC Manager are as expected for that release. However, when in a Mixed BOM state, the available operations vary per workload domain, depending on the state of the domain itself.
The following functions remain available within SDDC Manager during an upgrade, in all of the BOM states described above:
n CEIP: Activate / Deactivate CEIP
n Certificate Management: View / Generate / Upload / Install
n Validate / Configure NTP
n Relicensing
n License check
n LCM: Connect to VMware or Dell Depot / Download Bundles
n LCM: Schedule Bundle Download
n LCM: Install vCenter Patch
n Password Management: Rotate / Update / Retry / Cancel
n Workload Domain: Add / Remove ESXi Host
n Workload Domain: Add / Remove vSphere Cluster
n Workload Domain: Add 5.x Workload Domain in ELM mode
n Workload Domain: Remove 5.0 Workload Domain
It may be possible to upgrade some vSphere Client plug-ins before upgrading to vSphere 8.0. Contact your third-party vendor to determine the best upgrade path.
Procedure
1 In the In-Progress Updates section, click View Status to view the high-level update progress
and the number of components to be updated.
2 Details of the component being updated are shown below that. The image below is an example and may not reflect the actual versions.
3 Click the arrow to see a list of tasks being performed to update the component. As each task is completed, it shows a green check mark.
4 When all tasks to update a component have been completed, the update status for the
component is displayed as Updated.
5 If a component fails to be updated, the status is displayed as Failed. The reason for the failure
as well as remediation steps are displayed. The image below is an example and may not
reflect the actual versions in your environment.
6 After you resolve the issues, you can retry the update.
Procedure
2 Click the name of a workload domain and then click the Update History tab.
All updates applied to this workload domain are displayed. If an update bundle was applied
more than once, click View Past Attempts to see more information.
1 SSH in to the SDDC Manager appliance with the vcf user name and enter the password (a connection sketch follows this procedure).
3 To create an sos bundle for support, see Supportability and Serviceability (SoS) Utility in the
VMware Cloud Foundation Administration Guide.
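A minimal connection sketch for step 1, assuming a hypothetical SDDC Manager FQDN:

# Connect to the SDDC Manager appliance as the vcf user; you are prompted for the password.
ssh vcf@sddc-manager.vcf.example.com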
If the SDDC Manager appliance cannot connect to the internet, you can use the Bundle Transfer
Utility or connect to an offline depot.
See Public URL list for SDDC Manager for information about the URLs that must be accessible to
download bundles.
n Install Bundles
An install bundle includes software binaries to install VI workload domains (vCenter Server
and NSX) and VMware Aria Suite Lifecycle. You download install bundles using the same
process that you use for upgrade bundles.
An async patch bundle allows you to apply critical patches to certain VMware Cloud
Foundation components (NSX Manager, and vCenter Server) when an update or upgrade
bundle is not available. If you are running VMware Cloud Foundation 5.1 or earlier, you
must use the Async Patch Tool to download an async patch bundle. See Async Patch Tool.
Starting with VMware Cloud Foundation 5.2, you can download async patches using the
SDDC Manager UI or Bundle Transfer Utility.
n Online depot
n Offline depot
You can only connect SDDC Manager to one type of depot. If SDDC Manager is connected to an
online depot and you configure a connection to an offline depot, the online depot connection is
disabled and deleted.
Prerequisites
To connect to the online depot, SDDC Manager must be able to connect to the internet, either
directly or through a proxy server.
To connect to an offline depot, you must first configure it. See KB 312168 for information about
the requirements and process for creating an offline depot. To download bundles to an offline
depot, see "Download Bundles to an Offline Depot" in the VMware Cloud Foundation Lifecycle
Management Guide.
Procedure
SDDC Manager attempts to connect to the depot. If the connection is successful, SDDC
Manager starts looking for available bundles. To view available bundles, click Lifecycle
Management > Bundle Management and then click the Bundles tab. It may take some time
for all available bundles to appear.
If SDDC Manager does not have direct internet access, configure a proxy server or use the
Bundle Transfer Utility for offline bundle downloads.
When you download bundles, SDDC Manager verifies that the file size and checksum of the
downloaded bundles match the expected values.
Prerequisites
Connect SDDC Manager to an online or offline depot. See Connect SDDC Manager to a Software
Depot for Downloading Bundles.
Procedure
Note If you just connected SDDC Manager to a depot, it can take some time for bundles to
appear.
All available bundles are displayed. Install bundles display an Install Only Bundle label. If the bundle can be applied right away, the Bundle Details column displays the workload domains to which the bundle needs to be applied, and the Availability column says Available. If another bundle must be applied before a particular bundle, the Availability field displays Future.
To view more information about the bundle, click View Details. The Bundle Details section
displays the bundle version, release date, and additional details about the bundle.
Select the date and time for the bundle download and click Schedule.
Procedure
6 If your proxy server requires authentication, toggle the Authentication setting to the on
position and enter the user name and password.
7 Click Save.
What to do next
You can now download bundles as described in Download Bundles Using SDDC Manager.
Using the Bundle Transfer Utility to upgrade to VMware Cloud Foundation 5.2.x involves the
following steps:
n On a computer with access to the internet, use the Bundle Transfer Utility to download the
bundles and other required files.
n Copy the bundles and other required files to the SDDC Manager appliance.
n On the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundles and
other required files to the internal LCM repository.
If the computer with internet access can only access the internet using a proxy server, use the
following options when downloading:
Option Description
./btuJre/lin64/bin/keytool -importcert -file proxy.crt -keystore ./btuJre/lin64/lib/security/cacerts
Prerequisites
n A Windows or Linux computer with internet connectivity (either directly or through a proxy)
for downloading the bundles and other required files.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when using the Bundle Transfer Utility for long-running operations (a configuration sketch follows these prerequisites).
Note The Bundle Transfer Utility is the only supported method for downloading bundles. Do not
use third-party tools or other methods to download bundles.
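One way to satisfy the TCP keepalive prerequisite above with an OpenSSH client is shown below. The host pattern and interval values are illustrative assumptions, not required settings.

# ~/.ssh/config on the machine that opens the long-running Bundle Transfer Utility session
Host sddc-manager.vcf.example.com
    ServerAliveInterval 60     # send an application-level keepalive every 60 seconds
    ServerAliveCountMax 10     # tolerate up to 10 unanswered keepalives before disconnecting
    TCPKeepAlive yes           # also enable TCP-level keepalive probes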
Procedure
1 Download the most recent version of the Bundle Transfer Utility on a computer with internet
access.
a Log in to the Broadcom Support Portal and browse to My Downloads > VMware Cloud
Foundation.
b Click the version of VMware Cloud Foundation to which you are upgrading.
e Extract lcm-tools-prod.tar.gz.
f Navigate to the lcm-tools-prod/bin/ directory and confirm that you have execute permission on all folders.
2 Download bundles and other artifacts to the computer with internet access (an end-to-end command sketch follows this procedure).
This is a structured metadata file that contains information about the VMware product
versions included in the release Bill of Materials.
./lcm-bundle-transfer-util --vsanHclDownload
where
absolute-path-output-dir   Path to the directory where the bundle files should be downloaded. This directory must have 777 permissions. If you do not specify the download directory, bundles are downloaded to the default directory with 777 permissions.
depotUser   User name for the Broadcom Support Portal. You will be prompted to enter the depot user password. If there are any special characters in the password, specify the password within single quotes.
n all
n install
n patch
You can also enter a comma-separated list of bundle names to download specific
bundles. For example: bundle-38371, bundle-38378.
Download progress for each bundle is displayed. Wait until all bundles are downloaded
successfully.
n Manifest file
n vSAN HCL
You can select any location on the SDDC Manager appliance that has enough free space
available. For example, /nfs/vmware/vcf/nfs-mount/.
Note Make sure to copy the entire output directory, including any VxRail bundles and JSON
files.
a SSH in to the SDDC Manager appliance using the vcf user account.
mkdir /opt/vmware/vcf/lcm/lcm-tools
d Copy the Bundle Transfer Utility file (lcm-tools-prod.tar.gz) that you downloaded in
step 1 to the /opt/vmware/vcf/lcm/lcm-tools directory.
cd /opt/vmware/vcf/lcm/
6 From the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundles and
artifacts.
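The individual commands for the preceding steps are not reproduced in this extract. The following end-to-end sketch illustrates one possible flow; it is not the guide's exact syntax. The download and upload flag names (--download, --outputDirectory, --depotUser, --upload, --bundleDirectory) are assumptions based on the parameter names described above, the FQDN sddc-manager.vcf.example.com is hypothetical, and /data/vcf-bundles is an arbitrary working directory. Verify the options against the utility help for your release before running them.

# On the computer with internet access: create a download directory with 777 permissions
# and download the bundles (flag names assumed; see the parameter descriptions above).
mkdir -p /data/vcf-bundles && chmod 777 /data/vcf-bundles
./lcm-bundle-transfer-util --download --outputDirectory /data/vcf-bundles --depotUser 'depot_user@example.com'

# Copy the entire output directory, including any VxRail bundles and JSON files,
# and the Bundle Transfer Utility archive to the SDDC Manager appliance.
ssh vcf@sddc-manager.vcf.example.com 'mkdir -p /opt/vmware/vcf/lcm/lcm-tools'
scp -r /data/vcf-bundles vcf@sddc-manager.vcf.example.com:/nfs/vmware/vcf/nfs-mount/
scp lcm-tools-prod.tar.gz vcf@sddc-manager.vcf.example.com:/opt/vmware/vcf/lcm/lcm-tools/

# On the SDDC Manager appliance: extract the utility and upload the bundles to the internal LCM repository.
ssh vcf@sddc-manager.vcf.example.com
cd /opt/vmware/vcf/lcm/lcm-tools && tar -xzf lcm-tools-prod.tar.gz
cd bin
./lcm-bundle-transfer-util --upload --bundleDirectory /nfs/vmware/vcf/nfs-mount/vcf-bundles   # flag names assumed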
An independent SDDC Manager release includes a fourth digit in its version number, for example
SDDC Manager 5.2.0.1.
n On a computer with access to the internet, use the Bundle Transfer Utility to download the
independent SDDC Manager bundle and other required files.
n Copy the bundle and other required files to the SDDC Manager appliance.
n On the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundle and
other required files to the internal LCM repository.
If the computer with internet access can only access the internet using a proxy server, use the
following options when downloading:
Option Description
Prerequisites
n A Windows or Linux computer with internet connectivity (either directly or through a proxy)
for downloading the bundles and other required files.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when
using the Bundle Transfer Utility for long-running operations.
n The computer with internet connectivity and the SDDC Manager appliance must have the
latest version of the Bundle Transfer Utility installed and configured. See Offline Download of
VMware Cloud Foundation 5.2.x Upgrade Bundles for more information.
Procedure
1 Download bundles and other artifacts to the computer with internet access.
This is a structured metadata file that contains information about the VMware product
versions included in the release Bill of Materials.
where
depotUser   User name for the Broadcom Support Portal. You will be prompted to enter the user password. If there are any special characters in the password, specify the password within single quotes.
absolute-path-output-dir   Path to the directory where the bundle files should be downloaded. This directory must have 777 permissions. If you do not specify the download directory, bundles are downloaded to the default directory with 777 permissions.
n Manifest file
3 From the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundles and
artifacts.
What to do next
After the upload completes successfully, you can use the SDDC Manager UI to upgrade SDDC
Manager. See Independent SDDC Manager Upgrade using the SDDC Manager UI.
n On a computer with access to the internet, use the Bundle Transfer Utility to download the
async patch bundle and other required files.
n Copy the bundle and other required files to the SDDC Manager appliance.
n On the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundle and
other required files to the internal LCM repository.
If the computer with internet access can only access the internet using a proxy server, use the
following options when downloading:
Option Description
Prerequisites
n A Windows or Linux computer with internet connectivity (either directly or through a proxy)
for downloading the bundles and other required files.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when
using the Bundle Transfer Utility for long-running operations.
n The computer with internet connectivity and the SDDC Manager appliance must have the
latest version of the Bundle Transfer Utility installed and configured. See Offline Download of
VMware Cloud Foundation 5.2.x Upgrade Bundles for more information.
Procedure
1 Download bundles and other artifacts to the computer with internet access.
This is a structured metadata file that contains information about the VMware product
versions included in the release Bill of Materials.
For example:
n Manifest file
3 From the SDDC Manager appliance, use the Bundle Transfer Utility to upload the bundles and
artifacts.
n Replace number with the bundle number you are uploading. For example: 12345 for
bundle-12345.
n Replace absolute-path-bundle-dir with the path to the location where you copied the
output directory. For example: /nfs/vmware/vcf/nfs-mount/upgrade-bundles.
What to do next
After the upload completes successfully, you can use the SDDC Manager UI to apply the async
patch. See Patching the Management and Workload Domains.
After you download the bundles, you can use the upgrade planner in the SDDC Manager UI to
select any supported version for each of the VMware Cloud Foundation BOM components. This
includes async patch versions as well as VCF BOM versions.
Offline download of flexible BOM upgrade bundles involves the following steps:
n On a computer with access to the internet, use the Bundle Transfer Utility to download the
required files.
n On the SDDC Manager appliance, use the Bundle Transfer Utility to upload the required files
to the internal LCM repository.
n On the SDDC Manager appliance, use the Bundle Transfer Utility to generate the
plannerFile.json.
n On the computer with access to the internet, download bundles using plannerFile.json.
n Copy the bundle directory to the SDDC Manager appliance and use the Bundle Transfer
Utility to upload the bundles to the internal LCM repository.
If the computer with internet access can only access the internet using a proxy server, use the
following options when downloading:
Option Description
Prerequisites
n A Windows or Linux computer with internet connectivity (either directly or through a proxy)
for downloading the bundles and other required files.
n A Windows or Linux computer with access to the SDDC Manager appliance for uploading the
bundles.
n To upload the manifest file from a Windows computer, you must have OpenSSL installed and
configured.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when
using the Bundle Transfer Utility for long-running operations.
n The computer with internet connectivity and the SDDC Manager appliance must both have the latest version of the Bundle Transfer Utility installed and configured. See Offline Download of VMware Cloud Foundation 5.2.x Upgrade Bundles for more information.
Procedure
The manifest is a structured metadata file that contains information about the VMware
product versions included in the release Bill of Materials.
You can select any location on the SDDC Manager appliance that has enough free space
available. For example, /nfs/vmware/vcf/nfs-mount/.
5 On the SDDC Manager appliance, use the Bundle Transfer Utility to generate a planner file.
For example:
7 On the computer with access to the internet, download the bundles using the
plannerFile.json.
10 Upload the bundle directory to the SDDC Manager appliance internal LCM repository.
What to do next
In the SDDC Manager UI, browse to the Available Updates screen for the workload domain you are upgrading and click Schedule Update or Update Now to update the first component. Continue to update the VCF BOM components until they are all updated.
If the computer with internet access can only access the internet using a proxy server, use the
following options when downloading the HCL:
Option Description
Prerequisites
n A Windows or Linux computer with internet connectivity (either directly or through a proxy)
for downloading the HCL. To upload the HCL file from a Windows computer, you must have
OpenSSL installed and configured.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when
using the Bundle Transfer Utility for long-running operations.
Note The Bundle Transfer Utility is the only supported method for downloading HCL. Do not use
third-party tools or other methods to download HCL.
Procedure
1 Download the most recent version of the Bundle Transfer Utility on a computer with internet
access.
a Log in to the Broadcom Support Portal and browse to My Downloads > VMware Cloud
Foundation.
b Click the version of VMware Cloud Foundation to which you are upgrading.
2 Extract lcm-tools-prod.tar.gz.
3 Navigate to the lcm-tools-prod/bin/ directory and confirm that you have execute permission on all folders.
4 Copy the bundle transfer utility to a computer with access to the SDDC Manager appliance
and then copy the bundle transfer utility to the SDDC Manager appliance.
a SSH in to the SDDC Manager appliance using the vcf user account.
mkdir /opt/vmware/vcf/lcm/lcm-tools
d Copy the Bundle Transfer Utility file (lcm-tools-prod.tar.gz) that you downloaded in
step 1 to the /opt/vmware/vcf/lcm/lcm-tools directory.
cd /opt/vmware/vcf/lcm/
./lcm-bundle-transfer-util --vsanHclDownload
7 From the SDDC Manager appliance, use the Bundle Transfer Utility to upload the HCL file (a sketch follows this procedure).
user   SDDC Manager user name. After this, the tool prompts for the user password.
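The exact commands for steps 6 and 7 are only partially shown in this extract. As a rough sketch under stated assumptions (the HCL file name all.json, its destination path, and the appliance FQDN are hypothetical), the flow is: download the HCL on the internet-connected computer, copy it to the appliance, and then run the utility on the appliance to upload it, supplying the SDDC Manager user when prompted.

# On the computer with internet access (step 6): download the vSAN HCL file.
./lcm-bundle-transfer-util --vsanHclDownload
# Copy the downloaded HCL file to the SDDC Manager appliance (file name and path are assumptions).
scp all.json vcf@sddc-manager.vcf.example.com:/nfs/vmware/vcf/nfs-mount/
# On the appliance (step 7): run the Bundle Transfer Utility again to upload the HCL file;
# the upload option is not shown in this extract, so consult the utility help for your release.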
You can use the Bundle Transfer Utility to download upgrade bundles and async patch bundles
to the offline depot.
Prerequisites
n The latest version of the Bundle Transfer Utility. You can download it from the Broadcom
Support portal.
n Internet connectivity (either directly or through a proxy) for downloading the bundles and
other required files.
n Configure TCP keepalive in your SSH client to prevent socket connection timeouts when
using the Bundle Transfer Utility for long-running operations.
n Connect SDDC Manager to the offline depot. See Connect SDDC Manager to a Software
Depot for Downloading Bundles.
Note You can also connect SDDC Manager to the offline depot after you download bundles
to the offline depot.
Procedure
1 On the computer hosting the offline depot, run the following command to download the bundles required to upgrade VMware Cloud Foundation.
For example:
2 Run the following command to download async patch bundles to the offline depot:
For example:
What to do next
After the bundles are available in the offline depot, you can use the SDDC Manager UI to apply
the bundles to workload domains. Multiple instances of SDDC Manager UI can connect to the
same offline depot.
n Allocate a temporary IP address for each vCenter Server upgrade: [Conditional] When upgrading from VMware Cloud Foundation 4.5.x. Required for each vCenter Server upgrade. Must be allocated from the management subnet. The IP address can be reused.
n Verify there are no expired or expiring passwords: Review the password management dashboard in SDDC Manager.
n Verify there are no expired or expiring certificates: Review the Certificates tab in SDDC Manager for each workload domain.
n Verify ESXi host TPM module status: [Conditional] If ESXi hosts have TPM modules in use, verify they are running the latest 2.0 firmware. If they are not in use, they must be disabled in the BIOS. See KB 312159.
n Verify ESXi hardware is compatible with the target version: See ESXi Requirements and the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility/search.php.
n Manually update the vSAN HCL database to ensure that it is up to date: See KB 2145116.
n Back up SDDC Manager, all vCenter Server instances, and NSX Manager instances: Take file-based or image-level backups of SDDC Manager, all vCenter Server instances, and NSX Manager instances. Take a cold snapshot of SDDC Manager.
n Make sure that there are no failed workflows in your system and none of the VMware Cloud Foundation resources are in an activating or error state: Caution If any of these conditions are true, contact VMware Technical Support before starting the upgrade.
n Deactivate all VMware Cloud Foundation 4.x async patches and run an inventory sync before upgrading: VMware Cloud Foundation 5.0 and later no longer require using the Async Patch Tool to enable upgrades from an async-patched VMware Cloud Foundation instance. See VMware Cloud Foundation Async Patch Tool Options for more information.
n Download the upgrade bundles: See Downloading VMware Cloud Foundation Upgrade Bundles.
n Apply the VMware Cloud Foundation Upgrade Bundle: If the initial version of VMware Cloud Foundation is 4.5.x or 5.x, upgrade SDDC Manager to 5.2.x.
n Apply the VMware Cloud Foundation Configuration Updates: Once SDDC Manager has been upgraded to 5.2.x, the configuration updates can be applied collectively.
n Upgrade VMware Aria Suite Lifecycle for VMware Cloud Foundation: [Conditional] If VMware Aria Suite Lifecycle is present.
n Upgrade VMware Aria Suite products for VMware Cloud Foundation: [Conditional] If VMware Aria Suite products are present.
n Upgrade NSX Global Managers to 4.2 (when NSX is deployed in the workload domain with NSX Federation configured):
n [Conditional] If NSX Federation is present.
n Upgrade NSX Global Managers to 4.2 using the Global Manager UI.
n Upgrade the standby global manager, followed by the active global manager.
n [Conditional] For VI workload domain upgrades, if you are upgrading by component rather than by workload domain, upgrade all NSX Global Managers in your estate now.
n Upgrade to NSX 4.2 (when NSX is deployed in the workload domain and is not using NSX Federation):
n Upgrade NSX to 4.2 using SDDC Manager.
n [Conditional] For VI workload domain upgrades, if you are upgrading by component rather than by workload domain, upgrade NSX across all VI workload domains now.
n Upgrade vCenter Server for VMware Cloud Foundation: [Conditional] When upgrading from VMware Cloud Foundation 4.5.x.
Table 27-8. Upgrade VxRail Manager and Management Domain vSphere clusters
n Upgrade vSAN Witness Host for VMware Cloud Foundation: [Conditional] If the vSphere cluster is a stretched vSAN cluster.
n Upgrade VxRail Manager and ESXi Hosts:
n Choose an approach based on your requirements.
n [Optional] If you are upgrading by component rather than by workload domain, upgrade vSphere clusters across all VI workload domains now.
n Update Licenses for a Workload Domain: [Conditional] If upgrading from a VMware Cloud Foundation version prior to 5.0. Update licenses for vSAN 8.x and vSphere 8.x.
n Upgrade vSphere Distributed Switch versions: [Optional] The upgrade lets the distributed switch take advantage of features that are available only in the later versions.
n Upgrade vSAN on-disk format versions:
n The upgrade lets the vSAN cluster take advantage of features that are available only in the later versions.
n The upgrade may cause temporary resynchronization traffic and use additional space by moving data or rebuilding object components to a new data structure.
n These updates can be performed at a time that is most convenient for your organization.
n Upgrade to NSX 4.2 (when NSX is deployed in the workload domain and is not using NSX Federation):
n Upgrade NSX to 4.2 using SDDC Manager.
n [Conditional] For VI workload domain upgrades, if you are upgrading by component rather than by workload domain, upgrade NSX across all VI workload domains now.
n Upgrade NSX Global Managers to 4.2 (when NSX is deployed in the workload domain with NSX Federation configured):
n [Conditional] If NSX Federation is present.
n Upgrade NSX Global Managers to 4.2 using the Global Manager UI.
n Upgrade the standby global manager, followed by the active global manager.
n [Conditional] For VI workload domain upgrades, if you are upgrading by component rather than by workload domain, upgrade all NSX Global Managers in your estate now.
n Upgrade vCenter Server for VMware Cloud Foundation: [Conditional] When upgrading from VMware Cloud Foundation 4.5.x.
Table 27-14. Upgrade VxRail Manager and VI Workload Domain vSphere clusters
n Upgrade vSAN Witness Host for VMware Cloud Foundation: [Conditional] If the vSphere cluster is a stretched vSAN cluster.
n Upgrade VxRail Manager and ESXi Hosts:
n Choose an approach based on your requirements.
n [Optional] If you are upgrading by component rather than by workload domain, upgrade vSphere clusters across all VI workload domains now.
n Update Licenses for a Workload Domain: [Conditional] If upgrading from a VMware Cloud Foundation version prior to 5.0. Update licenses for vSAN 8.x and vSphere 8.x.
n Upgrade vSphere Distributed Switch versions: [Optional] The upgrade lets the distributed switch take advantage of features that are available only in the later versions.
n Upgrade vSAN on-disk format versions:
n The upgrade lets the vSAN cluster take advantage of features that are available only in the later versions.
n The upgrade may cause temporary resynchronization traffic and use additional space by moving data or rebuilding object components to a new data structure.
n These updates can be performed at a time that is most convenient for your organization.
Until SDDC Manager is upgraded to version 5.2.x, you must upgrade the management domain
before you upgrade VI workload domains. Once SDDC Manager is at version 5.2 or later, you can
upgrade VI workload domains before or after upgrading the management domain, as long as all
components in the workload domain are compatible.
4 vCenter Server.
If you silence a vSAN Skyline Health alert in the vSphere Client, SDDC Manager skips the related
precheck and indicates which precheck it skipped. Click Restore Precheck to include the silenced
precheck. For example:
You can also silence failed vSAN prechecks in the SDDC Manager UI by clicking Silence
Precheck. Silenced prechecks do not trigger warnings or block upgrades.
Important You should only silence alerts if you know that they are incorrect. Do not silence
alerts for real issues that require remediation.
Procedure
2 On the Workload Domains page, click the workload domain where you want to run the
precheck.
3 On the domain summary page, click the Updates/Patches tab. The image below is a sample
screenshot and may not reflect the correct product versions.
Once the precheck begins, a message appears indicating the time at which the precheck was
started.
5 Click View Status to see detailed tasks and their status. The image below is a sample
screenshot and may not reflect the correct versions.
If a precheck task failed, fix the issue, and click Retry Precheck to run the task again. You can
also click Precheck Failed Resources to retry all failed tasks.
7 If the workload domain contains a host that includes pinned VMs, the precheck fails at the
Enter Maintenance Mode step. If the host can enter maintenance mode through vCenter
Server UI, you can suppress this check for NSX and ESXi in VMware Cloud Foundation by
following the steps below.
a Log in to SDDC Manager by using a Secure Shell (SSH) client with the user name vcf and
password you specified in the deployment parameter workbook.
lcm.nsxt.suppress.dry.run.emm.check=true
lcm.esx.suppress.dry.run.emm.check.failures=true
d Restart Lifecycle Management by typing the following command in the console window.
Results
The precheck result is displayed at the top of the Upgrade Precheck Details window. If you click
Exit Details, the precheck result is displayed at the top of the Precheck section in the Updates/
Patches tab.
Ensure that the precheck results are green before proceeding. A failed precheck may cause the
update to fail.
If you silence a vSAN Skyline Health alert in the vSphere Client, SDDC Manager skips the related
precheck and indicates which precheck it skipped. Click RESTORE PRECHECK to include the
silenced precheck. For example:
You can also silence failed vSAN prechecks in the SDDC Manager UI by clicking Silence
Precheck. Silenced prechecks do not trigger warnings or block upgrades.
Important Only silence alerts if you know that they are incorrect. Do not silence alerts for real
issues that require remediation.
Procedure
2 On the Workload Domains page, click the workload domain where you want to run the
precheck.
(The following image is a sample screenshot and may not reflect current product versions.)
Note It is recommended that you run a precheck on your workload domain prior to performing an upgrade.
4 Click RUN PRECHECK to select the components in the workload domain you want to
precheck.
a You can select to run a Precheck only on vCenter or the vSphere cluster. All components
in the workload domain are selected by default. To perform a precheck on certain
components, choose Custom selection.
Note For VMware Cloud Foundation on Dell EMC VxRail, you can run prechecks on
VxRail Manager.
b If there are pending upgrade bundles available, the Target Version drop-down menu contains General Upgrade Readiness and the available VMware Cloud Foundation versions to upgrade to. If there is an available VMware Cloud Foundation upgrade version, there are extra checks, such as bundle-level prechecks for hosts, vCenter Server, and so forth. The version-specific prechecks run only on components that have available upgrade bundles downloaded.
5 When the precheck begins, a progress message appears indicating the precheck progress
and the time when the precheck began.
Note Parallel precheck workflows are supported. If you want to precheck multiple domains,
you can repeat steps 1-5 for each of them without waiting for step 5 to finish.
6 Once the Precheck is complete, the report appears. Click through ALL, ERRORS,
WARNINGS, and SILENCED to filter and browse through the results.
If a precheck task failed, fix the issue, and click Retry Precheck to run the task again. You can
also click RETRY ALL FAILED RESOURCES to retry all failed tasks.
8 If the workload domain contains a host that includes pinned VMs, the precheck fails at the
Enter Maintenance Mode step. If the host can enter maintenance mode through vCenter
Server UI, you can suppress this check for NSX and ESXi in VMware Cloud Foundation by
following the steps below.
a Log in to SDDC Manager by using a Secure Shell (SSH) client with the user name vcf and
password.
lcm.nsxt.suppress.dry.run.emm.check=true
lcm.esx.suppress.dry.run.emm.check.failures=true
d Restart Lifecycle Management by typing the following command in the console window.
Results
The precheck result is displayed at the top of the Upgrade Precheck Details window. If you click
Exit Details, the precheck result is displayed at the top of the Precheck section in the Updates
tab.
Ensure that the precheck results are green before proceeding. Although a failed precheck will
not prevent the upgrade from proceeding, it may cause the update to fail.
After SDDC Manager is upgraded to 5.2 or later, new functionality is introduced that allows
you to upgrade SDDC Manager without having to upgrade the entire VMware Cloud Foundation
BOM. See Independent SDDC Manager Upgrade using the SDDC Manager UI.
Prerequisites
n Download the VMware Cloud Foundation update bundle for your target release. See
Downloading VMware Cloud Foundation Upgrade Bundles.
n Ensure you have a recent successful backup of SDDC Manager using an external SFTP server.
n Ensure you have recent successful backups of the components managed by SDDC Manager.
Procedure
2 On the Workload Domains page, click the management domain and then click the Updates
tab.
3 In the Available Updates section, select the target VMware Cloud Foundation release or click
Plan Upgrade.
The available options depend on the source version of VMware Cloud Foundation.
n For VMware Cloud Foundation 5.x, click Plan Upgrade, select a target version, and click
Confirm.
4 Click Update Now or Schedule Update next to the VMware Cloud Foundation Upgrade
bundle.
5 If you selected Schedule Update, select the date and time for the bundle to be applied and
click Schedule.
If you clicked Update Now, the VMware Cloud Foundation Update Status window displays
the components that will be upgraded and the upgrade status. Click View Update Activity
to view the detailed tasks. After the upgrade is completed, a green bar with a check mark is
displayed.
6 Click Finish.
When the update completes successfully, you are logged out of the SDDC Manager UI and
must log in again.
deployment aligns with the recommended configuration. This process includes reconciling the
configuration for 2nd party software components listed in the VMware Cloud Foundation Bill of
Materials (BOM).
Configuration updates may be required after you apply software updates. Once a configuration
update becomes available, you can apply it immediately or wait until after you have applied all
software updates. Configuration Updates must be performed during a maintenance window.
Note The Configuration Updates feature in VCF detects and reconciles to a prescribed configuration for the release. Once reconciled, it does not identify subsequent non-compliance arising from out-of-band changes.
The following configuration updates may become available, depending on your source version of
VMware Cloud Foundation:
Each configuration update is described by its name, a description, the VCF version in which it was introduced, the resource type, the update type, and the required minimum component versions.
Procedure
2 On the Workload Domains page, click the workload domain name and then click the Updates
tab.
5 Check the progress of a configuration update by clicking the task in the Tasks panel.
6 After the configuration updates are successfully applied, they will no longer appear in the
table.
If you had VMware Aria Suite Lifecycle, VMware Aria Operations for Logs, VMware Aria
Automation, VMware Aria Operations, or Workspace ONE Access in your pre-upgrade
environment, you must upgrade them from VMware Aria Suite Lifecycle.
You can upgrade VMware Aria Suite products as new versions become available in VMware
Aria Suite Lifecycle. VMware Aria Suite Lifecycle will only allow upgrades to compatible and
supported versions of VMware Aria Suite products.
Note See the VMware Interoperability Matrix for information about which versions are
supported with your version of VMware Cloud Foundation and KB 88829 for more information
about supported upgrade paths using VMware Aria Suite Lifecycle.
Important The VMware Cloud Foundation 5.2 BOM requires VMware Aria Suite Lifecycle 8.18 or
higher.
Note The VMware Aria Suite of products was formerly known as the vRealize Suite of products.
Procedure
Upgrade VMware Aria Suite Lifecycle first and then upgrade VMware Aria Suite products.
See “Upgrading VMware Aria Suite Lifecycle and VMware Aria Suite Products” in the VMware
Aria Suite Lifecycle Installation, Upgrade, and Management Guide for your current version of
VMware Aria Suite Lifecycle.
Procedure
1 Log in to the Broadcom Support Portal and browse to My Downloads > VMware NSX.
3 Locate the NSX version Upgrade Bundle and verify that the upgrade bundle filename
extension ends with .mub.
4 Click the download icon to download the upgrade bundle to the system where you access
the NSX Global Manager UI.
The upgrade coordinator guides you through the upgrade sequence. You can track the upgrade
process and, if necessary, you can pause and resume the upgrade process from the UI.
Procedure
4 Navigate to the upgrade bundle .mub file you downloaded or paste the download URL link.
n Click Browse to navigate to the location you downloaded the upgrade bundle file.
n Paste the VMware download portal URL where the upgrade bundle .mub file is located.
5 Click Upload.
7 Read and accept the EULA terms and accept the notification to upgrade the upgrade coordinator.
8 Click Run Pre-Checks to verify that all NSX components are ready for upgrade.
The pre-check checks for component connectivity, version compatibility, and component
status.
Prerequisites
Before you can upgrade NSX Global Managers, you must upgrade all VMware Cloud Foundation
instances in the NSX Federation, including NSX Local Managers, using SDDC Manager.
Procedure
3 Click Start to upgrade the management plane and then click Accept.
4 On the Select Upgrade Plan page, select Plan Your Upgrade and click Next.
The NSX Manager UI, API, and CLI are not accessible until the upgrade finishes and the
management plane is restarted.
Until SDDC Manager is upgraded to version 5.2, you must upgrade NSX in the management
domain before you upgrade NSX in a VI workload domain. Once SDDC Manager is at version
5.2 or later, you can upgrade NSX in VI workload domains before or after upgrading NSX in the
management domain.
n Upgrade Coordinator
n Host clusters
Procedure
2 On the Workload Domains page, click the domain you are upgrading and then click the
Updates/Patches tab.
When you upgrade NSX components for a selected VI workload domain, those components
are upgraded for all VI workload domains that share the NSX Manager cluster.
Note The NSX precheck runs on all VI workload domains in your environment that share the
NSX Manager cluster.
a In the Available Updates section, click Update Now or Schedule Update next to the
VMware Software Update for NSX.
b On the NSX Edge Clusters page, select the NSX Edge clusters you want to upgrade and
click Next.
By default, all NSX Edge clusters are upgraded. To select specific NSX Edge clusters,
select the Upgrade only NSX Edge clusters check box and select the Enable edge
selection option. Then select the NSX Edges you want to upgrade.
c On the Host Cluster page, select the host cluster you want to upgrade and click Next.
By default, all host clusters across all workload domains are upgraded. If you want
to select specific host clusters to upgrade, select Custom Selection. Host clusters are
upgraded after all Edge clusters have been upgraded.
Note The NSX Manager cluster is upgraded only if you select all host clusters. If you
have multiple host clusters and choose to upgrade only some of them, you must go
through the NSX upgrade wizard again until all host clusters have been upgraded.
d On the Upgrade Options dialog box, select the upgrade optimizations and click Next.
By default, Edge clusters and host clusters are upgraded in parallel. You can enable
sequential upgrade by selecting the relevant check box.
e If you selected the Schedule Upgrade option, specify the date and time for the NSX
bundle to be applied and click Next.
If you selected Upgrade Now, the NSX upgrade begins and the upgrade components
are displayed. The upgrade view displayed here pertains to the workload domain where
you applied the bundle. Click the link to the associated workload domains to see the
components pertaining to those workload domains. If you selected Schedule Upgrade,
the upgrade begins at the time and date you specified.
b On the NSX Edge Clusters page, select the NSX Edge clusters you want to upgrade and
click Next.
By default, all NSX Edge clusters are upgraded. To select specific NSX Edge clusters,
select the Upgrade only NSX Edge clusters check box and select the Enable edge
selection option. Then select the NSX Edges you want to upgrade.
c On the Host Cluster page, select the host cluster you want to upgrade and click Next.
By default, all host clusters across all workload domains are upgraded. If you want
to select specific host clusters to upgrade, select Custom Selection. Host clusters are
upgraded after all Edge clusters have been upgraded.
Note The NSX Manager cluster is upgraded only if you select all host clusters. If you
have multiple host clusters and choose to upgrade only some of them, you must go
through the NSX upgrade wizard again until all host clusters have been upgraded.
d On the Upgrade Options dialog box, select the upgrade optimizations and click Next.
By default ESXi hosts are placed into maintenance mode during an upgrade. Starting with
VMware Cloud Foundation 5.2.1, in-place upgrades are available for workload domains in
which all the clusters use vSphere Lifecycle Manager baselines. If NSX Manager is shared
between workload domains, in-place upgrade is only available if all the clusters in all
the workload domains that share the NSX Manager use vLCM baselines. If the option is
available, you can select In-place as the upgrade mode to avoid powering off and placing
hosts into maintenance mode before the upgrade.
Note To perform an in-place upgrade, the target NSX version must be the VMware
Cloud Foundation 5.2.1 BOM version or later.
By default, Edge clusters and host clusters are upgraded in parallel. You can enable
sequential upgrade by selecting the relevant check box.
e On the Review page, review your settings and click Run Precheck.
The precheck begins. Resolve any issues until the precheck succeeds.
f After the precheck succeeds, click Schedule Update and select an option.
6 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
If a component upgrade fails, the failure is displayed across all associated workload domains.
Resolve the issue and retry the failed task.
Results
When all NSX workload components are upgraded successfully, a message with a green
background and check mark is displayed.
Prerequisites
n Download the VMware vCenter Server upgrade bundle. See Downloading VMware Cloud
Foundation Upgrade Bundles.
n Take a file-based backup of the vCenter Server appliance before starting the upgrade. See
Manually Back Up vCenter Server.
Note After taking a backup, do not make any changes to the vCenter Server inventory or
settings until the upgrade completes successfully.
n If your workload domain contains Workload Management (vSphere with Tanzu) enabled
clusters, the supported target release depends on the version of Kubernetes (K8s) currently
running in the cluster. Older versions of K8s might require a specific upgrade sequence. See
KB 92227 for more information.
Procedure
2 On the Workload Domains page, click the domain you are upgrading and then click the
Updates tab.
a In the Available Updates section, click Update Now or Schedule Update next to the
VMware Software Update for vCenter Server.
b Click Confirm to confirm that you have taken a file-based backup of the vCenter Server
appliance before starting the upgrade.
c If you selected Schedule Update, click the date and time for the bundle to be applied and
click Schedule.
d If you are upgrading from VMware Cloud Foundation 4.5.x, enter the details for the
temporary network to be used only during the upgrade. The IP address must be in the
management subnet.
5 Upgrading to VMware Cloud Foundation 5.2.1 from VMware Cloud Foundation 5.x:
Option Description
vCenter Reduced Downtime Upgrade   The reduced downtime upgrade process uses a migration-based approach. In this approach, a new vCenter Server Appliance is deployed and the current vCenter data and configuration is copied to it. During the preparation phase of a reduced downtime upgrade, the source vCenter Server Appliance and all resources remain online. The only downtime occurs when the source vCenter Server Appliance is stopped, the configuration is switched over to the target vCenter, and the services are started. The downtime is expected to take approximately 5 minutes under ideal network, CPU, memory, and storage provisioning.
vCenter Regular Upgrade   During a regular upgrade, the vCenter Server Appliance is offline for the duration of the upgrade.
d For an RDU update, provide a temporary network to be used only during the upgrade
and click Next.
Option Description
Static   Enter an IP address, subnet mask, and gateway. The IP address must be in the management subnet.
Option Description
For vCenter Reduced Downtime Upgrade   Select scheduling options for the preparation and switchover phases of the upgrade.
Note If you are scheduling the switchover phase, you must allow a minimum of 4 hours between the start of preparation and the start of switchover.
6 Upgrading to VMware Cloud Foundation 5.2.1 from VMware Cloud Foundation 4.5.x:
b Enter the details for the temporary network to be used only during the upgrade. The IP
address must be in the management subnet.
7 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
8 After the upgrade is complete, remove the old vCenter Server appliance (if applicable).
Note Removing the old vCenter is only required for major upgrades. If you performed a
vCenter RDU patch upgrade, the old vCenter is automatically removed after a successful
upgrade.
If the upgrade fails, resolve the issue and retry the failed task. If you cannot resolve the issue,
restore vCenter Server using the file-based backup. See Restore vCenter Server. vCenter
RDU upgrades perform automatic rollback if the upgrade fails.
What to do next
Once the upgrade successfully completes, use the vSphere Client to change the vSphere DRS
Automation Level setting back to the original value (before you took a file-based backup) for
each vSphere cluster that is managed by the vCenter Server. See KB 87631 for information about
using VMware PowerCLI to change the vSphere DRS Automation Level.
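As a hedged illustration of the PowerCLI approach that KB 87631 describes, the following sketch sets the DRS automation level on each cluster managed by the vCenter Server; the FQDN and the FullyAutomated target level are assumptions and should be replaced with your original values.

# Connect to the workload domain vCenter Server (hypothetical FQDN).
Connect-VIServer -Server vcenter.vcf.example.com
# Restore the DRS automation level on every cluster managed by this vCenter Server.
Get-Cluster | Set-Cluster -DrsAutomationLevel FullyAutomated -Confirm:$false
Disconnect-VIServer -Confirm:$false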
By default, the upgrade process upgrades the ESXi hosts in all clusters in a workload domain in
parallel. If you have multiple clusters in the management domain or in a VI workload domain, you
can select the clusters to upgrade. You can also choose to upgrade the clusters in parallel or
sequentially.
If you are using external (non-vSAN) storage, the following procedure updates the ESXi hosts
attached to the external storage. However, updating and patching the storage software and
drivers is a manual task and falls outside of SDDC Manager lifecycle management. To ensure
supportability after an ESXi upgrade, consult the vSphere HCL and your storage vendor.
Prerequisites
n Download the VxRail upgrade bundle. See Downloading VMware Cloud Foundation Upgrade
Bundles.
n Ensure that the domain for which you want to perform cluster-level upgrade does not have
any hosts or clusters in an error state. Resolve the error state or remove the hosts and
clusters with errors before proceeding.
Procedure
If you selected Schedule Update, specify the date and time for the bundle to be applied.
The default setting is to upgrade all clusters. To upgrade specific clusters, click Enable
cluster-level selection and select the clusters to upgrade.
6 Click Next.
By default, the selected clusters are upgraded in parallel. If you selected more than five
clusters to be upgraded, the first five are upgraded in parallel and the remaining clusters are
upgraded sequentially. To upgrade all selected clusters sequentially, select Enable sequential
cluster upgrade.
Click Enable Quick Boot if desired. Quick Boot for ESXi hosts is an option that allows Update
Manager to reduce the upgrade time by skipping the physical reboot of the host.
8 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
What to do next
Upgrade the vSAN Disk Format for vSAN clusters. The disk format upgrade is optional. Your
vSAN cluster continues to run smoothly if you use a previous disk format version. For best
results, upgrade the objects to use the latest on-disk format. The latest on-disk format provides
the complete feature set of vSAN. See Upgrade vSAN on-disk format versions.
Prerequisites
Download the ESXi ISO that matches the version listed in the Bill of Materials (BOM) section of the VMware Cloud Foundation Release Notes.
Procedure
d Navigate to the ESXi ISO file you downloaded and click Open.
a On the Imported ISOs tab, select the ISO file that you imported, and click New baseline.
b Enter a name for the baseline and specify the Content Type as Upgrade.
c Click Next.
d Select the ISO file you had imported and click Next.
c Select the vSAN witness host and click the Updates tab.
d Under Attached Baselines, click Attach > Attach Baseline or Baseline Group.
e Select the baseline that you had created in step 3 and click Attach.
After the compliance check is completed, the Status column for the baseline is displayed
as Non-Compliant.
5 Remediate the vSAN witness host and update the ESXi hosts that it contains.
a Right-click the vSAN witness and click Maintenance Mode > Enter Maintenance Mode.
b Click OK.
d Select the baseline that you had created in step 3 and click Remediate.
e In the End user license agreement dialog box, select the check box and click OK.
f In the Remediate dialog box, select the vSAN witness host, and click Remediate.
The remediation process might take several minutes. After the remediation is completed,
the Status column for the baseline is displayed as Compliant.
g Right-click the vSAN witness host and click Maintenance Mode > Exit Maintenance Mode.
h Click OK.
Prerequisites
Procedure
1 On the vSphere Client Home page, click Networking and navigate to the distributed switch.
2 Right-click the distributed switch and select Upgrade > Upgrade Distributed Switch.
3 Select the vSphere Distributed Switch version that you want to upgrade the switch to and click Next.
Results
n The upgrade may cause temporary resynchronization traffic and use additional space by
moving data or rebuilding object components to a new data structure.
Prerequisites
n Verify that the disks are in a healthy state. Navigate to the Disk Management page to verify
the object status.
n Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not
place the hosts in maintenance mode.
n Verify that there are no component rebuilding tasks currently in progress in the vSAN cluster.
For information about vSAN resynchronization, see vSphere Monitoring and Performance
Procedure
4 Click Pre-check Upgrade. The upgrade pre-check analyzes the cluster to uncover any issues
that might prevent a successful upgrade. Some of the items checked are host status, disk
status, network status, and object status. Upgrade issues are displayed in the Disk pre-check
status text box.
5 Click Upgrade.
6 Click Yes on the Upgrade dialog box to perform the upgrade of the on-disk format.
Results
vSAN successfully upgrades the on-disk format. The On-disk Format Version column displays the disk format version of storage devices in the cluster.
You first add the new component license key to SDDC Manager. This must be done once per
license instance. You then apply the license key to the component on a per workload domain
basis.
Prerequisites
You need a new license key for vSAN 8.x and vSphere 8.x. Prior to VMware Cloud Foundation
5.1.1, you must add and update the component license key for each upgraded component in the
SDDC Manager UI as described below.
With VMware Cloud Foundation 5.1.1 and later, you can add a component license key as
described below, or add a solution license key in the vSphere Client. See Managing vSphere
Licenses for information about using a solution license key for VMware ESXi and vCenter Server.
If you are using a solution license key, you must also add a VMware vSAN license key for vSAN
clusters. See Configure License Settings for a vSAN Cluster.
Procedure
f Click Add.
b On the Workload Domains page, click the domain you are upgrading.
c On the Summary tab, expand the red error banner, and click Update Licenses.
f For each product, select a new license key from the list, select the entity to which the license key should be applied, and click Next.
g On the Review pane, review each license key and click Submit.
The new license keys will be applied to the workload domain. Monitor the task in the
Tasks pane in SDDC Manager.
1 NSX.
2 vCenter Server.
3 ESXi.
4 Workload Management on clusters that have vSphere with Tanzu. Workload Management
can be upgraded through vCenter Server. See Updating the vSphere with Tanzu
Environment.
5 If you suppressed the Enter Maintenance Mode prechecks for ESXi or NSX, delete the following lines from the /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties file and restart the LCM service (a sketch follows this list):
lcm.nsxt.suppress.dry.run.emm.check=true
lcm.esx.suppress.dry.run.emm.check.failures=true
6 If you have stretched clusters in your environment, upgrade the vSAN witness host. See
Upgrade vSAN Witness Host for VMware Cloud Foundation.
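The cleanup in step 5 can be scripted from an SSH session on the SDDC Manager appliance. The following is a sketch only: the sed commands remove the two suppression properties, and the restart command (systemctl restart lcm) is an assumption that is not taken from this guide, so confirm it for your release.

# Remove the Enter Maintenance Mode suppression properties added before the upgrade.
sed -i '/^lcm.nsxt.suppress.dry.run.emm.check=true$/d' /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
sed -i '/^lcm.esx.suppress.dry.run.emm.check.failures=true$/d' /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
# Restart the LCM service so the change takes effect (command assumed; verify for your release).
systemctl restart lcm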
Prerequisites
Procedure
2 On the Workload Domains page, click the workload domain you want to upgrade and click
the Updates tab.
4 On the Plan Upgrade for VMware Cloud Foundation screen, select the target version from
the drop-down, and click CONFIRM.
Caution You must upgrade all VI workload domains to VMware Cloud Foundation 5.x.
Upgrading to a higher 4.x release once the management domain has been upgraded to 5.x is
unsupported.
Note If the target version of VMware Cloud Foundation supports multiple versions of VxRail Manager, the drop-down menu includes separate entries for each combination.
Results
Bundles applicable to the chosen release will be made available to the VI workload domain.
If you silence a vSAN Skyline Health alert in the vSphere Client, SDDC Manager skips the related
precheck and indicates which precheck it skipped. Click RESTORE PRECHECK to include the
silenced precheck. For example:
You can also silence failed vSAN prechecks in the SDDC Manager UI by clicking Silence
Precheck. Silenced prechecks do not trigger warnings or block upgrades.
Important Only silence alerts if you know that they are incorrect. Do not silence alerts for real
issues that require remediation.
Procedure
2 On the Workload Domains page, click the workload domain where you want to run the
precheck.
(The following image is a sample screenshot and may not reflect current product versions.)
Note It is recommended that you run a precheck on your workload domain prior to performing an upgrade.
4 Click RUN PRECHECK to select the components in the workload domain you want to
precheck.
a You can select to run a Precheck only on vCenter or the vSphere cluster. All components
in the workload domain are selected by default. To perform a precheck on certain
components, choose Custom selection.
Note For VMware Cloud Foundation on Dell EMC VxRail, you can run prechecks on
VxRail Manager.
b If there are pending upgrade bundles available, the Target Version drop-down menu contains General Upgrade Readiness and the available VMware Cloud Foundation versions to upgrade to. If there is an available VMware Cloud Foundation upgrade version, there are extra checks, such as bundle-level prechecks for hosts, vCenter Server, and so forth. The version-specific prechecks run only on components that have available upgrade bundles downloaded.
5 When the precheck begins, a progress message appears indicating the precheck progress
and the time when the precheck began.
Note Parallel precheck workflows are supported. If you want to precheck multiple domains,
you can repeat steps 1-5 for each of them without waiting for step 5 to finish.
6 Once the Precheck is complete, the report appears. Click through ALL, ERRORS,
WARNINGS, and SILENCED to filter and browse through the results.
If a precheck task failed, fix the issue, and click Retry Precheck to run the task again. You can
also click RETRY ALL FAILED RESOURCES to retry all failed tasks.
8 If the workload domain contains a host that includes pinned VMs, the precheck fails at the
Enter Maintenance Mode step. If the host can enter maintenance mode through vCenter
Server UI, you can suppress this check for NSX and ESXi in VMware Cloud Foundation by
following the steps below.
a Log in to SDDC Manager by using a Secure Shell (SSH) client with the user name vcf and
password.
lcm.nsxt.suppress.dry.run.emm.check=true
lcm.esx.suppress.dry.run.emm.check.failures=true
d Restart Lifecycle Management by typing the following command in the console window.
Results
The precheck result is displayed at the top of the Upgrade Precheck Details window. If you click
Exit Details, the precheck result is displayed at the top of the Precheck section in the Updates
tab.
Ensure that the precheck results are green before proceeding. Although a failed precheck will
not prevent the upgrade from proceeding, it may cause the update to fail.
Procedure
1 Log in to the Broadcom Support Portal and browse to My Downloads > VMware NSX.
3 Locate the NSX version Upgrade Bundle and verify that the upgrade bundle filename
extension ends with .mub.
4 Click the download icon to download the upgrade bundle to the system where you access
the NSX Global Manager UI.
The upgrade coordinator guides you through the upgrade sequence. You can track the upgrade
process and, if necessary, you can pause and resume the upgrade process from the UI.
Procedure
4 Navigate to the upgrade bundle .mub file you downloaded or paste the download URL link.
n Click Browse to navigate to the location you downloaded the upgrade bundle file.
n Paste the VMware download portal URL where the upgrade bundle .mub file is located.
5 Click Upload.
7 Read and accept the EULA terms and accept the notification to upgrade the upgrade
coordinator.
8 Click Run Pre-Checks to verify that all NSX components are ready for upgrade.
The pre-check checks for component connectivity, version compatibility, and component
status.
Prerequisites
Before you can upgrade NSX Global Managers, you must upgrade all VMware Cloud Foundation
instances in the NSX Federation, including NSX Local Managers, using SDDC Manager.
Procedure
3 Click Start to upgrade the management plane and then click Accept.
4 On the Select Upgrade Plan page, select Plan Your Upgrade and click Next.
The NSX Manager UI, API, and CLI are not accessible until the upgrade finishes and the
management plane is restarted.
Until SDDC Manager is upgraded to version 5.2, you must upgrade NSX in the management
domain before you upgrade NSX in a VI workload domain. Once SDDC Manager is at version
5.2 or later, you can upgrade NSX in VI workload domains before or after upgrading NSX in the
management domain.
n Upgrade Coordinator
n Host clusters
Procedure
2 On the Workload Domains page, click the domain you are upgrading and then click the
Updates/Patches tab.
When you upgrade NSX components for a selected VI workload domain, those components
are upgraded for all VI workload domains that share the NSX Manager cluster.
Note The NSX precheck runs on all VI workload domains in your environment that share the
NSX Manager cluster.
a In the Available Updates section, click Update Now or Schedule Update next to the
VMware Software Update for NSX.
b On the NSX Edge Clusters page, select the NSX Edge clusters you want to upgrade and
click Next.
By default, all NSX Edge clusters are upgraded. To select specific NSX Edge clusters,
select the Upgrade only NSX Edge clusters check box and select the Enable edge
selection option. Then select the NSX Edges you want to upgrade.
c On the Host Cluster page, select the host cluster you want to upgrade and click Next.
By default, all host clusters across all workload domains are upgraded. If you want
to select specific host clusters to upgrade, select Custom Selection. Host clusters are
upgraded after all Edge clusters have been upgraded.
Note The NSX Manager cluster is upgraded only if you select all host clusters. If you
have multiple host clusters and choose to upgrade only some of them, you must go
through the NSX upgrade wizard again until all host clusters have been upgraded.
d On the Upgrade Options dialog box, select the upgrade optimizations and click Next.
By default, Edge clusters and host clusters are upgraded in parallel. You can enable
sequential upgrade by selecting the relevant check box.
e If you selected the Schedule Upgrade option, specify the date and time for the NSX
bundle to be applied and click Next.
If you selected Upgrade Now, the NSX upgrade begins and the upgrade components
are displayed. The upgrade view displayed here pertains to the workload domain where
you applied the bundle. Click the link to the associated workload domains to see the
components pertaining to those workload domains. If you selected Schedule Upgrade,
the upgrade begins at the time and date you specified.
b On the NSX Edge Clusters page, select the NSX Edge clusters you want to upgrade and
click Next.
By default, all NSX Edge clusters are upgraded. To select specific NSX Edge clusters,
select the Upgrade only NSX Edge clusters check box and select the Enable edge
selection option. Then select the NSX Edges you want to upgrade.
c On the Host Cluster page, select the host cluster you want to upgrade and click Next.
By default, all host clusters across all workload domains are upgraded. If you want
to select specific host clusters to upgrade, select Custom Selection. Host clusters are
upgraded after all Edge clusters have been upgraded.
Note The NSX Manager cluster is upgraded only if you select all host clusters. If you
have multiple host clusters and choose to upgrade only some of them, you must go
through the NSX upgrade wizard again until all host clusters have been upgraded.
d On the Upgrade Options dialog box, select the upgrade optimizations and click Next.
By default, ESXi hosts are placed into maintenance mode during an upgrade. Starting with
VMware Cloud Foundation 5.2.1, in-place upgrades are available for workload domains in
which all the clusters use vSphere Lifecycle Manager baselines. If NSX Manager is shared
between workload domains, in-place upgrade is only available if all the clusters in all
the workload domains that share the NSX Manager use vLCM baselines. If the option is
available, you can select In-place as the upgrade mode to avoid powering off and placing
hosts into maintenance mode before the upgrade.
Note To perform an in-place upgrade, the target NSX version must be the VMware
Cloud Foundation 5.2.1 BOM version or later.
By default, Edge clusters and host clusters are upgraded in parallel. You can enable
sequential upgrade by selecting the relevant check box.
e On the Review page, review your settings and click Run Precheck.
The precheck begins. Resolve any issues until the precheck succeeds.
f After the precheck succeeds, click Schedule Update and select an option.
6 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
If a component upgrade fails, the failure is displayed across all associated workload domains.
Resolve the issue and retry the failed task.
Results
When all NSX workload components are upgraded successfully, a message with a green
background and check mark is displayed.
Prerequisites
n Download the VMware vCenter Server upgrade bundle. See Downloading VMware Cloud
Foundation Upgrade Bundles.
n Take a file-based backup of the vCenter Server appliance before starting the upgrade. See
Manually Back Up vCenter Server.
Note After taking a backup, do not make any changes to the vCenter Server inventory or
settings until the upgrade completes successfully.
n If your workload domain contains Workload Management (vSphere with Tanzu) enabled
clusters, the supported target release depends on the version of Kubernetes (K8s) currently
running in the cluster. Older versions of K8s might require a specific upgrade sequence. See
KB 92227 for more information.
Procedure
2 On the Workload Domains page, click the domain you are upgrading and then click the
Updates tab.
a In the Available Updates section, click Update Now or Schedule Update next to the
VMware Software Update for vCenter Server.
b Click Confirm to confirm that you have taken a file-based backup of the vCenter Server
appliance before starting the upgrade.
c If you selected Schedule Update, click the date and time for the bundle to be applied and
click Schedule.
d If you are upgrading from VMware Cloud Foundation 4.5.x, enter the details for the
temporary network to be used only during the upgrade. The IP address must be in the
management subnet.
5 Upgrading to VMware Cloud Foundation 5.2.1 from VMware Cloud Foundation 5.x:
Option Description
vCenter Reduced Downtime Upgrade   The reduced downtime upgrade process uses a migration-based approach. In this approach, a new vCenter Server Appliance is deployed and the current vCenter data and configuration is copied to it. During the preparation phase of a reduced downtime upgrade, the source vCenter Server Appliance and all resources remain online. The only downtime occurs when the source vCenter Server Appliance is stopped, the configuration is switched over to the target vCenter, and the services are started. The downtime is expected to take approximately 5 minutes under ideal network, CPU, memory, and storage provisioning.
vCenter Regular Upgrade   During a regular upgrade, the vCenter Server Appliance is offline for the duration of the upgrade.
d For a vCenter Reduced Downtime Upgrade (RDU), provide a temporary network to be used only during the upgrade
and click Next.
Option Description
Static Enter an IP address, subnet mask, and gateway. The IP address must be
in the management subnet.
Option Description
For vCenter Reduced Downtime Upgrade   Select scheduling options for the preparation and switchover phases of the upgrade.
Note If you are scheduling the switchover phase, you must allow a
minimum of 4 hours between the start of preparation and the start of
switchover.
6 Upgrading to VMware Cloud Foundation 5.2.1 from VMware Cloud Foundation 4.5.x:
b Enter the details for the temporary network to be used only during the upgrade. The IP
address must be in the management subnet.
7 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
8 After the upgrade is complete, remove the old vCenter Server appliance (if applicable).
Note Removing the old vCenter is only required for major upgrades. If you performed a
vCenter RDU patch upgrade, the old vCenter is automatically removed after a successful
upgrade.
If the upgrade fails, resolve the issue and retry the failed task. If you cannot resolve the issue,
restore vCenter Server using the file-based backup. See Restore vCenter Server. vCenter
RDU upgrades perform automatic rollback if the upgrade fails.
What to do next
Once the upgrade successfully completes, use the vSphere Client to change the vSphere DRS
Automation Level setting back to the original value (before you took a file-based backup) for
each vSphere cluster that is managed by the vCenter Server. See KB 87631 for information about
using VMware PowerCLI to change the vSphere DRS Automation Level.
By default, the upgrade process upgrades the ESXi hosts in all clusters in a workload domain in
parallel. If you have multiple clusters in the management domain or in a VI workload domain, you
can select the clusters to upgrade. You can also choose to upgrade the clusters in parallel or
sequentially.
If you are using external (non-vSAN) storage, the following procedure updates the ESXi hosts
attached to the external storage. However, updating and patching the storage software and
drivers is a manual task and falls outside of SDDC Manager lifecycle management. To ensure
supportability after an ESXi upgrade, consult the vSphere HCL and your storage vendor.
Prerequisites
n Download the VxRail upgrade bundle. See Downloading VMware Cloud Foundation Upgrade
Bundles.
n Ensure that the domain for which you want to perform cluster-level upgrade does not have
any hosts or clusters in an error state. Resolve the error state or remove the hosts and
clusters with errors before proceeding.
Procedure
If you selected Schedule Update, specify the date and time for the bundle to be applied.
The default setting is to upgrade all clusters. To upgrade specific clusters, click Enable
cluster-level selection and select the clusters to upgrade.
6 Click Next.
By default, the selected clusters are upgraded in parallel. If you selected more than five
clusters to be upgraded, the first five are upgraded in parallel and the remaining clusters are
upgraded sequentially. To upgrade all selected clusters sequentially, select Enable sequential
cluster upgrade.
Click Enable Quick Boot if desired. Quick Boot for ESXi hosts is an option that allows Update
Manager to reduce the upgrade time by skipping the physical reboot of the host.
8 Monitor the upgrade progress. See Monitor VMware Cloud Foundation Updates.
What to do next
Upgrade the vSAN Disk Format for vSAN clusters. The disk format upgrade is optional. Your
vSAN cluster continues to run smoothly if you use a previous disk format version. For best
results, upgrade the objects to use the latest on-disk format. The latest on-disk format provides
the complete feature set of vSAN. See Upgrade vSAN on-disk format versions.
Prerequisites
Download the ESXi ISO that matches the version listed in the Bill of Materials (BOM) section
of the VMware Cloud Foundation Release Notes.
Procedure
d Navigate to the ESXi ISO file you downloaded and click Open.
a On the Imported ISOs tab, select the ISO file that you imported, and click New baseline.
b Enter a name for the baseline and specify the Content Type as Upgrade.
c Click Next.
d Select the ISO file you had imported and click Next.
c Select the vSAN witness host and click the Updates tab.
d Under Attached Baselines, click Attach > Attach Baseline or Baseline Group.
e Select the baseline that you had created in step 3 and click Attach.
After the compliance check is completed, the Status column for the baseline is displayed
as Non-Compliant.
5 Remediate the vSAN witness host and update the ESXi hosts that it contains.
a Right-click the vSAN witness and click Maintenance Mode > Enter Maintenance Mode.
b Click OK.
d Select the baseline that you had created in step 3 and click Remediate.
e In the End user license agreement dialog box, select the check box and click OK.
f In the Remediate dialog box, select the vSAN witness host, and click Remediate.
The remediation process might take several minutes. After the remediation is completed,
the Status column for the baseline is displayed as Compliant.
g Right-click the vSAN witness host and click Maintenance Mode > Exit Maintenance Mode.
h Click OK.
Prerequisites
Procedure
1 On the vSphere Client Home page, click Networking and navigate to the distributed switch.
2 Right-click the distributed switch and select Upgrade > Upgrade Distributed Switch.
3 Select the vSphere Distributed Switch version that you want to upgrade the switch to and
click Next.
Results
n The upgrade may cause temporary resynchronization traffic and use additional space by
moving data or rebuilding object components to a new data structure.
Prerequisites
n Verify that the disks are in a healthy state. Navigate to the Disk Management page to verify
the object status.
n Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not
place the hosts in maintenance mode.
n Verify that there are no component rebuilding tasks currently in progress in the vSAN cluster.
For information about vSAN resynchronization, see vSphere Monitoring and Performance
Procedure
4 Click Pre-check Upgrade. The upgrade pre-check analyzes the cluster to uncover any issues
that might prevent a successful upgrade. Some of the items checked are host status, disk
status, network status, and object status. Upgrade issues are displayed in the Disk pre-check
status text box.
5 Click Upgrade.
6 Click Yes on the Upgrade dialog box to perform the upgrade of the on-disk format.
Results
vSAN successfully upgrades the on-disk format. The On-disk Format Version column displays the
disk format version of storage devices in the cluster.
You first add the new component license key to SDDC Manager. This must be done once per
license instance. You then apply the license key to the component on a per workload domain
basis.
Prerequisites
You need a new license key for vSAN 8.x and vSphere 8.x. Prior to VMware Cloud Foundation
5.1.1, you must add and update the component license key for each upgraded component in the
SDDC Manager UI as described below.
With VMware Cloud Foundation 5.1.1 and later, you can add a component license key as
described below, or add a solution license key in the vSphere Client. See Managing vSphere
Licenses for information about using a solution license key for VMware ESXi and vCenter Server.
If you are using a solution license key, you must also add a VMware vSAN license key for vSAN
clusters. See Configure License Settings for a vSAN Cluster.
Procedure
f Click Add.
b On the Workload Domains page, click the domain you are upgrading.
c On the Summary tab, expand the red error banner, and click Update Licenses.
f For each product, select a new license key from the list, select the entity to which the
license key should be applied, and click Next.
g On the Review pane, review each license key and click Submit.
The new license keys will be applied to the workload domain. Monitor the task in the
Tasks pane in SDDC Manager.
You can upgrade SDDC Manager without upgrading the full VCF BOM when:
n The target version of SDDC Manager is compatible with all the BOM product versions running
in your current environment (management and workload domains).
n There is a supported upgrade path from your current SDDC Manager version to the target
SDDC Manager version.
Note You can use the SDDC Manager upgrade functionality to upgrade SDDC Manager even
when the target version of SDDC Manager is part of a full VCF BOM release, as long as it is
compatible.
Updating SDDC Manager without upgrading the full VCF BOM does not change the version of
the management domain.
Prerequisites
n Download the SDDC Manager bundle. See Downloading VMware Cloud Foundation Upgrade
Bundles.
Procedure
The UI displays available SDDC Manager updates that are either SDDC Manager only updates
or SDDC Manager updates that are part of a full VCF BOM update.
4 Schedule the update to run now or at a specific time and click Start Update.
When the update completes successfully, you are logged out of the SDDC Manager UI and
must log in again.
You can use the upgrade planner to select any supported version for each of the VMware Cloud
Foundation BOM components. This includes async patch versions as well as VCF BOM versions.
To plan an upgrade when SDDC Manager does not have internet access, see Offline Download of
Flexible BOM Upgrade Bundles.
Prerequisites
n Download the bundles for the target versions of each VCF component. See Downloading
VMware Cloud Foundation Upgrade Bundles.
Procedure
2 On the Workload Domains page, click the domain you are upgrading and then click the
Updates tab.
4 In the Available Updates section, click Plan Upgrade to create a new upgrade plan or select Edit
Upgrade Plan from the Actions menu to modify an existing upgrade plan.
5 Select the target version of VMware Cloud Foundation and VxRail Manager from the drop-
down menu and click Next.
6 Click Customize Upgrade to select specific target versions for each VCF BOM component.
7 Use the drop-down menus in the Target Version column to select a target version for each
component and then click Validate Selection.
9 Review the update sequence based on your target version selections and click Done.
10 In the Available Updates screen, click Schedule Update or Update Now to update the first
component.
Continue to update the VCF BOM components until they are all updated.
Note If SDDC Manager does not have internet access, you need to perform additional steps
before you can start updating. See Offline Download of Flexible BOM Upgrade Bundles.
The patch planner provides the ability to apply async patches to workload domain components.
If you are connected to the online depot, async patches are available in the patch planner. If
you do not have access to the online depot, use the Bundle Transfer Utility to download async
patches and add them to an offline depot or upload them directly to SDDC Manager.
Prerequisites
n Download the async patch bundles. See Downloading VMware Cloud Foundation Upgrade
Bundles.
n SDDC Manager must be version 5.2 or later. See Apply the VMware Cloud Foundation 5.2.x
Upgrade Bundle.
Procedure
2 On the Workload Domains page, click the domain you are patching and then click the
Updates tab.
4 In the Available Updates section, click Plan Patching to create a new patching plan or select
Edit Patching Plan from the Actions menu to modify an existing patching plan.
Note You cannot plan patching if you have an existing upgrade plan. Cancel the upgrade
plan to create a patching plan.
5 Select the components to patch and the target versions and then click Validate Selection.
Note When you select a target vCenter version, the UI indicates which versions support
vCenter Reduced Downtime Upgrade (RDU).
7 Review the update sequence based on your target version selections and click Done.
8 In the Available Updates screen, click Schedule Update or Update Now to update the first
component.
Continue to update the VCF BOM components until they are all updated.
Starting with VMware Cloud Foundation 5.1, the format of the vCenter Server bundle is changed.
The new unified bundle contains both the .iso and .zip files for the target vCenter
Server build and can be used for both major and minor vCenter Server upgrades.
SDDC Manager must be at version 5.1 or later to understand the new format and run
the prechecks. Because VMware Cloud Foundation 5.0.0.0 does not understand the format, the bundle
precheck fails.
Procedure
u Upgrade the SDDC Manager to VMware Cloud Foundation 5.1.0.0 and run the on-demand
prechecks for vCenter Server in VMware Cloud Foundation 5.1.0.0.
Results
https://kb.vmware.com/s/article/94862
Problem
SDDC Manager Pre-check "Upgrade Bundle Download Status" fails with an error
Cause
Starting with VMware Cloud Foundation 5.1, the Config Drift bundle is deprecated.
However, previously released VCF versions expect a config drift bundle to be applied
as part of a target release and therefore report this as a precheck failure.
Solution
This pre-check failure can be ignored for VCF 5.1+, and it is safe to proceed with the upgrade
despite this bundle pre-check failure.
Example
https://kb.vmware.com/s/article/94271
This is unlikely for customers who have started in a greenfield environment in VMware Cloud
Foundation 4.x and have not performed any modifications to the SDDC Manager. This has
only been seen so far on environments in which the customer has started on VMware Cloud
Foundation 3.x.
Problem
Cause
RPMs might have been left behind by previous upgrades or greenfield deployments, or a user has
implicitly or explicitly installed RPMs that prevent the upgrade.
Procedure
The workaround is to manually uninstall the RPMs that are causing the upgrade conflict.
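As an illustration only, a conflicting package can be located and removed with standard rpm commands on the SDDC Manager appliance. The package name below is hypothetical; follow the referenced KB article for the exact packages to remove.
# List installed packages and locate the conflicting package reported by the upgrade (name is hypothetical)
rpm -qa | grep conflicting-package
# Remove the conflicting package
rpm -e conflicting-package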
Example
https://kb.vmware.com/s/article/95047
Problem
A warning message with an empty product list appears in the plan upgrade wizard:
n "Unable to verify the compatibility for the following product versions. Please check the
product documentation before proceeding to upgrade:"
Solution
Example
https://kb.vmware.com/s/article/95409
Problem
An incorrect number of available licenses is displayed in the 'Available quantity' field when
assigning a license. Along with the incorrect quantity, an error alert might also be displayed.
Cause
A miscalculation in the code for the number of available licenses is causing the error alert to
appear.
Solution
Users can ignore the incorrect license count in the 'Available quantity' field
when assigning the license. The error alert can also be ignored because it does not prevent the user
from moving forward. Users can proceed with adding a license even when the error alert appears.
If there are sufficient licenses available, the operation will succeed.
Example
https://kb.vmware.com/s/article/95128
vCenter Troubleshooting
A library of vCenter troubleshooting processes that may be referenced during upgrade as
appropriate.
Reuse of a temporary IP address causes an ARP cache issue. Reset the ARP cache on the
management domain vCenter Server.
Customers who have fewer temporary IP addresses than vCenter Server instances conducting a
parallel upgrade have the highest likelihood of impact.
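One possible way to reset the ARP cache, assuming root shell access to the management domain vCenter Server appliance; consult the referenced KB article for the supported procedure.
# Flush all cached neighbor (ARP) entries on the vCenter Server appliance
ip neigh flush all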
Procedure
You follow a strict order and steps for shutdown and startup of the VMware Cloud Foundation
management components.
You shut down the customer workloads and the management components for the VI workload
domains before you shut down the components for the management domain.
If the VMware NSX® Manager™ cluster and VMware NSX® Edge™ cluster are shared with other
VI workload domains, shut down the NSX Manager and NSX Edge clusters as part of the
shutdown of the first VI workload domain.
Prerequisites
n Verify that the management virtual machines are not running on snapshots.
n If a vSphere Storage APIs for Data Protection (VADP) based backup solution is running on
the management clusters, verify that the solution is properly shut down by following the
vendor guidance.
n To reduce the startup time before you shut down the management virtual machines, migrate
the VMware vCenter Server® instance for the management domain to the first VMware
ESXi™ host in the default management cluster in the management domain.
n Shut Down a Virtual Infrastructure Workload Domain with vSphere with Tanzu
You shut down the components of a VI workload domain that runs containerized workloads
in VMware Cloud Foundation in a specific order to keep components operational by
maintaining the necessary infrastructure, networking, and management services as long as
possible before shutdown.
You shut down the management components for the VI workload domains before you shut down
the components for the management domain.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
1 Shut down the customer workloads in all VI workload domains that share the VMware
NSX® instance. Otherwise, all NSX networking services in the customer workloads will be
interrupted when you shut down NSX.
2 Shut down the VI workload domain that runs the shared NSX Edge nodes.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine for the management domain or VI workload domain
and select Power > Shut down Guest OS.
5 Repeat the steps for the remaining NSX Edge nodes for the domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Shut Down vSphere Cluster Services Virtual Machines, VxRail Manager, VMware
vSAN, and ESXi Hosts
To shut down the vSphere Cluster Services (vCLS) virtual machines, VxRail Manager, VMware
vSAN, and ESXi hosts in a workload domain cluster, you use the VxRail plugin in the vSphere
Client.
Procedure
2 In the Hosts and Clusters inventory, expand the tree of the workload domain vCenter Server
and expand the data center for the workload domain.
3 Right-click a cluster, select VxRail-Shutdown, and follow the prompts to shut down the
cluster.
Prerequisites
Verify that all ESXi hosts in all clusters are stopped and are disconnected.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
a Locate the vCenter Server virtual machine for the VI workload domain.
b Right-click the virtual machine and select Power > Shut down Guest OS.
You shut down the management components for the VI workload domains that run vSphere with
Tanzu and containers or that run virtualized workloads before you shut down the components for
the management domain.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
1 Shut down the customer workloads in all VI workload domains that share the NSX instance.
Otherwise, all NSX networking services in the customer workloads will be interrupted when
you shut down NSX.
2 Shut down the VI workload domain that runs the shared NSX Edge nodes.
Table 28-2. Shutdown Order for a VI Workload Domain with vSphere with Tanzu (continued)
11 VxRail Manager *
Find Out the Location of the vSphere with Tanzu Virtual Machines on the ESXi
Hosts
Before you begin shutting down a VI workload domain with vSphere with Tanzu, you get a
mapping between virtual machines in the workload domain and the ESXi hosts on which they
are deployed. You later use this mapping to log in to specific ESXi hosts and shut down specific
management virtual machines.
Procedure
1 Start PowerShell.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter
Server and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and Clusters inventory, select the vCenter Server instance and click the Configure
tab.
If the property is not present, add it. You cannot delete the entry for the cluster from the
vSphere Client later, but keeping this entry is not an issue.
8 Click Save.
Results
The vCLS monitoring service initiates the clean-up of vCLS VMs. If vSphere DRS is activated for
the cluster, it stops working and you see an additional warning in the cluster summary. vSphere
DRS remains deactivated until vCLS is re-activated on this cluster.
Shut Down vCenter Server for a Virtual Infrastructure Workload Domain with
vSphere with Tanzu
To shut down the vCenter Server instance for a VI workload domain with vSphere with Tanzu in
VMware Cloud Foundation, you use the vSphere Client. You stop the Kubernetes services and
check the vSAN health status.
Procedure
1 Shut down the Kubernetes services on the workload domain vCenter Server.
vmon-cli -k wcp
vmon-cli -s wcp
b In the left pane, navigate to vSAN > Skyline health, and verify the status of each vSAN
health check category under Health findings and that the cluster health score is 100%.
c In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
4 If a vSAN cluster in the workload domain has vSphere HA turned on, stop vSphere HA to
avoid vSphere HA initiated migrations of virtual machines after vSAN is partitioned during the
shutdown process.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, turn off vSphere HA and click OK.
6 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
a Locate the vCenter Server virtual machine for the VI workload domain.
b Right-click the virtual machine and select Power > Shut down Guest OS.
Shut Down the NSX Edge Nodes for vSphere with Tanzu
You begin shutting down the NSX infrastructure in a VI workload domain with vSphere with
Tanzu by shutting down the NSX Edge nodes that provide north-south traffic connectivity
between the physical data center networks and the NSX SDN networks.
Because the vCenter Server instance for the domain is already down, you shut down the NSX
Edge nodes from the ESXi hosts where they are running.
Procedure
1 Log in to the ESXi host that runs the first NSX Edge node as root by using the VMware Host
Client.
3 Right-click an NSX Edge virtual machine, and select Guest OS > Shut down
5 Repeat these steps to shut down the remaining NSX Edge nodes for the VI workload domain
with vSphere with Tanzu.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Shut Down the VxRail Manager Virtual Machine in a VI Workload Domain with
vSphere with Tanzu
Because the vCenter Server instance for the VI workload domain is already down, you shut down
the VxRail Manager virtual machine from the ESXi host on which it is running.
Procedure
1 Using the VMware Host Client, log in as root to the ESXi host that runs the VxRail Manager
virtual machine.
3 Right-click the VxRail Manager virtual machine and select Guest OS > Shut down.
Shut Down vSAN and the ESXi Hosts in a Virtual Infrastructure Workload
Domain with vSphere with Tanzu
You shut down vSAN and the ESXi hosts in a VI workload domain with vSphere with Tanzu
by preparing the vSAN cluster for shutdown, placing each ESXi host in maintenance mode to
prevent any virtual machines being deployed to or starting up on the host, and shutting down the
host.
In a VI workload domain with vSphere with Tanzu, the vCenter Server instance for the domain
is already down. Hence, you perform the shutdown operation on the ESXi hosts by using the
VMware Host Client.
Procedure
1 Turn on SSH on the ESXi hosts in the workload domain by using the SoS utility of the SDDC
Manager appliance.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
2 Log in to the first ESXi host in the workload domain cluster by using a Secure Shell (SSH)
client as root.
3 For a vSAN cluster, deactivate vSAN cluster member updates by running the command.
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
5 On the first ESXi host per vSAN cluster, prepare the vSAN cluster for shutdown by running
the command.
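A minimal sketch of the host-side commands in this procedure follows. The SoS utility path, its SSH flag, and the vSAN reboot helper script path are assumptions to verify against your release.
# On the SDDC Manager appliance, as root, enable SSH on the ESXi hosts with the SoS utility (path and flag are assumptions)
/opt/vmware/sddc-support/sos --enable-ssh-esxi
# On each ESXi host, as root, deactivate vSAN cluster member updates
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
# On the first ESXi host in each vSAN cluster, prepare the cluster for shutdown (script path is an assumption)
python /usr/lib/vmware/vsan/bin/reboot_helper.py prepare
# Place each host into maintenance mode without evacuating data, then shut the host down from the VMware Host Client
esxcli system maintenanceMode set --enable true --vsanmode noAction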
8 Repeat Step 6 and Step 7 on the remaining hosts in the workload domain cluster.
a Log in to the first ESXi host for the cluster at https://<esxi_host_fqdn>/ui as root.
b In the navigation pane, right-click Host and, from the drop-down menu, select Shut down.
After you shut down the components in all VI workload domains, you begin shutting down the
management domain.
Note If your VMware Cloud Foundation instance is deployed with the consolidated architecture,
shut down any customer workloads or additional virtual machines in the management domain
before you proceed with the shutdown order of the management components.
You shut down Site Recovery Manager and vSphere Replication after you shut down the
management components that can be failed over between the VMware Cloud Foundation
instances. You also shut Site Recovery Manager and vSphere Replication down as late as
possible to have the management virtual machines protected as long as possible if a disaster
event occurs. The virtual machines in the paired VMware Cloud Foundation instance become
unprotected after you shut down Site Recovery Manager and vSphere Replication in the current
VMware Cloud Foundation instance.
You shut down VMware Aria Operations for Logs as late as possible to collect as much log
data as possible for potential troubleshooting. You shut down the Workspace ONE Access™ instances after
the management components for which they provide identity and access management services.
10 SDDC Manager *
11 VxRail Manager *
Save the Credentials for the ESXi Hosts and vCenter Server for the Management
Domain
Before you shut down the management domain, get the credentials for the management domain
hosts and vCenter Server from SDDC Manager and save them. You need these credentials to
shut down the ESXi hosts and then to start them and vCenter Server back up. Because SDDC
Manager is down during each of these operations, you must save the credentials in advance.
To get the credentials, log in to the SDDC Manager appliance by using a Secure Shell (SSH) client
as vcf and run the lookup_passwords command.
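For example, from a system with SSH access to the SDDC Manager appliance (the FQDN placeholder is illustrative):
# Log in to the SDDC Manager appliance as the vcf user
ssh vcf@<sddc_manager_fqdn>
# Display the credentials for the management domain ESXi hosts and vCenter Server
lookup_passwords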
Procedure
5 In the VMware Identity Manager section, click the horizontal ellipsis icon and select Power
off.
6 In the Power off VMware Identity Manager dialog box, click Submit.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the VMware Aria Suite Lifecycle virtual machine and select Power > Shut down
Guest OS.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine for the management domain or VI workload domain
and select Power > Shut down Guest OS.
5 Repeat the steps for the remaining NSX Edge nodes for the domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the primary NSX manager virtual machine and select Power > Shut down Guest
OS.
5 Repeat the steps for the remaining NSX Manager virtual machines.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the SDDC Manager virtual machine and click Power > Shut down Guest OS.
Shut Down the VxRail Manager Virtual Machine in the Management Domain
Shut down the VxRail Manager virtual machine in the management domain by using the vSphere
Client.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the VxRail Manager virtual machine and click Power > Shut down Guest OS.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the Skyline Health Diagnostics virtual machine and select Power > Shutdown
Guest OS.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter
Server and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and Clusters inventory, select the vCenter Server instance and click the Configure
tab.
If the property is not present, add it. You cannot delete the entry for the cluster from the
vSphere Client later, but keeping this entry is not an issue.
8 Click Save.
Results
The vCLS monitoring service initiates the clean-up of vCLS VMs. If vSphere DRS is activated for
the cluster, it stops working and you see an additional warning in the cluster summary. vSphere
DRS remains deactivated until vCLS is re-activated on this cluster.
To shut down the management domain vCenter Server, it must be running on the first
management ESXi host in the default management cluster.
Caution Before you shut down vCenter Server, migrate any virtual machines that are running
infrastructure services like Active Directory, NTP, DNS and DHCP servers in the management
domain to the first management host by using the vSphere Client. You can shut them down from
the first ESXi host after you shut down vCenter Server.
Procedure
2 In the Hosts and clusters inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Set the vSphere DRS automation level of the management cluster to manual to prevent
vSphere DRS from migrating the vCenter Server appliance.
a Select the default management cluster and click the Configure tab.
b In the left pane, select Services > vSphere DRS and click Edit.
c In the Edit cluster settings dialog box, click the Automation tab, and, from the drop-down
menu, in the Automation level section, select Manual
d Click OK.
4 If the management domain vCenter Server is not running on the first ESXi host in the default
management cluster, migrate it there.
a Select the default management cluster and click the Monitor tab.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
6 Stop vSphere HA to avoid vSphere HA initiated migrations of virtual machines after vSAN is
partitioned during the shutdown process.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, deactivate vSphere HA and click OK.
9 Right-click the management domain vCenter Server and select Guest OS > Shut down.
Shut Down vSAN and the ESXi Hosts in a Virtual Infrastructure Workload
Domain with vSphere with Tanzu
You shut down vSAN and the ESXi hosts in a VI workload domain with vSphere with Tanzu
by preparing the vSAN cluster for shutdown, placing each ESXi host in maintenance mode to
prevent any virtual machines being deployed to or starting up on the host, and shutting down the
host.
In a VI workload domain with vSphere with Tanzu, the vCenter Server instance for the domain
is already down. Hence, you perform the shutdown operation on the ESXi hosts by using the
VMware Host Client.
Procedure
1 Turn on SSH on the ESXi hosts in the workload domain by using the SoS utility of the SDDC
Manager appliance.
a Log in to the SDDC Manager appliance by using a Secure Shell (SSH) client as vcf.
b Switch to the root user by running the su command and entering the root password.
2 Log in to the first ESXi host in the workload domain cluster by using a Secure Shell (SSH)
client as root.
3 For a vSAN cluster, deactivate vSAN cluster member updates by running the command.
esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
5 On the first ESXi host per vSAN cluster, prepare the vSAN cluster for shutdown by running
the command.
8 Repeat Step 6 and Step 7 on the remaining hosts in the workload domain cluster.
a Log in to the first ESXi host for the cluster at https://<esxi_host_fqdn>/ui as root.
b In the navigation pane, right-click Host and, from the drop-down menu, select Shut down.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
start the other VI workload domains first. Start up NSX Manager and NSX Edge nodes as part of
the startup of the last workload domain.
Prerequisites
n Verify that external services such as Active Directory, DNS, NTP, SMTP, and FTP or SFTP are
available.
n If a vSphere Storage APIs for Data Protection (VADP) based backup solution is deployed on
the default management cluster, verify that the solution is properly started and operational
according to the vendor guidance.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
You start VMware Aria Operations for Logs as early as possible to collect log data that helps
troubleshooting potential issues. You also start Site Recovery Manager and vSphere Replication
as early as possible to protect the management virtual machines if a disaster event occurs.
4 VxRail Manager *
5 SDDC Manager *
Start the vSphere and vSAN Components for the Management Domain
You start the management ESXi hosts by using an out-of-band management interface, such as iLO
or iDRAC, to connect to the hosts and power them on. Then, restarting the vSAN cluster
automatically starts vSphere Cluster Services, vCenter Server, and vSAN.
Procedure
a Log in to the first ESXi host in the management domain by using the out-of-band
management interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the management domain.
vCenter Server is started automatically. Wait until vCenter Server is running and the vSphere
Client is available again.
a Right-click the vSAN cluster and select vSAN > Restart cluster.
The vSAN Services page on the Configure tab changes to display information about the
restart process.
5 After the cluster has restarted, check the vSAN health service and resynchronization status,
and resolve any outstanding issues.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
c In the left pane, navigate to vSAN > Skyline health and verify that the cluster health score
is 100%.
6 If you have added the root user of the ESXi hosts to the Exception Users list for lockdown
mode during shutdown, remove the user from the list on each host.
a Select the host in the inventory and click the Configure tab.
d On the Exception Users page, from the vertical ellipsis menu in front of the root user,
select Remove User and click OK.
Note Start any virtual machines that are running infrastructure services like Active Directory,
NTP, DNS and DHCP servers in the management domain before you start vCenter Server.
Procedure
3 Right-click the management domain vCenter Server, and, from the drop-down menu, select
Power > Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
5 In the Hosts and clusters inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
b In the left pane, navigate to vSAN > Skyline health and verify the status of each vSAN
health check category.
c In the left pane, navigate to vSAN > Resyncing objects and verify that all synchronization
tasks are complete.
a Select the vSAN cluster under the management domain data center and click the
Configure tab.
b In the left pane, select Services > vSphere Availability and click the Edit button.
c In the Edit Cluster Settings dialog box, enable vSphere HA and click OK.
8 Set the vSphere DRS automation level of the management cluster to automatic.
a Select the default management cluster and click the Configure tab.
b In the left pane, select Services > vSphere DRS and click Edit.
c In the Edit cluster settings dialog box, click the Automation tab, and, from the drop-down
menu, in the Automation level section, select Fully automated.
d Click OK.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter
Server and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere Client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and Clusters inventory, select the vCenter Server instance for the VI workload
domain and click the Configure tab.
8 Click Save
Procedure
2 In the VMs and templates inventory, expand the workload domain vCenter Server tree and
expand the workload domain data center.
3 Locate the VxRail Manager virtual machine, right-click it, and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the SDDC Manager virtual machine and click Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the Skyline Health Diagnostics virtual machine and select Power > Power on.
4 After the Skyline Health Diagnostics virtual machine is powered on, verify its operational
state.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager cluster becomes
fully operational again and its user interface is accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://
<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Right-click the VMware Aria Suite Lifecycle virtual machine and select Power > Power on.
Procedure
2 Power on the Workspace ONE Access cluster and verify its status.
d In the VMware Identity Manager section, click the horizontal ellipsis icon and select
Power on.
3 Configure the domain and domain search parameters on the Workspace ONE Access
appliances.
a Log in to the first appliance of the Workspace ONE Access cluster by using a Secure
Shell (SSH) client as sshuser.
vi /etc/resolv.conf
d Add the following entries to the end of the file and save the changes. An example follows step e.
domain <domain_name>
search <space_separated_list_of_domains_to_search>
e Repeat this step to configure the domain and domain search parameters on the remaining
Workspace ONE Access appliances.
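For example, with a hypothetical domain sfo.rainpole.io, the entries appended to /etc/resolv.conf might look like this:
domain sfo.rainpole.io
search sfo.rainpole.io rainpole.io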
4 In the VMware Aria Suite Lifecycle user interface, check the health of the Workspace ONE
Access cluster.
c In the VMware Identity Manager section, click the horizontal ellipsis icon and select
Trigger cluster health.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
2 Start the VI workload domain that runs the shared NSX Edge nodes.
Start the vCenter Server Instance for a VxRail Virtual Infrastructure Workload
Domain
Use the vSphere Client to power on the vCenter Server appliance for the VxRail VI workload
domain.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
4 Right-click the virtual machine of the VxRail VI workload domain vCenter Server and select
Power > Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
What to do next
Start ESXi hosts, vSAN and VxRail Manager in a Virtual Infrastructure Workload
Domain
You start the ESXi hosts by using an out-of-band management interface, such as iLO or iDRAC, to
connect to the hosts and power them on. Powering on the ESXi hosts starts VxRail Manager,
which starts vSAN and the vSphere Cluster Services (vCLS) virtual machines.
Procedure
a Log in to the first ESXi host in the VI workload domain by using the out-of-band
management interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the VI workload domain.
3 Log in to the VI workload domain vCenter Server and wait until the VxRail Manager startup
for the cluster is finished.
Use the Recent Tasks pane in the cluster to monitor startup progress.
Once startup is complete, the VxRail Manager and vSphere Cluster Services (vCLS) virtual
machines in the cluster should be running.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager cluster becomes
fully operational again and its user interface is accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://
<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.
You start the management components for the management domain first. Then, you start the
management components for the VI workload domains and the customer workloads.
If the NSX Manager cluster and NSX Edge cluster are shared with other VI workload domains,
follow this general order:
2 Start the VI workload domain that runs the shared NSX Edge nodes.
Start the vSphere and vSAN Components for the Management Domain
You start the management ESXi hosts by using an out-of-band management interface, such as iLO
or iDRAC, to connect to the hosts and power them on. Then, restarting the vSAN cluster
automatically starts vSphere Cluster Services, vCenter Server, and vSAN.
Procedure
a Log in to the first ESXi host in the management domain by using the out-of-band
management interface.
2 Repeat the previous step to start all the remaining ESXi hosts in the management domain.
vCenter Server is started automatically. Wait until vCenter Server is running and the vSphere
Client is available again.
a Right-click the vSAN cluster and select vSAN > Restart cluster.
The vSAN Services page on the Configure tab changes to display information about the
restart process.
5 After the cluster has restarted, check the vSAN health service and resynchronization status,
and resolve any outstanding issues.
b In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are
complete.
c In the left pane, navigate to vSAN > Skyline health and verify that the cluster health score
is 100%.
6 If you have added the root user of the ESXi hosts to the Exception Users list for lockdown
mode during shutdown, remove the user from the list on each host.
a Select the host in the inventory and click the Configure tab.
d On the Exception Users page, from the vertical ellipsis menu in front of the root user,
select Remove User and click OK.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
a Locate the vCenter Server virtual machine for the VI workload domain.
b Right-click the virtual machine and select Power > Power on.
The startup of the virtual machine and the vSphere services takes some time to complete.
Procedure
2 In the Hosts and clusters inventory, expand the tree of the VI workload domain vCenter
Server and expand the data center for the VI workload domain.
4 Copy the cluster domain ID domain-c(cluster_domain_id) from the URL of the browser.
When you navigate to a cluster in the vSphere Client, the URL is similar to this one:
https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/
urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary
5 In the Hosts and Clusters inventory, select the vCenter Server instance for the VI workload
domain and click the Configure tab.
8 Click Save
Procedure
2 In the VMs and templates inventory, expand the workload domain vCenter Server tree and
expand the workload domain data center.
3 Locate the VxRail Manager virtual machine, right-click it, and select Power > Power on.
Procedure
2 In the VMs and templates inventory, expand the management domain vCenter Server tree
and expand the management domain data center.
3 Power on the NSX Manager nodes for the management domain or the VI workload domain.
a Right-click the primary NSX Manager node and select Power > Power on.
This operation takes several minutes to complete before the NSX Manager cluster becomes
fully operational again and its user interface is accessible.
4 Log in to NSX Manager for the management domain or VI workload domain at https://
<nsxt_manager_cluster_fqdn> as admin.
c On the Appliances page, verify that the NSX Manager cluster has a Stable status and all
NSX Manager nodes are available.
Procedure
2 In the VMs and templates inventory, expand the tree of workload domain vCenter Server and
expand data center for the workload domain.
3 Right-click an NSX Edge virtual machine from the edge cluster and select Power > Power on.