
Foundation 5.3

Field Installation Guide


January 23, 2023
Contents

Field Installation Overview.......................................................................................4

Foundation Considerations.......................................................................................5
Foundation Use Case Matrix.................................................................................................................................5
CVM vCPU and vRAM Allocation....................................................................................................................... 8
Hyper-V Installation Requirements.....................................................................................................................8
Network Requirements............................................................................................................................................11

Prepare Factory-Imaged Nodes for Imaging...................................................13


Node Discovery and Foundation Launch.......................................................................................................14
Discovering Nodes in the Same Broadcast Domain...................................................................... 15
Discovering Nodes in a VLAN-Segmented Network..................................................................... 15
Launching Foundation............................................................................................................................... 16
CVM Foundation Upgrade.................................................................................................................................... 17
Upgrading the CVM Foundation by Using the GUI........................................................................17
Upgrading the CVM Foundation by Using the Foundation Java Applet............................... 18

Prepare Bare-Metal Nodes for Imaging............................................................. 19


Prepare Bare-Metal Nodes for Imaging.......................................................................................................... 19
Considerations for Bare-Metal Imaging.......................................................................................................... 19
Preparing the Workstation..................................................................................................................................20
Installing the Foundation VM............................................................................................................................. 22
Uploading Installation Files to the Foundation VM...................................................................................26
Setting Up the Network....................................................................................................................................... 27
Foundation VM Upgrade......................................................................................................................................29
Upgrading the Foundation VM by Using the GUI......................................................................... 29
Upgrading the Foundation VM by Using the CLI..........................................................................30

Foundation App for Imaging................................................................................. 31


Installing Foundation App on macOS............................................................................................................. 31
Installing Foundation App on Windows......................................................................................................... 31
Uninstalling Foundation App on macOS....................................................................................................... 32
Uninstalling Foundation App on Windows................................................................................................... 32
Upgrading Foundation App................................................................................................................................ 33

Node Configuration and Foundation Launch.................................................34


Node Configuration and Foundation Launch..............................................................................................34
Configuring the Foundation GUI Automatically......................................................................................... 34
Configuring Foundation VM by Using the Foundation GUI................................................................... 37

Check Foundation Version......................................................................................41

Post-Installation Steps............................................................................................. 42
Configuring a New Cluster in Prism................................................................................................................42
Deploying Drivers and Software on Hyper-V for HPE DX......................................................................43

Hypervisor ISO Images............................................................................................47


Verify Hypervisor Support...................................................................................................................................48
Updating an iso_whitelist.json File on Foundation VM........................................................................... 49

Troubleshooting..........................................................................................................50
Setting IPMI Static IP Address.......................................................................................................................... 50
Fixing IPMI Configuration Issues........................................................................................................................51
Fixing Imaging Issues............................................................................................................................................ 53

Copyright.......................................................................................................................54

FIELD INSTALLATION OVERVIEW
For a node to join a Nutanix cluster, it must have a hypervisor and AOS combination that
Nutanix supports. AOS is the operating system of the Nutanix Controller VM, which is a VM
that must be running in the hypervisor to provide Nutanix-specific functionality. Find the
complete list of supported hypervisor/AOS combinations at https://portal.nutanix.com/page/
documents/compatibility-interoperability-matrix.
Foundation is the official deployment software of Nutanix. Foundation allows you to configure
a pre-imaged node, or image a node with a hypervisor and an AOS of your choice. Foundation
also allows you to form a cluster out of nodes whose hypervisor and AOS versions are
the same, with or without re-imaging. Foundation is available for download at https://
portal.nutanix.com/#/page/Foundation.
If you already have a running cluster and want to add nodes to it, you must use the Expand
Cluster option in Prism, instead of using Foundation. Expand Cluster allows you to directly re-
image a node whose hypervisor/AOS version does not match the cluster's version, or a node
that is only running DiscoveryOS. More details on DiscoveryOS are provided below.
Nutanix and its OEM partners install some software on a node at the factory, before shipping
it to the customer. For shipments inside the USA, this software is a hypervisor and an AOS.
For Nutanix factory nodes, the hypervisor is AHV. For OEM factory nodes, the vendor decides
which hypervisor to ship to the customer; however, OEM factories always install AOS,
regardless of the hypervisor.
For shipments outside the USA, Nutanix installs a lightweight software layer called DiscoveryOS,
which allows the node to be discovered in Foundation or in the Expand Cluster option of Prism.
Since a node with DiscoveryOS is not pre-imaged with a hypervisor and an AOS, it must go
through imaging first before joining a cluster. Both Foundation and Expand Cluster allow you to
directly image it with the correct hypervisor and AOS.
Vendors who do not have an OEM agreement with Nutanix ship a node without any software
(not even DiscoveryOS) installed on it. Foundation supports bare-metal imaging of such nodes.
In contrast, Expand Cluster does not support direct bare-metal imaging. Therefore, if you want
to add a software-less node to an existing cluster, you must first image it using Foundation,
then use Expand Cluster.
This document only explains procedures that apply to NX and OEM nodes. For non-OEM nodes,
you must perform the bare-metal imaging procedures specifically adapted for those nodes.
For those procedures, see the vendor-specific field installation guides, available on the Nutanix
Support portal at https://portal.nutanix.com/page/documents/list?type=compatibilityList.

Note: Mixed-vendor clusters are not supported. For more information, see the Product Mixing
Restrictions in the NX Series Hardware Administration Guide.

• To re-image factory-prepared nodes, or create a cluster from these nodes, or both, see
Prepare Factory-Imaged Nodes for Imaging on page 13.
• To image bare-metal nodes and optionally configure them into a cluster, see Prepare Bare-
Metal Nodes for Imaging on page 19.



FOUNDATION CONSIDERATIONS
This section describes the guidelines, compatibility information, limitations, and capabilities of
Foundation.

Foundation Use Case Matrix


The following matrix lists use cases and their support across the Foundation distributions
available for download:

Table 1: Foundation Use Case Matrix

Function
• CVM Foundation: Factory-imaged nodes.
• Portable Foundation (Windows, macOS): Factory-imaged nodes and bare-metal nodes.
• Standalone Foundation VM: Factory-imaged nodes and bare-metal nodes.

Hardware
• CVM Foundation: Any.
• Portable Foundation (Windows, macOS): Any, if you image discovered nodes. If you image
nodes without discovery, hardware support is limited to Nutanix (only G4 and above), Dell,
and HPE. Portable Foundation does not support Lenovo servers.
• Standalone Foundation VM: Any.

Note: On all Nutanix G8 platforms, Hyper-V 2019 and Hyper-V 2022 are supported only with
Hybrid (SATA or SAS SSDs + HDDs) and AFA (SATA or SAS SSD) configurations. Hyper-V 2019
and Hyper-V 2022 are not supported on any Nutanix G8 server models with SATA or SAS +
NVMe SSDs or only-NVMe SSD configurations. For more details, see KB 000012360.

If IPv6 is disabled
• CVM Foundation: Cannot image nodes.
• Portable Foundation (Windows, macOS): IPMI IPv4 required on the nodes.
• Standalone Foundation VM: IPMI IPv4 required on the nodes.

Can configure the VLAN of Foundation
• CVM Foundation: No. Manually configure in the vSwitch of the host.
• Portable Foundation (Windows, macOS): No. Manually configure in Windows or macOS.
• Standalone Foundation VM: Yes.

Can configure the VLAN of CVM/hosts
• All three: Yes.

LACP support
• All three: Yes.

Note: ESX supports both static and dynamic LAGs, but this installer cannot configure the
ESX vSwitch to be compatible with either type of LAG; only vCenter can configure it. So you
may not select Static or Dynamic here. If your production switch will have LAGs, first run this
installer over a switch with no LAG or a fallback-enabled dynamic LAG. Then, after installation,
move the nodes to the production switch and configure the vSwitches manually in vCenter.

Multi-homing support
• All three: Yes.

RDMA support
• All three: Yes.

How to use?
• CVM Foundation: Access using http://CVM_IP:8000/.
• Portable Foundation (Windows, macOS): Launch the executable for Windows 10+ or macOS 10.13.1+.
• Standalone Foundation VM: Deploy as a VM on VirtualBox, Fusion, Workstation, AHV, ESX, and so on.


CVM vCPU and vRAM Allocation
For the minimum CVM configurations for each platform category in the field, see Controller VM
(CVM) Field Specifications in the Acropolis Advanced Administration Guide.

Hyper-V Installation Requirements


Ensure that the following requirements are met before installing Hyper-V:

Windows Active Directory Domain Controller


Requirements:

• The primary domain controller must run Windows Server 2008 R2 or later.

Note: If you use a Volume Shadow Copy Service (VSS) based backup tool (for example,
Veeam), the functional level of Active Directory must be 2008 or higher.

• Install and run Active Directory Web Services (ADWS). By default, connections are made
over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on as a domain
administrator to a Windows host in the same domain with the RSAT-AD-PowerShell feature
installed, and execute the following PowerShell command:
> (Get-ADDomainController).Name

If the command prints the name of the primary domain controller, then ADWS is installed and
the port is open.

• The domain controller must run a DNS server.

Note: If any of the preceding requirements are not met, you must manually create an Active
Directory computer object for the Nutanix storage in the Active Directory, and add a DNS
entry for the name.

• Ensure that the Active Directory domain is configured correctly for consistent time
synchronization.
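
As a quick spot-check of the preceding DNS and time-synchronization requirements, you can
run the following from a domain-joined Windows host; the domain name and domain controller
address are hypothetical placeholders:

> Resolve-DnsName mydomain.local -Server dc_ip_address
> w32tm /stripchart /computer:dc_ip_address /samples:3

The first command confirms that the domain controller answers DNS queries for the domain;
the second samples the clock offset of the local host against the domain controller.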
Accounts and Privileges:

• An Active Directory account with permission to create new Active Directory computer
objects for either a storage container or Organizational Unit (OU) where Nutanix nodes are
placed. The credentials of this account are not stored anywhere.
• An account that has sufficient privileges to join a Windows host to a domain. The credentials
of this account are not stored anywhere. These credentials are only used to join the hosts to
the domain.
The following additional information is also required:

• The IP address of the primary domain controller.

Note: The primary domain controller IP address is set as the primary DNS server on all the
Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to synchronize
time between Controller VMs, hosts and Active Directory.

• The fully qualified domain name to which the Nutanix hosts and the storage cluster are to be
joined.



SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

• The SCVMM version must be 2012 R2 or later. The tool must be installed on Windows
Server 2012 or later.
• The SCVMM server must allow PowerShell remote execution.
To test this, log on to a Windows host other than the SCVMM host (for example, the
domain controller) by using the SCVMM administrator account, and run the following
PowerShell command. If the command returns the name of the SCVMM server, then
PowerShell remote execution on the SCVMM server is permitted.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN\username

Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain
name.

Note: If the SCVMM server does not allow PowerShell remote execution, you can perform the
SCVMM setup manually by using the SCVMM user interface.

• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify,
run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username

Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.
• The SMB client configuration in the SCVMM server must have RequireSecuritySignature set
to False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature}

Replace scvmm_server_name with the SCVMM host name.


To change it, log on to the SCVMM host as a domain administrator and run the following
command in PowerShell.
Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

If you are changing the value from True to False, it is important to confirm that the policies
on the SCVMM host have the correct value. Changing the value of RequireSecuritySignature
might not take effect if a policy with the opposite value exists. On the SCVMM host, run
rsop.msc to review the resultant set of policy details, and verify the value by going to
Servername > Computer Configuration > Windows Settings > Security Settings > Local
Policies > Security Options: Policy Microsoft network client: Digitally sign communications
(always).

The value displayed in RSOP must be Disabled or Not Defined for the change to persist. If
RSOP shows the value as Enabled, update the group policies that are configured in the
domain to apply to the SCVMM server to Disabled; otherwise, RequireSecuritySignature
reverts to True. After setting the policy in Active Directory and propagating it to the
domain controllers, refresh the SCVMM server policy by running the command gpupdate /
force. Confirm in RSOP that the value is Disabled.
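
For example, reusing the placeholders from the commands above, you can refresh the policy
and re-verify the setting remotely:

> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {gpupdate /force} -Credential MYDOMAIN\username
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature} -Credential MYDOMAIN\username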

Note: If security signing is mandatory, then you must enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and
the Controller VMs use the Active Directory server as the NTP server. So, ensure that the
Active Directory domain is configured correctly for consistent time synchronization.

Accounts and Privileges:

• When adding a host or a cluster to the SCVMM, the run-as account that is used to manage
the host or the cluster must be different from the service account that was used to install
SCVMM.
• The run-as account must be a domain account and must have local administrator privileges
on the Nutanix hosts. This can be a domain administrator account. When the Nutanix hosts
are joined to the domain, the domain administrator account automatically gains administrator
privileges on the host. If the domain account used as the run-as account in SCVMM is not a
domain administrator account, you must manually add the run-as account to the list of local
administrators on each host by running sconfig (see the sketch after this list).

• SCVMM domain account with administrator privileges on SCVMM and PowerShell remote
execution privileges.
• If you want to install SCVMM server, a service account with local administrator privileges on
the SCVMM server.
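
A minimal sketch of adding the run-as account to the local Administrators group, run from an
elevated prompt on the host itself; the account name is a hypothetical placeholder:

> net localgroup Administrators MYDOMAIN\scvmm_runas /add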

IP Addresses

• One IP address for each Nutanix host.


• One IP address for each Nutanix Controller VM.
• One IP address for each Nutanix host IPMI interface.
• One IP address for the Nutanix storage cluster.
• One IP address for the Hyper-V failover cluster.

Note: For N nodes, (3*N + 2) IP addresses are required: three per node, plus one for the
Nutanix storage cluster and one for the Hyper-V failover cluster. For example, a four-node
cluster requires 14 IP addresses. All IP addresses must be in the same subnet.

DNS Requirements

• Each Nutanix host must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server during domain joining.
• The Nutanix storage cluster must be assigned a name of 15 characters or less. Add this
name to the DNS server when the storage cluster joins the domain.
• The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
• After the Hyper-V configuration, all names must resolve to an IP address from the Nutanix
hosts, the SCVMM server (if applicable), and any other host that needs access to the Nutanix
storage, for example, a host running Hyper-V Manager.
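
For example, assuming a hypothetical storage cluster name and domain, you can spot-check
name resolution from any host that needs access:

> nslookup ntnx-cluster.mydomain.local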



Storage Access Requirements

• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by
FQDN, not by the external IP address. If you use the IP address, all I/O is directed to a
single node in the cluster, which compromises performance and scalability (see the example
after the following note).

Note: For external non-Nutanix hosts that require access to Nutanix SMB shares, see Nutanix
SMB Shares Connection Requirements from Outside the Cluster.
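
For example, with a hypothetical cluster named ntnx-cluster in the domain mydomain.local, a
virtual disk path should take the first form below, not the second:

\\ntnx-cluster.mydomain.local\container1\vm1\disk1.vhdx
\\10.1.1.50\container1\vm1\disk1.vhdx   <- avoid; directs all I/O to a single node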

Host Maintenance Requirements

• When applying Windows updates to the Nutanix hosts, restart the hosts one at a time,
ensuring that the Nutanix services of the Controller VM on the restarted host come up
fully before proceeding with the update of the next host. You can accomplish this by using
Cluster-Aware Updating with a Nutanix-provided script, which can be plugged into the
Cluster-Aware Updating Manager as a pre-update script. This pre-update script ensures
that the Nutanix services are restarted on one host at a time, maintaining availability of
storage throughout the update procedure. For more information about cluster-aware
updating, see Installing Windows Updates with Cluster-Aware Updating.

Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.

• If you place a host that is managed by SCVMM in maintenance mode, the Controller VM
running on the host is placed in the saved state by default, which might create issues. To
properly place a host in maintenance mode, see SCVMM Operation in the Hyper-V
Administration for Acropolis Guide.
• If you are using Microsoft Hyper-V on HPE DX models, ensure that the software and drivers
on Hyper-V are compatible with the firmware version installed on the nodes. For more
information, see Deploying Drivers and Software on Hyper-V for HPE DX on page 43.

Network Requirements
When configuring a Nutanix block, you must allocate a set of IP addresses to the cluster.
Ensure that the chosen IP addresses do not overlap with any hosts or services in the
environment. You must also open the software ports that are used to manage cluster
components and to enable communication between components such as the Controller VM,
Web console, Prism Central, hypervisor, and the Nutanix hardware. Nutanix recommends that
you specify information such as a DNS server and NTP server even if the cluster is not
connected to the Internet or runs in a non-production environment.

Existing Customer Network


You will need the following information during the cluster configuration:

• Default gateway
• Network mask
• DNS server
• NTP server
Check whether a proxy server is in place in the network. If one is, you need the IP address
and port number of that server when enabling Nutanix Support on the cluster.



New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following
components:

• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than
the Controller VMs and hypervisor hosts can be on this network.

Software Ports Required for Management and Communication


For more information about the ports that are required for Foundation, see Foundation Ports.



PREPARE FACTORY-IMAGED NODES
FOR IMAGING
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM
on discovered nodes and how to configure the nodes into a cluster. "Discovered nodes" are
factory-prepared nodes that are not currently part of any cluster and are reachable within
the same subnet. This procedure runs the Foundation tool through the Nutanix Controller VM
(Controller VM–based Foundation).

Before you begin

• Make sure that the nodes that you want to image are factory-prepared nodes that have not
been configured in any way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see
Mounting the Block in the Getting Started Guide. For installation instructions specific to your
model type, see Rack Mounting in the Nutanix Rack Mounting Guide.
• Your workstation must be connected to the network on the same subnet as the nodes you
want to image. Foundation does not require an IPMI connection or any special network
port configuration to image discovered nodes. See Network Requirements for general
information about the network topology and port access required for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name,
virtual IP address), and node (Controller VM, hypervisor, and IPMI IP address ranges)
parameter values needed for installation.

Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.

Note: Nutanix uses an internal virtual switch to manage network communications between
the Controller VM and the hypervisor host. This switch is associated with a private network
on the default VLAN and uses the 192.168.5.0/24 address space. For the hypervisor, IPMI
interface, and other devices on the network (including the guest VMs that you create on
the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on the default
VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a
different VLAN.

• Download the following files from Nutanix Support portal:

• AOS installer named nutanix_installer_package-version#.tar.gz from the AOS (NOS)
download page.
• Hypervisor ISO if you want to install Hyper-V or ESXi. You must provide a supported
Hyper-V or ESXi ISO (see Hypervisor ISO Images on page 47); Hyper-V and
ESXi ISOs are not available on the support portal.
It is not necessary to download AHV because the AOS bundle includes an AHV
installation bundle. However, you can download an AHV installation bundle if you want to
install a non-default version.
• If you are using Microsoft Hyper-V on HPE DX models, ensure that the software and drivers
on Hyper-V are compatible with the firmware version installed on the nodes. For more
information, see Deploying Drivers and Software on Hyper-V for HPE DX on page 43.
This procedure to deploy software and drivers is to be carried out after cluster creation and
before moving the nodes to production.



• Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6
multicast is supported.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before
imaging the nodes. If the nodes contain only SEDs, you can enable encryption after you
image the nodes. If the nodes contain both regular hard disk drives (HDDs) and SEDs, do not
enable encryption on the SEDs at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the Data-at-Rest Encryption
chapter in the AOS Security Guide.

Note: It is important that you unlock the SEDs before imaging in order to prevent any data
loss. To unlock the SEDs, contact Nutanix Support or refer to the KB article 000003750 in the
Nutanix Support portal.

About this task

Note: This method can image discovered nodes or create a single cluster from discovered nodes
or both. This method is limited to factory prepared nodes running AOS 4.5 or later. If you want to
image factory prepared nodes running an earlier AOS (NOS) version or image bare-metal nodes,
see Prepare Bare-Metal Nodes for Imaging on page 19.

To image the nodes and create a cluster, do the following:

Procedure

1. Run discovery and launch Foundation (see Node Discovery and Foundation Launch on
page 14).

2. Update Foundation to the latest version (see Upgrading the CVM Foundation by Using the
Foundation Java Applet on page 18).

Note:

• This step is optional for platforms other than HPE DX.


• For HPE DX platform, the minimum supported version of Foundation is 4.4.1. If the
Foundation version installed on the discovered nodes is older than 4.4.1, upgrade
Foundation to 4.4.1 or a newer version. Perform the upgrade process with a Linux
workstation. Do not use a Windows workstation to perform the upgrade as it is
not supported.

3. Run CVM Foundation (see Configuring Foundation VM by Using the Foundation GUI on
page 37).

4. After the cluster is created successfully, begin configuring the cluster (see Configuring a
New Cluster in Prism on page 42).

Node Discovery and Foundation Launch


The process of discovering nodes and launching Foundation requires that the Nutanix nodes
and the workstation that you use for imaging be in the same broadcast domain or in a VLAN-
segmented network.



Discovering Nodes in the Same Broadcast Domain

About this task


To discover nodes in a network that does not use VLANs, do the following:

Procedure

1. Access the Nutanix Support portal (https://my.nutanix.com).

2. Browse to Downloads > Foundation, and then click FoundationApplet-offline.zip.

3. Extract the contents of the downloaded ZIP file onto the workstation that you want to use
for imaging, and then double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.

Note: A security warning message might appear indicating that the application is from an
unknown source. To run the application, click Accept and Run.

Figure 1: Foundation Launcher Window
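
Alternatively, a sketch of the same steps from a Linux workstation's command line, assuming
the Java Web Start launcher (javaws) is available (for example, via Java 8 or OpenWebStart):

$ unzip FoundationApplet-offline.zip -d foundation_applet
$ cd foundation_applet
$ javaws nutanix_foundation_applet.jnlp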

Discovering Nodes in a VLAN-Segmented Network


Nutanix nodes running AHV version 20160215 or later include a network configuration tool.
You can use the tool to assign a VLAN tag to the public interface on the Controller VM and
one or more physical interfaces on the host. You can also use the tool to assign an IP address
to the Controller VM and hypervisor. After the network configuration is complete, start the
Foundation service running on the Controller VM of that host to discover and image other
Nutanix nodes. Foundation uses the VLAN sniffer provided in CVM to detect free Nutanix
nodes and nodes in other VLANs. The VLAN sniffer uses the Neighbor Discovery protocol for IP
version 6. Therefore, the VLAN sniffer requires that the physical switch to which the nodes are
connected, supports IPv6 broadcast and multicast. During the imaging process, Controller VM–
based Foundation also assigns the specified VLAN tag (assumed to be that of the production
VLAN) to the corresponding interfaces on the selected nodes, eliminating the need to perform
additional VLAN assignment tasks for those nodes.



Before you begin
Connect the Nutanix nodes to a switch.

About this task

Note: Use the network configuration tool only on factory-prepared nodes that are not part of a
cluster. Using the tool on a node that is part of a cluster makes the node inaccessible to the other
nodes in the cluster. If there are issues, the only way to resolve an issue is to reconfigure the node
to the previous IP addresses by using the network configuration tool again.

To configure the network for a node, do the following:

Procedure

1. Connect a console to one of the nodes and log on to the Acropolis host by using the root
credentials.

2. Change your working directory to /root/nutanix-network-crashcart/, and then start the


network configuration utility.
root@ahv# ./network_configuration

3. In the network configuration utility, do the following:

a. Review the network card details to ascertain interface properties and identify connected
interfaces.
b. Use the arrow keys to go to the interface that you want to configure, and then use the
Spacebar key to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the
following parameters:

• VLAN Tag. VLAN tag to use for the selected interfaces.


• Netmask. Network mask of the subnet to which you want to assign the interfaces.
• Gateway. Default gateway for the subnet.
• Controller VM IP. IP address for the Controller VM.
• Hypervisor IP. IP address for the hypervisor.
d. Use the arrow keys to select Done, and then press Enter.
The network configuration utility configures the interfaces.

Launching Foundation
Launching Foundation depends on whether you used the Foundation Applet to discover nodes
in the same broadcast domain or the crash cart user interface to discover nodes in a VLAN-
segmented network.

About this task


To launch the Foundation GUI, do one of the following:



Procedure

• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the
following:

a. Select the node on which you want to run Foundation.


The selected node is imaged first and then the other nodes. Select only nodes with
a status field value of Free, which indicates that the node is not currently part of a cluster.
A value of Unavailable indicates that the node is part of an existing cluster or otherwise
unavailable. To rerun the discovery process, click Retry discovery.

Note: A warning message might appear stating that the selected node does not have the
highest Foundation version found among the discovered nodes. If you select a node
with an earlier Foundation version (one that does not recognize one or more of the node
models), installation may fail when Foundation attempts to image a node of an unknown
model. Therefore, select the node with the highest Foundation version among the nodes to
be imaged. If you do not intend to select any of the nodes that have the higher Foundation
version, ignore the warning and proceed.

b. (Optional but recommended) Upgrade Foundation on the selected node to the latest
version. See Upgrading the CVM Foundation by Using the Foundation Java Applet on
page 18.
c. With the node having the latest Foundation version selected, click the Launch Foundation
button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory
prepared nodes that are not part of a cluster) and then displays information about
the discovered blocks and nodes in the Discovered Nodes screen. (It does not display
information about nodes that are powered off or in a different subnet.) The discovery
process normally takes just a few seconds.

Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.

• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in
a browser on your workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when
using the network configuration tool.
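
As a quick sanity check that the Foundation service is reachable before opening the browser,
you can request the page from a terminal; the IP address is a hypothetical placeholder:

$ curl -s http://10.0.0.50:8000/ | head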

CVM Foundation Upgrade


You can upgrade the CVM Foundation from version 3.12 or later to the latest version by using
the Foundation GUI, Foundation Java applet, or Prism web console.
For information about upgrading Foundation by using the Prism web console, see Upgrading
Foundation chapter in the Acropolis Upgrade Guide.

Note: Ensure that you use the minimum version of Foundation required by your hardware
platform. To determine whether Foundation needs an upgrade for a hardware platform, see
the respective System Specifications guide. If the nodes you want to include in the cluster are
of different models, determine which of their minimum Foundation versions is the most recent
version, and then upgrade Foundation on all the nodes to that version.

Upgrading the CVM Foundation by Using the GUI


You can update the CVM Foundation by using the Foundation GUI.



Procedure
To update the CVM Foundation by using the Foundation GUI, click the version link in the
Foundation GUI.

Upgrading the CVM Foundation by Using the Foundation Java Applet


The Foundation Java applet includes an option to upgrade or downgrade the CVM Foundation
on a discovered node. Nutanix recommends updating the CVM Foundation, but it is optional.
Ensure that the node is not already configured. Upgrade the CVM Foundation on any one node and
use this node to upgrade the CVM Foundation on the other nodes for imaging. If the node is
configured, do not use the Java applet. Instead, update the CVM Foundation by using the Prism
web console (see the Upgrading Foundation chapter in the Acropolis Upgrade Guide).

Before you begin


1. Download the Foundation .tar file from the Nutanix Support portal to the workstation on
which you plan to run the Foundation Java applet.
2. Download and start the Foundation Java applet.

About this task


To upgrade Foundation on a discovered node by using the Foundation Java applet, do the
following:

Procedure

1. In the Foundation Java applet, select the node on which you want to upgrade the CVM
Foundation.

2. Click Upgrade Foundation.

3. Browse to the folder where you downloaded the Foundation .tar file and double-click the
.tar file.
The upgrade process begins. After the upgrade completes, Genesis restarts on the node, and
that in turn restarts the Foundation service. After the Foundation service becomes available,
the upgrade process reports the status of the upgrade.



PREPARE BARE-METAL NODES FOR
IMAGING
Prepare Bare-Metal Nodes for Imaging
You can perform bare-metal (also referred to as standalone) imaging from a workstation with
access to the IPMI interfaces of the nodes. Imaging a cluster in the field requires installing
tools (such as Oracle VM VirtualBox or VMware Fusion) on the workstation and setting up the
environment to run these tools. This chapter describes how to install a selected hypervisor
and the Nutanix Controller VM on bare-metal nodes and configure the nodes into one or
more clusters. "Bare metal" nodes are not factory-prepared and cannot be detected through
discovery. However, you can also use this method to image factory-prepared nodes.

Before you begin

• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX
Series Hardware Administration Guide for your model type. For installing hardware from any
other manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Preparing the Workstation on page 20).
• Ensure that you have the appropriate global, node, and cluster parameter values needed for
installation. The use of a DHCP server is not supported for Controller VMs, so make sure to
assign static IP addresses to Controller VMs.

Note: If the Foundation VM is configured with an IP address that is different from other
clusters that require imaging in a network (for example Foundation VM is configured with a
public IP address while the cluster resides in a private network), repeat step 8 in Installing
the Foundation VM on page 22 to configure a new static IP address for the Foundation
VM.

• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before
imaging the nodes. If the nodes contain only SEDs, enable encryption after you image the
nodes. If the nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable
encryption on the SEDs at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the Data-at-Rest Encryption
chapter in the AOS Security Guide.

Note: It is important that you unlock the SEDs before imaging in order to prevent any data
loss. To unlock the SEDs, contact Nutanix Support or refer to the KB article 000003750 in the
Nutanix Support portal.

Note: After you prepare the bare-metal nodes for Foundation, configure the Foundation VM by
using the GUI. For details, see Node Configuration and Foundation Launch on page 34.

Considerations for Bare-Metal Imaging


Restrictions

• To avoid Foundation timeout errors, make sure that the CD-ROM appears first in the BIOS
boot order.
• If Spanning Tree Protocol (STP) is enabled on the ports that are connected to the Nutanix
host, Foundation might time out during the imaging process. Therefore, disable STP by
using PortFast or an equivalent feature on the ports that are connected to the Nutanix host
before starting Foundation (see the sketch after this list).
• Avoid connecting any device that presents virtual media, such as a CD-ROM (for example,
a device plugged into a USB port on a node). Such devices can conflict with the installation
when the Foundation tool tries to mount the virtual CD-ROM hosting the install ISO.
• During bare-metal imaging, you assign IP addresses to the hypervisor host, the Controller
VMs, and the IPMI interfaces. Do not assign IP addresses from a subnet that overlaps with
the 192.168.5.0/24 address space on the default VLAN. Nutanix uses an internal virtual
switch to manage network communications between the Controller VM and the hypervisor
host. This switch is associated with a private network on the default VLAN and uses the
192.168.5.0/24 address space. If you want to use an overlapping subnet, make sure that you
use a different VLAN.
• (For Cisco UCS Domain Mode only) Foundation creates service profiles and policies with
names of the form fdtnNodeSerial (where NodeSerial is the serial number of the server). If
a service profile exists with the name that Foundation is attempting to use, that service
profile is replaced and any associated server is disassociated. Therefore, if the UCS® Manager
instance manages servers that will not be imaged with Foundation, make sure that the
service profiles associated with those servers do not use the fdtnNodeSerial format.
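
The STP restriction above is typically addressed on the switch ports that face the nodes; a
minimal sketch for a Cisco IOS switch (interface name hypothetical; syntax varies by switch
vendor):

switch(config)# interface GigabitEthernet1/0/1
switch(config-if)# spanning-tree portfast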

Recommendations

• Nutanix recommends that imaging and configuration of bare-metal nodes be performed or


supervised by Nutanix sales engineers or partners. If a Nutanix sales engineer or a partner is
unavailable, contact Nutanix Support for assistance.
• Connect to a flat switch (no routing tables) instead of a managed switch (routing tables)
to protect the production environment against configuration errors. Foundation includes a
multi-homing feature that allows you to image nodes by using the production IP addresses
even when connected to a flat switch. For information about the network topology and port
access required for a cluster, see Network Requirements on page 11.

Limitations

• Foundation cannot configure a node's virtual switches to use LACP. Perform this
configuration manually after imaging.
• Foundation cannot configure network adapters to use jumbo frames during imaging.
Perform this configuration manually after imaging.

Preparing the Workstation


A workstation is needed to host the Foundation VM during imaging. You can perform these
steps either before going to the installation site (if you use a portable laptop) or at the site (if
an active internet connection is available). You can also run the Foundation VM from an existing
Nutanix cluster.

Before you begin


Get a workstation (laptop or desktop computer) that you can use for the installation. The
workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 80 GB of disk
space (preferably SSD), and a physical (wired) network adapter.
To prepare the workstation, do the following:



Procedure

1. Go to the Nutanix Support portal and download the following files to a temporary directory
on the workstation:

File: Foundation_VM_OVF-version#.tar
Location: On the Nutanix Support portal, go to Downloads > Foundation.
Description: The .tar file contains:

• Foundation_VM-version#.ovf. This file is the Foundation VM OVF configuration file for the
version# release. For example, Foundation_VM-3.1.ovf.
• Foundation_VM-version#-disk1.vmdk. This file is the Foundation VM VMDK file for the
version# release. For example, Foundation_VM-3.1-disk1.vmdk.

File: nutanix_installer_package-version#.tar.gz
Location: On the Nutanix Support portal, go to Downloads > AOS (NOS).
Description: File used for imaging the nodes with the desired AOS release.

Note:

• For post-installation performance validation, execute the Four Corners
Microbenchmark by using X-Ray. For more information, see
https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=X-Ray.
• To install a hypervisor other than AHV, you must provide the ISO image of
the hypervisor (see Hypervisor ISO Images on page 47). Make sure that the
hypervisor ISO image is available on the workstation.
• If you are using Microsoft Hyper-V on HPE DX models, ensure that the software
and drivers on Hyper-V are compatible with the firmware version installed on the
nodes. For more information, see Deploying Drivers and Software on Hyper-V for
HPE DX on page 43. This procedure to deploy software and drivers is to be
carried out after cluster creation and before moving the nodes to production.
• Verify the support for the hypervisor and the corresponding version of the
hypervisor. For details, see Verify Hypervisor Support on page 48.

2. Download the installer for Oracle VM VirtualBox (a free open source tool used to create
a virtualized environment on the workstation) and install it with the default options. For
installation and start-up instructions (https://www.virtualbox.org/wiki/Documentation), see
the Oracle VM VirtualBox User Manual.

Note: You can also use any other virtualization environment (VMware ESXi, AHV, and so on)
instead of Oracle VM VirtualBox.



3. Go to the location where you downloaded the Foundation .tar file and extract the contents.
$ tar -xf Foundation_VM_OVF-version#.tar

Note: If the tar utility is not available, use the appropriate utility for your environment.

4. Copy the extracted files to your VirtualBox VMs folder.
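
A sketch of steps 3 and 4 together, assuming the default VirtualBox VMs folder in your home
directory:

$ tar -xf Foundation_VM_OVF-version#.tar
$ mkdir -p ~/"VirtualBox VMs"
$ cp Foundation_VM-version#.ovf Foundation_VM-version#-disk1.vmdk ~/"VirtualBox VMs"/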

Installing the Foundation VM


Import the Foundation VM into Oracle VM VirtualBox.

About this task


To install the Foundation VM on the workstation, do the following:

Procedure

1. Start Oracle VM VirtualBox.

2. Click the File menu and select Import Appliance... from the pull-down list.

3. In the Import Virtual Appliance dialog box, browse to the location of the Foundation .ovf
file, and select the Foundation_VM-version#.ovf file.

4. Click Next.

5. Click Import.

6. In the left pane, select Foundation_VM-version#, and click Start.


The VM operating system boots and the Foundation VM console launches.

7. If the login screen appears, log on as the nutanix user with password nutanix/4u.



8. (Optional) If you want to enable the file drag-and-drop functionality between your
workstation and the Foundation VM, install Oracle Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions
CD Image... from the menu.

Note: To add an optical drive (if not previously added):

• Go to VM Settings > Storage.
• Click the Add new storage attachment option at the bottom of the window.
• Select Optical Drive.

A VBOXADDITIONS CD entry appears on the Foundation VM desktop.


b. When the Open Autorun Prompt dialog box appears, click OK, and then click Run.
c. Enter the root password nutanix/4u and click Authenticate.

d. After the installation completes, press the return key to close the VirtualBox Guest
Additions installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Restart the Foundation VM by selecting System > Shutdown... > Restart from the Linux
GUI.
g. After the Foundation VM restarts, select Devices > Drag 'n' Drop > Bidirectional from the
menu.



9. To determine if the Foundation VM was able to get an IP address from the DHCP server,
open a terminal session and run the ifconfig command.
If the Foundation VM has a valid IP address, skip to the next step.
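
For example, from a terminal session in the Foundation VM:

$ ifconfig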

Note: The Foundation VM must be on a public network to copy the selected ISO files to the
Foundation VM. This typically requires setting a static IP address, and then setting it again
when the workstation is moved to a different (typically private) network for the installation.

Note: To assign a static IP address for Foundation VM version >= 5.0.2:

• Go to System > Preferences > Network connections > ipv4 settings and provide
the following details:

• IP address
• Gateway
• Netmask
• Restart the VM.

To assign a static IP address for Foundation VM version < 5.0.2:

a. Double-click the set_foundation_ip_address icon on the Foundation VM desktop.

Figure 2: Foundation VM: Desktop


b. In the dialog box, click Run in Terminal.



Figure 3: Foundation VM: Terminal Window
c. In the Select Action dialog box, select Device configuration.

Note: Use the keyboard to perform selections in the terminal window. Mouse clicks do
not work.

Figure 4: Foundation VM: Action Box


d. In the Select A Device dialog box, select eth0.

Figure 5: Foundation VM: Device Configuration Box


e. In the Network Configuration dialog box:



1. Remove the asterisk in the Use DHCP field (which is set by default).
2. Enter appropriate addresses in the Static IP, Netmask, and Default gateway IP
fields.
3. Click OK.

Figure 6: Foundation VM: Network Configuration Box


f. Click Save.
g. Click Save & Quit.
The configuration is saved and the terminal window closes.

Uploading Installation Files to the Foundation VM


The file system on the Foundation VM includes hypervisor-specific directories. Copy the files
that you downloaded either from the Nutanix Support portal or obtained from a hypervisor
vendor into the respective directories.

About this task


To upload the installation and installation-related files to the Foundation VM, do the following:

Note: You can also upload the AOS and Hypervisors installation files using the Foundation UI.

• To upload the AOS installation files, go to the AOS page on the Foundation UI >
Manage AOS files > Add.
• To upload the hypervisor installation files, go to the Hypervisor page on the
Foundation UI > select the hypervisor type > Manage <Hypervisor> files > Add.

Procedure

1. Copy nutanix_installer_package-version#.tar.gz to the /home/nutanix/foundation/nos


directory.
To install hypervisors other than AHV, copy the ISO files to the corresponding directory on
the Foundation VM as follows:

• ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx

• Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv

• AHV ISO image (if you want an AHV bundle other than the one bundled with AOS):
/home/nutanix/foundation/isos/hypervisor/kvm
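
A sketch of the copy commands from a terminal in the Foundation VM, assuming the files were
downloaded to ~/Downloads and using a hypothetical ESXi ISO file name:

$ cp ~/Downloads/nutanix_installer_package-version#.tar.gz /home/nutanix/foundation/nos/
$ cp ~/Downloads/VMware-VMvisor-Installer-7.0U3.iso /home/nutanix/foundation/isos/hypervisor/esx/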



2. If you downloaded the diagnostics files for one or more hypervisors, copy them to the
appropriate directories on the Foundation VM. The directories for the diagnostic files are as
follows:

» Diagnostic file for AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/


kvm

» Diagnostic file for ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf):


/home/nutanix/foundation/isos/diags/esx

» Diagnostic file for Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/


diags/hyperv

Setting Up the Network


The nodes and workstation must have network access to each other through a switch at the
site. Set up the network onsite before imaging the nodes through the Foundation tool.

Before you begin

• Enable IPv6 on the network to which the nodes are connected and ensure that IPv6
multicast is supported.
• For Lenovo HX and HPE nodes, use a 10GbE switch to image via Foundation.
• For NX N-SKU models, connect two uplink cables, because there is no shared LOM port for
BMC traffic on NX N-SKU. Use the dedicated BMC port for BMC traffic.

About this task


To set up the network connections, do the following:

Procedure

1. Connect the IPMI port to a switch.

Note: If you use a shared IPMI port to reinstall the hypervisor, follow the instructions in KB
3834.

More information on specific models is as follows:

• Nutanix NX Series: Connect the dedicated IPMI port and any one of the data ports to the
switch. Ensure that you use the dedicated IPMI port instead of the shared IPMI port. In
addition to the dedicated IPMI port, we highly recommend that you use a 10G data port.
You can use a 1G port instead of a 10G data port at the cost of increased imaging time or
imaging failure. If you use SFP+ 10G NICs and a 1G RJ45 switch for imaging, connect the
10G data port to the switch by using one of our approved GBICs. If the BMC is configured
to use it, you can also use the shared IPMI/1G data port in place of the dedicated port.
However, the shared IPMI/1G data port is less reliable than the dedicated port. The IPMI
LAN interfaces of the nodes must be in failover mode (factory default setting).
Nutanix is compatible with any IEEE 802.3 compliant Ethernet switches. SFP module
compatibility is determined by the switch vendor. Cables shipped with Nutanix appliances
will work with most switch vendors. Certain switch vendors will not work with third party
SFP modules. Use the switch vendor's preferred SFP.
If you choose to use the shared IPMI port on G4 and later platforms, make sure that the
connected switch can auto-negotiate to 100 Mbps. This auto-negotiation capability is
required because the shared IPMI port can support 1 Gbps throughput only when the host

Foundation |  Prepare Bare-Metal Nodes for Imaging | 27


is online. If the switch cannot auto-negotiate to 100 Mbps when the host goes offline,
make sure to use the dedicated IPMI port instead of the shared port (the dedicated IPMI
port support 1 Gbps throughput always). Older platforms support only 10/100 Mbps
throughput.
The exact location of the port depends on the model type. To determine the port location,
see the respective vendor hardware documentation. The following figure illustrates the
location of the network ports in NX-3060 G8:

Figure 7: Port Locations (NX-3060 G8)


• Lenovo Converged HX Series: Connect both the system management (IMM) port and one
of the 10 GbE ports. The following figure illustrates the location of the network ports in
HX3500 and HX5500:

Figure 8: Port Locations (HX System)


• Dell XC series: Connect the iDRAC port and one of the data ports. Some Dell XC Series
systems, such as the Dell XC430-4, support imaging over a 1 GbE network connection,
whereas other systems, such as the Dell XC640-10, require a 10 GbE connection for
imaging. Nutanix recommends that you use a 10 GbE port regardless of the model of your
appliance. The following figure illustrates the location of the network ports:

Figure 9: Port Locations (XC System)


• IBM POWER Servers: Connect the dedicated IPMI port and a data port to the network to
which the Foundation VM is connected.
• HPE DX Series: Connect the iLO port and the data ports to your network switch. Ensure
that the connected data ports all operate at the same data speed and are not a
combination of different data speeds. Also, ensure that TCP port 8000 is open between
Foundation and the iLO port. If you use SFP+ or SFP28 data ports with a 1G RJ45 switch
during imaging, connect the data port to the switch by using an approved transceiver.
In the DX360 and DX380 models, the four data ports located to the right of the iLO port
are not supported.

2. Connect the NIC with the maximum speed on the nodes to the switch.

3. Connect the installation workstation to the same switch as the nodes.

What to do next
Upgrade Foundation on the Foundation VM, if necessary. After you prepare the bare-metal
nodes for imaging, configure the Foundation VM by using the GUI. For details, see Node
Configuration and Foundation Launch on page 34.

Foundation VM Upgrade
You can upgrade the Foundation VM either by using the GUI or CLI.

Note: Ensure that you use the minimum version of Foundation required by your hardware
platform. To determine whether Foundation needs an upgrade for a hardware platform, see
the respective System Specifications guide. If the nodes you want to include in the cluster are
of different models, determine which of their minimum Foundation versions is the most recent
version, and then upgrade Foundation on all the nodes to that version.

Upgrading the Foundation VM by Using the GUI


The Foundation GUI enables you to perform one-click updates either over the air or from a
.tar file that you manually upload to the Foundation VM. The over-the-air update process
downloads and installs the latest Foundation version from the Nutanix Support portal. By
design, the over-the-air update process downloads and installs a .tar file that does not include
Lenovo packages. Therefore, for Lenovo platforms, update Foundation by using an uploaded
.tar file.

Before you begin


If you want to install a .tar file of your choice (required for Lenovo platforms and optional
for other platforms), download the Foundation .tar file to the workstation that you use to
access or run Foundation. Installers are available on the Foundation download page at
https://portal.nutanix.com/#/page/Foundation.

About this task


To update the Foundation VM by using the GUI, do the following:

Procedure

1. Open the Foundation GUI.

2. On the main menu, select Settings from the pull-down list.

3. Click Upgrade Software.

4. Select the Foundation tab.


The screen displays the latest Foundation version for upgrade.



5. In the Update Foundation dialog box, do one of the following:

» (Do not use with Lenovo platforms) To perform a one-click over-the-air update, click
Update.
» (For Lenovo platforms; optional for other platforms) To update Foundation by using an
installer that you downloaded to the workstation, click Browse, browse and select the .tar
file, and then click Install.

Upgrading the Foundation VM by Using the CLI


You can upgrade the Foundation VM from version 3.1 or later to version 4.6.x by using the CLI.

About this task


Do the following:

Procedure

1. Download the Foundation upgrade bundle (foundation-<version#>.tar.gz) from the Nutanix
Support portal to the /home/nutanix/ directory.

2. Change your working directory to /home/nutanix/.

3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-<version#>.tar.gz
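
For example, if the bundle you downloaded is named foundation-5.3.1.tar.gz (a placeholder
version number; substitute the version that you actually downloaded), the session looks like
this:
$ cd /home/nutanix
$ ./foundation/bin/foundation_upgrade -t foundation-5.3.1.tar.gz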



FOUNDATION APP FOR IMAGING
Foundation is available as a native Mac or Windows application that launches the Foundation
GUI in a browser. Unlike the standalone Foundation VM, the Foundation app provides a simpler
alternative that does not require configuring and mounting a VM.

Installing Foundation App on macOS

About this task


Perform the following steps to install and launch Foundation app on macOS:

Procedure

1. Disable the "stealth mode" firewall option on macOS.

2. Download the Foundation .dmg file from the Nutanix portal.

3. Double-click the Foundation .dmg file.

4. To install the Foundation app, drag the Foundation app to the Application folder.
The Foundation app is installed.

5. Control-click the Foundation app icon, then choose Open from the shortcut menu.

Note: To upgrade the app, download and install a higher version of the app from the Nutanix
Support portal.

The Foundation GUI launches in the default browser. Alternatively, browse to
http://localhost:8000.

6. Allow the app to accept incoming connections when prompted by your Mac computer.

7. To close the app, right-click on the Foundation icon in the launcher and click Force Quit.

Installing Foundation App on Windows

About this task


To install and launch Foundation app, perform the following steps on the Windows PC.

Note: The installation stops any running Foundation process. If you initiated imaging with a
previously installed app, ensure that the operation is complete before starting the installation
again.

Procedure

1. Ensure that IPv6 is enabled on the network interface that connects the Windows PC to the
switch.

2. Download the Foundation .msi installer file from the Nutanix Support portal.



3. Perform the installation either through the wizard or as a silent installation:

» To run the installation wizard, double-click the .msi file.

» To install from the command line, run the following command:
msiexec.exe /i portable_foundation.msi /qb /l*v install.log

/qb: Displays the basic user interface. Replace it with /qb+ to also display a modal dialog
box when the installation is completed, or with /q to perform a fully silent installation.
This command saves the installation details in the install.log file, which you can use for
debugging a failed installation.

4. Double-click the Foundation icon on the desktop or start menu.

Note: To upgrade the app, download a higher version of the app from the Nutanix Support
portal and perform the installation again. The new installation stops any running Foundation
operation and updates the older version to the higher version. If you initiated Foundation with
the older app, ensure that the operation is complete before installing the higher version.

The Foundation GUI launches in the default browser. Alternatively, browse to
http://localhost:8000/gui/index.html.

Uninstalling Foundation App on macOS

About this task


Perform the following steps to uninstall the Foundation app on macOS:

Procedure

1. If the Foundation app is running, right-click on the Foundation icon in launcher and click
Force Quit.

2. Delete the downloaded Foundation .dmg installer file.

3. Drag the Foundation app from the Application folder to Trash.

Uninstalling Foundation App on Windows

About this task

Note: Uninstalling the Foundation app does not remove the log and configuration files that
the app generated. For a clean reinstallation, Nutanix recommends that you delete these files
manually.

Perform one of the following to uninstall the Foundation app on Windows:

Procedure

1. Use the Apps & features option to uninstall Foundation App on Windows.
Or

Foundation |  Foundation App for Imaging | 32


2. Run the following command in the command prompt:
msiexec.exe /X{BCD56AA1-664C-4EE8-8E01-AED3F0368234} /qb+ /l*v uninstall.log

/qb+: Displays the basic user interface and a modal dialog box when the uninstallation is
completed.
/q: Performs the silent uninstallation.
This command saves the uninstallation details in the uninstall.log file that you can use for
debugging a failed uninstallation.

Upgrading Foundation App


About this task
To upgrade the Foundation app, download a higher version of the app from the Nutanix
Support portal and perform a new installation. The new installation stops any running
Foundation operation and updates the older version to the higher version. If you initiated
Foundation with the older app, ensure that the operation is complete before performing a new
installation of the higher version.



NODE CONFIGURATION AND
FOUNDATION LAUNCH
Node Configuration and Foundation Launch
Configure the Foundation VM with the appropriate details. Perform the configuration either
automatically, by populating the details through a configuration file, or manually, by using the
GUI. The configuration file stores the values for most of the mandatory inputs sought by the
Foundation GUI. The configuration file:

• Serves as a reusable baseline file that helps you skip repeated manual entry of
configuration details.
• Lets you plan the configuration details in advance.
• Lets you invite others to review and edit your planned configuration.
• Lets you import NX nodes from a Salesforce order to avoid manually adding NX nodes.

Configuring the Foundation GUI Automatically


You can configure the Foundation GUI fields automatically by using a configuration file. This
file stores the values for most of the mandatory inputs sought by the Foundation GUI.

About this task

To configure the Foundation GUI automatically, do the following:

Procedure

1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

2. In a web browser inside the Foundation VM, browse to the http://localhost:8000/gui/


index.html URL.

3. After you assign an IP address to the Foundation VM, you can browse to the http://<foundation-
vm-ip-address>:8000/gui/index.html URL from a web browser outside the VM.

4. On the Start page, you can import a Foundation configuration file created on the
install.nutanix.com portal.
To create or edit this configuration file, log on to install.nutanix.com with your Nutanix
Support portal credentials. You can populate this file with either partial or complete
Foundation GUI configuration details.

Note: The configuration file only stores configuration details and not AOS or hypervisor
images. You can upload images only in the Foundation GUI.

a. Click Next.



5. On the Nodes page, the table lists the nodes within blocks. The following fields are
populated:

• BLOCK SERIAL
• NODE
• VLAN
• IPMI MAC
• IPMI IP
• HOST IP
• CVM IP
• HOSTNAME OF HOST
To configure values in the table, select the Tools drop-down list, and select one of the
following options:

• Add Nodes Manually: To manually add nodes to the list if they are not populated
automatically.

Note: You can manually add nodes only in standalone Foundation.

If you manually add multiple blocks in a single instance, all added blocks
get the same number of nodes. To add blocks with different numbers of nodes,
add multiple blocks with the highest number of nodes and then delete nodes from
each block, as applicable. Alternatively, you can repeat the add process to
separately add blocks with different numbers of nodes.

• Add Compute-Only Nodes: To add compute-only nodes. For more information about
compute only nodes, see Compute-Only Node Configuration (AHV Only) in the Prism
Web Console Guide.
• Range Autofill: To bulk-assign the IP addresses and hostnames for each node.

Note: Unlike CVM Foundation, standalone Foundation does not validate these IP
addresses by checking for their uniqueness. Hence, manually cross-check and ensure that
the IP addresses are unique and valid.

• Reorder Blocks: To match the order of IP addresses and hypervisor hostnames that you
want to assign.
• Select Only Failed Nodes: To select all the failed nodes.
• Remove Unselected Rows: To remove a node from the Foundation process, de-select a
node, and click the Remove Unselected Rows option from the Tools drop-down.

a. Click Next.



6. On the Cluster page, provide the cluster details, configure cluster formation, or just
image the nodes without forming a cluster. You can also enable network segmentation to
separate CVM network traffic from guest VMs and hypervisors' network traffic.

Note:

• The Cluster Virtual IP field is essential for Hyper-V clusters but optional for ESXi
and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as
a multi-line input. For best practices in configuring NTP servers, see the
Recommendations for Time Synchronization section in the Prism Web Console
Guide.

7. On the AOS page, you can specify and upload AOS images and also view the version of
the AOS image currently installed on the nodes. If the CVMs on all discovered nodes
already have the same AOS version that you want to use, skip updating the CVMs with
AOS.

8. On the Hypervisor page, you can:

• Specify and upload hypervisor image files.


• View the version of existing installed hypervisors on the nodes.
• Upload the latest hypervisor allowlist .json file.

Note:

• You can select one or more nodes to be storage-only nodes that host AHV only.
However, you must image the remaining nodes with another hypervisor and form
a multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage
the hypervisors. Hypervisor-only imaging is supported. However, imaging CVMs
with AOS without imaging the hypervisors is not supported.
• [Hyper-V only] If you choose Hyper-V from the Choose Hyper-V SKU list, select
the SKU that you want to use.
The following four Hyper-V SKUs are supported: Standard, Datacenter,
Standard with GUI, and Datacenter with GUI.

a. Click Next.

9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for
each node. Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the
Range Autofill option or assign a vendor's default IPMI credentials to all nodes.

10. Click Start.


The Installation in Progress page displays the progress status and allows you to view the
individual Log details for in-progress or completed operations of all the nodes. Click Review
Configuration to have a read-only view of the configuration details while the installation is
in progress.

Note: You can cancel an ongoing installation in standalone Foundation but not in CVM
Foundation.



Results
After all the operations are completed, the Installation finished page appears.

Note: If you missed any configuration, want to reconfigure, or perform the installation again, click
Reset to return to the Start page.

Configuring Foundation VM by Using the Foundation GUI


Before you begin
Before you configure the Foundation VM by using the GUI, review the following:

• Assign IP addresses to the hypervisor host, the Controller VMs, and the IPMI interfaces.
Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24 address
space on the default VLAN. Nutanix uses an internal virtual switch to manage network
communications between the Controller VM and the hypervisor host. This switch is
associated with a private network on the default VLAN and uses the 192.168.5.0/24 address
space. If you want to use an overlapping subnet, make sure that you use a different VLAN.
• Nutanix does not support mixed-vendor clusters. For details about restrictions on mixing
Nutanix nodes in a cluster, see the Product Mixing Restrictions section in the NX Series
Hardware Administration Guide.
• To reimage a single imaged node or to form a one-node cluster by using CVM Foundation,
launch CVM Foundation from another node. CVM Foundation can configure its own node
only if the operation includes one or more other nodes along with its own node.
• Upgrade Foundation to the required version. You can also update the foundation-
platforms submodule on Foundation. Updating the submodule enables Foundation to
support the latest hardware models or components qualified after the release of the
installed Foundation version.

Procedure

1. On the Foundation VM desktop, double-click the Nutanix Foundation icon.

2. In a web browser inside the Foundation VM, browse to the "http://localhost:8000/gui/


index.html" URL.

3. After you assign an IP address to the Foundation VM, you can browse to the http://<foundation-
vm-ip-address>:8000/gui/index.html URL from a web browser outside the VM.



4. On the Start page, configure the details either automatically or manually:

» To configure automatically:
1. Download the Foundation configuration file from the https://install.nutanix.com
portal.

Note: The configuration file only stores configuration details and not AOS or
hypervisor images. You can upload images only in the Foundation GUI.

2. Update this file with either partial or complete Foundation VM configuration details.
3. To select the updated file, click the import the configuration file link.
» To configure manually, provide the following details:

• Specify whether you want RDMA passthrough to the CVMs.

Note: Foundation does not configure the RDMA LAN. Enable RDMA after cluster
creation is complete from the Prism Web Console. For more information, see
Isolating the Backplane Traffic on an Existing RDMA Cluster in the Security
Guide.

• (Optional) Configure LACP or LAG for network connections between the nodes and
the switch.
• Assign VLANs to IPMI and CVM/host networks with standalone Foundation 4.3.2 or
later.
• Select the workstation network adapter that connects to the nodes' network.
• Specify the subnets and gateway addresses for the cluster and the IPMI network.
• Create and assign two IP addresses to the Foundation application or standalone
workstation for multi-homing.

5. On the Nodes page, select the Tools drop-down list, and select one of the following
options:

• Add Nodes Manually: Add nodes manually if they are not already populated.

Note: You can manually add nodes only in standalone Foundation.
If you manually add multiple blocks in a single instance, all added blocks get the
same number of nodes. To add blocks with different numbers of nodes, add multiple
blocks with the highest number of nodes and then delete nodes from each block, as
applicable. Alternatively, you can repeat the add process to separately add blocks
with different numbers of nodes.

• Add Compute-Only Nodes: Add compute-only nodes. For more information about
compute-only nodes, see the Compute-Only Node Configuration (AHV Only) section in
the Prism Web Console Guide.
• Range Autofill: Assign the IP addresses and hostnames in bulk for each node.

Note: Unlike CVM Foundation, standalone Foundation does not validate these IP
addresses by checking for their uniqueness. Therefore, manually cross-check and
ensure that the IP addresses are unique and valid.

• Reorder Blocks: (Optional) Reorder IP addresses and hypervisor hostnames.
• Select Only Failed Nodes: Select all the failed nodes to debug the issues.
• Remove Unselected Rows: (Optional) De-select nodes and click Remove Unselected
Rows to remove the nodes.

Note: For the AHV hypervisor, the hostname has the following restrictions:

• The maximum character length is 64.
• It consists of a-z, A-Z, 0-9, "-", and "." only.
• It must start and end with a number or letter.
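
As an informal check of a candidate hostname against these restrictions, the following shell
one-liner renders the rules as a regular expression (our own illustration, not a Nutanix-
provided tool; the name prints only if it is valid):
$ echo "ahv-host-01" | grep -E '^[A-Za-z0-9]([A-Za-z0-9.-]{0,62}[A-Za-z0-9])?$'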

6. On the Cluster page, you can:

• Provide the cluster details.


• Configure cluster formation.
• Image the nodes without creating a cluster.
• Set the time zone of the Foundation VM to your local time zone.
• Enable network segmentation to separate CVM network traffic from guest VMs and
hypervisors' network traffic.

Note:

• The Cluster Virtual IP field is essential for Hyper-V clusters but optional for ESXi
and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as
a multi-line input. For best practices in configuring NTP servers, see the
Recommendations for Time Synchronization section in the Prism Web Console
Guide.



7. On the AOS page, upload an AOS image or view the version of the AOS image currently
installed on each node.

Note: Skip updating CVMs with AOS if all discovered nodes' CVMs already have the same
AOS version that you want to use.

8. On the Hypervisor page, you can:

• Specify and upload hypervisor image files.


• View the version of existing installed hypervisors on the nodes.
• Upload the latest hypervisor allowlist JSON file that can be downloaded from the
Nutanix Support portal. This file lists the supported hypervisors.

Note:

• You can select one or more nodes to be storage-only nodes, which host AHV
only. You must image the rest of the nodes with another hypervisor and form a
multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage
the hypervisors. However, imaging CVMs with AOS without imaging the hypervisors
is not supported.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list, select
the SKU that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI,
and Datacenter with GUI.

9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for
each node. Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the
Range Autofill option or assign a vendor's default IPMI credentials to all nodes.

10. Click Start.


The Installation in Progress page displays the progress status and allows you to view the
individual Log details for in-progress or completed operations of all the nodes. Click Review
Configuration to have a read-only view of the configuration details while the installation is
in progress.

Note: You can cancel an ongoing installation in standalone Foundation but not in CVM
Foundation.

Results
After all the operations are completed, the Installation finished page appears.

Note: If you missed any configuration, want to reconfigure, or perform the installation again, click
Reset to return to the Start page.



CHECK FOUNDATION VERSION
About this task
To check the Foundation version:

Procedure

1. Log on to the Prism Web Console.

2. Click Settings (gear icon) at the top right corner.

3. Click Upgrade Software on the Settings side bar.

4. Click the Foundation tab. The Foundation version is displayed under Current Version.



POST-INSTALLATION STEPS

Configuring a New Cluster in Prism


About this task
After the cluster is created, you can configure it through the Prism Web console. A storage
pool and a container are provisioned automatically when the cluster is created, but many other
options require user input. The following are common cluster configuration tasks to perform
soon after creating a cluster.

Procedure

1. Verify that the cluster has passed the latest Nutanix Cluster Check (NCC) tests.

a. Check the installed NCC version and update it if a recent version is available (see the
Software and Firmware Upgrades section).
b. Run NCC if you downloaded a newer version or did not run it as part of the installation.
Run NCC from the command line: open an SSH session to any Controller VM in the
cluster, and then run the following command:
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for
assistance.
c. Configure NCC so that the cluster checks run and the results are emailed at your desired
frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs

where num_hrs is a positive integer of at least 4 to specify how frequently NCC runs and
results are emailed. For example, to run NCC and email results every 12 hours, specify 12;
or every 24 hours, specify 24, and so on. For other commands related to automatically
emailing NCC results, see Automatically Emailing NCC Results in the Nutanix Cluster
Check (NCC) Guide for your version of NCC.
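
For example, the 12-hour case described above looks like this:
nutanix@cvm$ ncc --set_email_frequency=12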

2. Specify the timezone of the cluster.


While logged on to the Controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone

Replace cluster_timezone with the timezone of the cluster (for example, America/
Los_Angeles, Europe/London, or Asia/Tokyo). Restart all Controller VMs in the cluster after
changing the timezone. A cluster can tolerate the outage of only a single Controller VM at a
time; hence, restart the Controller VMs in a rolling fashion. Ensure that a Controller VM is
fully operational after a restart before proceeding with the restart of the next one. For more
information about using the nCLI, see the Command Reference.
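
For example, to set the cluster timezone to Europe/London (one of the values mentioned
above):
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=Europe/London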

3. Specify an outgoing SMTP server (see the Configuring an SMTP Server section).



4. If the site security policy allows Nutanix customer support to access the cluster, enable the
remote support tunnel (see the Controlling Remote Connections section).

Caution: Failing to enable remote support prevents Nutanix Support from directly
addressing cluster issues. Nutanix recommends that all customers allow email alerts at
minimum because it allows proactive support of customer issues.

5. If the site security policy allows Nutanix Support to collect cluster status information,
enable the Pulse feature (see the Configuring Pulse section).
This information is used by Nutanix Support to send automated hardware failure alerts, as
well as diagnose potential problems and assist proactively.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert
emails (see the Configuring Email Alerts section).
You can also specify email recipients for specific alerts (see the Configuring Alert Policies
section).

7. If the site security policy permits automatic downloads of upgrade software packages for
cluster components, enable the feature (see the Software and Firmware Upgrades section).

Note: To ensure that automatic download of updates can function, allow access to the
following URLs through your firewall (see the reachability spot-check after this list):

• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
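
As a quick reachability spot-check from a Controller VM (a sketch; it assumes that the curl
utility is available and that outbound HTTP is otherwise permitted; the wildcard Amazon
hosts cannot be verified with a single command):
nutanix@cvm$ curl -sI --max-time 10 http://release-api.nutanix.com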

8. License the cluster (see the License Management section).

9. If you are using Microsoft Hyper-V hypervisor on HPE DX platform models, ensure that
the software and drivers on Hyper-V are compatible with the firmware version installed on
the nodes. For more information, see Deploying Drivers and Software on Hyper-V for HPE
DX on page 43. Perform this procedure to deploy the software and drivers after cluster
creation and before moving the nodes to production.

10. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.

• vCenter: See the Nutanix vSphere Administration Guide.


• SCVMM: See the Nutanix Hyper-V Administration Guide.

Deploying Drivers and Software on Hyper-V for HPE DX


About this task
This section is applicable only if you are using Microsoft Hyper-V on HPE DX platform models.
The Hyper-V installer ISO image is provided by Microsoft and is generally an uncustomized
image. It does not contain any HPE-specific software or drivers. HPE provides a compatible set
of drivers, software, and firmware in a single bundle called the Service Pack for ProLiant (SPP).
This bundle is delivered as a single ISO image. To be in line with HPE-supported versions, the
drivers and software in the SPP ISO image need to be deployed on top of the uncustomized
Hyper-V hypervisor installation. This section details the procedure to deploy the drivers and
software in the SPP on a node that is running Hyper-V.
Perform this procedure in the following scenarios:

• After cluster creation and before moving the nodes to production.



• After upgrading Hyper-V to the newer version.
• After upgrading the firmware to a new SPP version using Nutanix LCM.

Before you begin

• Ensure that you have login credentials for Prism.


• Ensure that you have HPE Passport login credentials. This is required to download the
appropriate HPE SPP ISO image from the HPE website. The SPP ISO image contains the
necessary software and drivers corresponding to the SPP firmware version installed on the
node.
• Ensure that the node is running the Hyper-V hypervisor.
• Ensure that a Windows or Linux workstation is available and connected to the same network
as the nodes. You use this workstation to deploy software and drivers remotely on the
nodes while they are online and running Hyper-V. The SPP ISO image uses HPE Smart
Update Manager (SUM) as the deployment tool.
For more information about HPE SUM, refer to the Smart Update Manager User Guide from
the HPE website.
The following procedure describes the steps to deploy the drivers and software (from the SPP)
on a node that is running Hyper-V.

Procedure

1. Log in to the Prism Web Console and navigate to the Life Cycle Manager page.

2. Note the SPP firmware version installed on the node. To view the SPP firmware version,
you might have to upgrade LCM to the latest version and perform an inventory operation.
For information on how to perform an inventory, see the Performing Inventory With the
Life Cycle Manager section in the Life Cycle Manager Guide.

3. Log in to the HPE website using the HPE Passport login credentials. Download the SPP
ISO image (corresponding to the SPP firmware version installed on the node) to your
workstation.

4. Mount the SPP ISO image on your workstation.

5. Run the script launch_sum on your workstation to launch the HPE Smart Update Manager
(SUM) in GUI mode using a web browser.



6. To update the Hyper-V drivers by using the Service Pack for ProLiant (SPP) ISO, perform
the steps from one of the following options (see the example after these options):

a. Option 1: For SPP version 2021.10.0 or newer.

• Shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now
• Log on to the Hyper-V host with Remote Desktop Connection and pause the Hyper-V
host in the failover cluster by using PowerShell.
Suspend-ClusterNode

b. Option 2: For SPP versions older than 2021.10.0.

Place the node in maintenance mode.
For information about how to place a node in maintenance mode, see the Placing the
Controller VM and Hyper-V Host in Maintenance Mode section in Hyper-V Administration
for Acropolis.
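
As an example of the pause step in Option 1, the cmdlet can also name the node and drain
its clustered roles explicitly (HYPERV-01 is a placeholder host name; -Name and -Drain are
standard Microsoft failover-cluster cmdlet parameters):
> Suspend-ClusterNode -Name HYPERV-01 -Drain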

7. Using the Hyper-V IP address, add the node to HPE SUM from your workstation.

8. Run the inventory on your workstation and wait for the components to be ready for
deployment.
The message Ready for deployment appears along with the list of deployable components.

9. Select the following components:

• iLO 5 Channel Interface Driver for Windows Server 2016 and Server 2019
• Identifiers for Intel Xeon Scalable Processors for Windows
• Network drivers as applicable to your node configuration

• HPE Mellanox CX4LX and CX5 Driver for Windows Server 2019
• HPE Intel i40eb Driver for Windows Server 2019
• HPE Intel i40ea Driver for Windows Server 2019
• HPE Intel ixs Driver for Windows Server 2019

Note: Ensure that the remaining components are deselected.

10. Start the installation of the updates from your workstation.

When the installation is complete, a confirmation message appears.

11. Click View Log to view the installation log file. Examine the log file to verify whether all the
selected components were updated successfully and if the node needs to be rebooted to
complete any update.



12. Restart the node by performing the following steps:

a. Shut down the node.


For information on how to shut down a node, see Shutting Down a Node in a Cluster
(Hyper-V) section in Hyper-V Administration for Acropolis.
b. Using the IPMI IP address and the login credentials of the node, log in to the iLO web
GUI of the node. Then, click the Power button and select Momentary Press to power on
the host.
c. Log in to the CVM of another node in the same cluster using SSH. Take the CVM (which
is getting updated) out of maintenance mode and verify whether the CVM has been
included back to the metadata ring.
For information on how to perform this, see Starting a Node in a Cluster (Hyper-V)
section in Hyper-V Administration for Acropolis.



HYPERVISOR ISO IMAGES
An AHV ISO image is included as part of Foundation. However, customers must provide ISO
images for other hypervisors. Check with your hypervisor manufacturer's representative, or
download an ISO image from their support site:

Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on
the VMware website (www.vmware.com) at Downloads > Product Downloads > vSphere >
Custom ISOs.

Ensure that the MD5 checksum of the hypervisor ISO image is listed in the ISO allowlist file
used by Foundation. See Verify Hypervisor Support on page 48.
The following table describes the fields that appear in the iso_whitelist.json file for each ISO
image.

Table 2: iso_whitelist.json Fields

(n/a): Displays the MD5 value for the ISO image (the field name is the MD5 checksum itself).

min_foundation: Displays the earliest Foundation version that supports this ISO image. For
example, "2.1" indicates that you can install this ISO image by using Foundation version 2.1
or later (but not an earlier version).

hypervisor: Displays the hypervisor type (esx, hyperv, or kvm). The "kvm" designation means
AHV. Entries with a "linux" hypervisor are not available; they are for Nutanix internal use
only.

min_nos: Displays the earliest AOS version compatible with this hypervisor ISO. A null value
indicates that there are no restrictions.

friendly_name: Displays a descriptive name for the hypervisor version, for example "ESX 6.0"
or "Windows 2012r2".

version: Displays the hypervisor version, for example "6.0" or "2012r2".

unsupported_hardware: Lists the Nutanix models that you cannot use with this ISO. A blank
list indicates that there are no model restrictions. However, conditional restrictions, such as
the limitation that Haswell-based models support only ESXi version 5.5 U2a or later, are
reflected in this field.

skus (Hyper-V only): Lists which Hyper-V types (datacenter and standard) are supported with
this ISO image.

compatible_versions: Reflects, through regular expressions, the hypervisor versions that can
coexist with the ISO version in an Acropolis cluster (primarily for internal use).

deprecated (optional field): Indicates that this hypervisor image is not supported by the
mentioned Foundation version and higher versions. If the value is "null", the image is
supported by all Foundation versions to date.

filesize: Displays the file size of the hypervisor ISO image.



The following sample entries are from the allowlist for an ESX and an AHV image:
"iso_whitelist": {
  "478e2c6f7a875dd3dacaaeb2b0b38228": {
    "min_foundation": "2.1",
    "hypervisor": "esx",
    "min_nos": null,
    "friendly_name": "ESX 6.0",
    "version": "6.0",
    "filesize": 329611264,
    "unsupported_hardware": [],
    "compatible_versions": {
      "esx": ["^6\\.0.*"]
    }
  },
  "a2a97a6af6a3e397b43e3a4c7a86ee37": {
    "min_foundation": "3.0",
    "hypervisor": "kvm",
    "min_nos": null,
    "friendly_name": "20160127",
    "compatible_versions": {
      "kvm": [
        "^el6.nutanix.20160127$"
      ]
    },
    "version": "20160127",
    "deprecated": "3.1",
    "unsupported_hardware": []
  },
  ...
}

Verify Hypervisor Support


The list of supported ISO images appears in an iso_whitelist.json file that Foundation uses
to validate ISO image files. The files are identified in the allowlist by their MD5 value (not
file name); therefore, verify that the MD5 value of the ISO you want to use is listed in the
allowlist file.

Before you begin


Download the latest allowlist file from the Foundation page on the Nutanix support portal
(https://portal.nutanix.com/#/page/Foundation). For information about the contents of the
allowlist file, see Hypervisor ISO Images on page 47.

About this task


To determine whether a hypervisor is supported, do the following:

Procedure

1. Obtain the MD5 checksum of the ISO that you want to use.

2. Open the downloaded allowlist file in a text editor and perform a search for the MD5
checksum.
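
For example, on a Linux workstation (ESXi-installer.iso is a placeholder file name, and the
MD5 value shown is the ESX 6.0 sample from the allowlist excerpt above; on macOS, use md5
instead of md5sum):
$ md5sum ESXi-installer.iso
478e2c6f7a875dd3dacaaeb2b0b38228  ESXi-installer.iso
$ grep '478e2c6f7a875dd3dacaaeb2b0b38228' iso_whitelist.json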

What to do next
If the MD5 checksum is listed in the allowlist file, save the file to the workstation that hosts the
Foundation VM. If the allowlist file on the Foundation VM does not contain the MD5 checksum,
you can replace that file with the downloaded file before you begin installation.



Updating an iso_whitelist.json File on Foundation VM
About this task
To update an iso_whitelist.json file on Foundation VM:

Procedure

1. On the Foundation page, click Hypervisor and select a hypervisor from the drop-down list
below Select a hypervisor installer.

2. To upload a new iso_whitelist.json file, click Manage Whitelist, and then click upload it.

3. After selecting the file, click Upload.

Note: To verify if the iso_whitelist.json is updated successfully, open the Manage Whitelist
menu and check for the date of the newly updated file.



TROUBLESHOOTING
This section provides guidance for fixing problems that might occur during a Foundation
installation.

• To set up IPMI static IP address, see Setting IPMI Static IP Address on page 50.
• For issues related to IPMI configuration of bare-metal nodes, see Fixing IPMI Configuration
Issues on page 51.
• For issues with the imaging, see Fixing Imaging Issues on page 53.

Setting IPMI Static IP Address


You can assign a static IP address for an IPMI port by resetting the BIOS configuration.

About this task

Note: Do not perform the following procedure for HPE DX series. The label on the HPE DX
chassis contains the iLO MAC address that is also used to perform MAC-based imaging.

To configure a static IP address for the IPMI port on a node, do the following:

Procedure

1. Connect a VGA monitor and USB keyboard to the node.

2. Start the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.

4. Click the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter key.

6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the dialog box.

7. Select Configuration Address Source, press Enter, and then select Static in the dialog box.

8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on
that node in the dialog box.

9. Select Subnet Mask, press Enter, and then enter the corresponding submask value in the
dialog box.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's
network gateway in the dialog box.

11. When all the field entries are updated with the correct values, press the F4 key to save the
settings and exit the BIOS setup mode.
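
If the node is already running a hypervisor with the ipmitool utility available, you can set the
same values from the command line instead of the BIOS (a sketch; LAN channel 1 is typical
but varies by platform, and the addresses are placeholders for your environment):
# Switch the BMC LAN channel to a static address source, then set the addresses
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.0.0.60
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.0.0.1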

Fixing IPMI Configuration Issues


About this task
In a bare-metal workflow, when the IPMI port configuration fails for one or more nodes in the
cluster, or the configuration works but type detection fails and an error message notifies you
that the IPMI IP address is unreachable, the installation process stops before imaging any of
the nodes. (Foundation does not proceed with the imaging if an IPMI port configuration
failure is detected, but it attempts to configure the port address on all nodes before
stopping.) Possible reasons for a failure include the following:

• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the
IPMI screen and correct the IPMI MAC and IP addresses as needed.

• There is a user name/password mismatch. Go to the IPMI page and correct the IPMI
username and password fields as required.
• One or more nodes are connected to the switch through the wrong network interface. Verify
that the first 1 GbE network interface of each node is connected to the switch (see Setting
Up the Network on page 27).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for
discovered nodes or the IPMI interface for added (bare-metal or undiscovered) nodes. This
problem typically occurs because (a) a non-flat switch is used, (b) IP addresses of the node
are not in the same subnet as the Foundation VM, and (c) multi-homing is not configured.

• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP
addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multihoming.
• The IPMI interface is not set to failover. Go to BIOS settings to verify the interface
configuration.
To identify and resolve IPMI port configuration problems, do the following:

Procedure

1. Go to the Block & Node Config screen and review the problematic IP address for the failed
nodes (nodes with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting
information. This can help to diagnose the problem. See the service.log file (in /home/
nutanix/foundation/log) and the individual node log files for more detailed information.

Figure 10: Foundation: IPMI Configuration Error
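
For example, to follow the service log mentioned above while you retry the configuration:
$ tail -f /home/nutanix/foundation/log/service.log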

2. When the issue is resolved, click the Configure IPMI button at the top of the screen.

Figure 11: Configure IPMI Button

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click Image Nodes at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, click Proceed
to bypass those nodes and continue to the imaging step for the other nodes. In this case,
configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 50).

Fixing Imaging Issues
About this task
When imaging fails for one or more nodes in the cluster, the progress bar turns red and a
red check appears next to the hypervisor address field for any node that was not imaged
successfully. Possible reasons for a failure include the following:

• A type failure was detected. Check connectivity to the IPMI (bare-metal workflow).
• There are network connectivity issues such as the following:

• The connection is dropping intermittently. If intermittent failures persist, look for
conflicting IP addresses.
• [Hyper-V only] The SAMBA service is not running. If Hyper-V displays a warning that it
failed to mount the install share, restart SAMBA with the command "sudo service smb
restart".
• Foundation ran out of disk space during the hypervisor or Phoenix preparation phase. Free
up some space by deleting unwanted ISO images. In addition, a Foundation crash could
leave a /tmp/tmp* directory that contains a copy of an ISO image that you can unmount (if
necessary) and delete. Foundation requires 9 GB of free space for Hyper-V and 3 GB for
ESXi or AHV (see the check after this list).
• The host boots but returns an error indicating an issue with reaching the Foundation VM.
The message varies per hypervisor. For example, on ESXi you might see a ks.cfg:line 12:
"/.pre" script returned with an error error message. Ensure that the host is assigned an IP
address on the same subnet as the Foundation VM or that multihoming is configured. Also
check for IP address conflicts.
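
For example, to check the free space referenced in the disk-space item above:
$ df -h /home/nutanix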
To identify and resolve imaging problems, do the following:

Procedure

1. See the individual log files for any failed nodes for information about the problem.

• Controller VM location for Foundation logs: ~/data/logs/foundation and ~/data/logs/
foundation.out[.timestamp]
• Bare-metal location for Foundation logs: /home/nutanix/foundation/log

2. When the issue is resolved, click the Image Nodes (bare metal workflow) button.

Figure 12: Image Nodes Button (bare-metal)

3. Repeat the preceding steps as necessary to resolve any other imaging errors.
If you are unable to resolve the issue for one or more of the nodes, it is possible to image
these nodes one at a time (contact Nutanix Support for help).

COPYRIGHT
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

