Field Installation Guide v5 - 3
Foundation Considerations.......................................................................................5
Foundation Use Case Matrix.................................................................................................................................5
CVM vCPU and vRAM Allocation....................................................................................................................... 8
Hyper-V Installation Requirements.....................................................................................................................8
Network Requirements............................................................................................................................................11
Post-Installation Steps............................................................................................. 42
Configuring a New Cluster in Prism................................................................................................................42
Deploying Drivers and Software on Hyper-V for HPE DX......................................................................43
Troubleshooting..........................................................................................................50
Setting IPMI Static IP Address.......................................................................................................................... 50
Fixing IPMI Configuration Issues........................................................................................................................51
Fixing Imaging Issues............................................................................................................................................ 53
Copyright.......................................................................................................................54
FIELD INSTALLATION OVERVIEW
For a node to join a Nutanix cluster, it must have a hypervisor and AOS combination that
Nutanix supports. AOS is the operating system of the Nutanix Controller VM, which is a VM
that must be running in the hypervisor to provide Nutanix-specific functionality. Find the
complete list of supported hypervisor/AOS combinations at https://portal.nutanix.com/page/
documents/compatibility-interoperability-matrix.
Foundation is the official deployment software of Nutanix. Foundation allows you to configure
a pre-imaged node, or image a node with a hypervisor and an AOS of your choice. Foundation
also allows you to form a cluster out of nodes whose hypervisor and AOS versions are
the same, with or without re-imaging. Foundation is available for download at https://
portal.nutanix.com/#/page/Foundation.
If you already have a running cluster and want to add nodes to it, you must use the Expand
Cluster option in Prism, instead of using Foundation. Expand Cluster allows you to directly re-
image a node whose hypervisor/AOS version does not match the cluster's version, or a node
that is only running DiscoveryOS. More details on DiscoveryOS are provided below.
Nutanix and its OEM partners install some software on a node at the factory, before shipping
it to the customer. For shipments inside the USA, this software is a hypervisor and an AOS.
For Nutanix factory nodes, the hypervisor is AHV. In case of the OEM factories, it is up to the
vendor to decide what hypervisor to ship to the customer. However, they always install AOS,
regardless of the hypervisor.
For shipments outside the USA, Nutanix installs lightweight software called DiscoveryOS,
which allows the node to be discovered in Foundation or in the Expand Cluster option of Prism.
Since a node with DiscoveryOS is not pre-imaged with a hypervisor and an AOS, it must go
through imaging first before joining a cluster. Both Foundation and Expand Cluster allow you to
directly image it with the correct hypervisor and AOS.
Vendors who do not have an OEM agreement with Nutanix ship a node without any software
(not even DiscoveryOS) installed on it. Foundation supports bare-metal imaging of such nodes.
In contrast, Expand Cluster does not support direct bare-metal imaging. Therefore, if you want
to add a software-less node to an existing cluster, you must first image it using Foundation,
then use Expand Cluster.
This document only explains procedures that apply to NX and OEM nodes. For non-OEM nodes,
you must perform the bare-metal imaging procedures specifically adapted for those nodes.
For those procedures, see the vendor-specific field installation guides, available on the Nutanix
Support portal at https://portal.nutanix.com/page/documents/list?type=compatibilityList.
Note: Mixed-vendor clusters are not supported. For more information, see the Product Mixing
Restrictions in the NX Series Hardware Administration Guide.
• To re-image factory-prepared nodes, or create a cluster from these nodes, or both, see
Prepare Factory-Imaged Nodes for Imaging on page 13.
• To image bare-metal nodes and optionally configure them into a cluster, see Prepare Bare-
Metal Nodes for Imaging on page 19.
If IPv6 is disabled: cannot image nodes; IPMI IPv4 is required on the nodes.
Note: If you have a Volume Shadow Copy Service (VSS) based backup tool (for example,
Veeam), the functional level of Active Directory must be 2008 or higher.
• Install and run Active Directory Web Services (ADWS). By default, connections are made
over TCP port 9389 and firewall policies enable an exception on this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on as a domain
administrator to a Windows host in the same domain with the RSAT-AD-Powershell feature
installed. Execute the following PowerShell command:
> (Get-ADDomainController).Name
If the command prints the primary name of the domain controller, then ADWS is installed and
the port is open.
Note: If any of the preceding requirements are not met, you must manually create an Active
Directory computer object for the Nutanix storage in the Active Directory, and add a DNS
entry for the name.
• Ensure that the Active Directory domain is configured correctly for consistent time
synchronization.
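A quick hedged way to spot-check time synchronization from a domain-joined Windows host or a
domain controller (assuming the standard w32tm utility) is:
> w32tm /query /status
> w32tm /query /source
If the reported time source is not reliable, correct the domain time hierarchy before proceeding.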
Accounts and Privileges:
• An Active Directory account with permission to create new Active Directory computer
objects for either a storage container or Organizational Unit (OU) where Nutanix nodes are
placed. The credentials of this account are not stored anywhere.
• An account that has sufficient privileges to join a Windows host to a domain. The credentials
of this account are not stored anywhere. These credentials are only used to join the hosts to
the domain.
The following additional information is required:
Note: The primary domain controller IP address is set as the primary DNS server on all the
Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to synchronize
time between Controller VMs, hosts and Active Directory.
• The fully qualified domain name to which the Nutanix hosts and the storage cluster is going
to be joined.
Requirements:
• The SCVMM version must be 2012 R2 or later. The tool must be installed on a Windows
Server 2012 or a later version.
• The SCVMM server must allow PowerShell remote execution.
To test this scenario, log on to a Windows host by using the SCVMM administrator account
and run the following PowerShell command on a Windows host that is different from the
SCVMM host (for example, run the command from the domain controller). If the command
returns the name of the SCVMM server, then PowerShell remote execution on the SCVMM
server is permitted.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN
\username
Replace scvmm_server with the SCVMM host name and MYDOMAIN with the Active Directory domain
name.
Note: If the SCVMM server does not allow PowerShell remote execution, you can perform the
SCVMM setup manually by using the SCVMM user interface.
• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify,
run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential
MYDOMAIN\username
Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with the Active Directory
domain name.
• The SMB client configuration in the SCVMM server must have RequireSecuritySignature set
to False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration |
FL RequireSecuritySignature}
If you are changing it from True to False, confirm that the policies on the SCVMM host have
the correct value. Changing the value of RequireSecuritySignature might not take effect if
a policy with the opposite value exists. On the SCVMM host, run rsop.msc to review the
resultant set of policy details, and verify the value by going to Servername > Computer
Configuration > Windows Settings > Security Settings > Local Policies > Security Options >
Policy: Microsoft network client: Digitally sign communications (always). The value displayed
in RSOP must be Disabled or Not Defined for the change to persist. If RSOP shows the value
as Enabled, update the group policies that are configured in the domain to apply to the
SCVMM server to Disabled; otherwise, RequireSecuritySignature reverts to True. After setting
the policy in Active Directory and propagating it to the domain controllers, refresh the policy
on the SCVMM server (for example, by running gpupdate).
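As a hedged sketch (not a required step), the setting can also be changed and re-checked
remotely with the Set-SmbClientConfiguration cmdlet; scvmm_server_name, MYDOMAIN, and
username are placeholders as in the preceding commands:
> Invoke-Command -ComputerName scvmm_server_name -Credential MYDOMAIN\username -ScriptBlock
{Set-SmbClientConfiguration -RequireSecuritySignature $false -Force}
> Invoke-Command -ComputerName scvmm_server_name -Credential MYDOMAIN\username -ScriptBlock
{Get-SmbClientConfiguration | FL RequireSecuritySignature}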
Note: If security signing is mandatory, then you must enable Kerberos in the Nutanix cluster.
In this case, it is important to ensure that the time remains synchronized between the Active
Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and
the Controller VMs use the Active Directory server as the NTP server, so ensure that the
Active Directory domain is configured correctly for consistent time synchronization.
• When adding a host or a cluster to the SCVMM, the run-as account that is used to manage
the host or the cluster must be different from the service account that was used to install
SCVMM.
• Run-as account must be a domain account and must have local administrator privileges on
the Nutanix hosts. This can be a domain administrator account. When the Nutanix hosts are
joined to the domain, the domain administrator account automatically takes administrator
privileges on the host. If the domain account used as the run-as account in SCVMM is not a
domain administrator account, you must manually add the run-as account to the list of local
administrators on each host by running sconfig.
• SCVMM domain account with administrator privileges on SCVMM and PowerShell remote
execution privileges.
• If you want to install SCVMM server, a service account with local administrator privileges on
the SCVMM server.
IP Addresses
Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same
subnet.
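For example, assuming the two additional addresses are the storage cluster virtual IP and the
Hyper-V failover cluster IP, a four-node cluster requires 3*4 + 2 = 14 addresses: one IPMI, one
hypervisor host, and one Controller VM address per node, plus the two cluster-level addresses.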
DNS Requirements
• Each Nutanix host must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server during domain joining.
• The Nutanix storage cluster must be assigned a name of 15 characters or less. Add this
name to the DNS server when the storage cluster joins the domain.
• The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server when the failover cluster is created.
• After the Hyper-V configuration, all names must be resolvable to an IP address from the
Nutanix hosts, the SCVMM server (if applicable), and any other host that needs access to the
Nutanix storage, for example, a host running the Hyper-V Manager.
• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by
FQDN and not the external IP address. If you use the IP address, it directs all the I/O to a
single node in the cluster and compromises performance and scalability.
Note: For external non-Nutanix hosts that require access to Nutanix SMB shares, see Nutanix
SMB Shares Connection Requirements from Outside the Cluster.
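As a purely illustrative example of the FQDN requirement above (hypothetical names), a virtual
disk path should take the form \\ntnx-cluster.mydomain.local\container1\vm1\vm1.vhdx rather
than \\10.1.1.50\container1\vm1\vm1.vhdx.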
• When applying Windows updates to the Nutanix hosts, restart the hosts one at a time,
ensuring that the Nutanix services of the Controller VM on the restarted host come up fully
before proceeding with the update of the next host. You can accomplish this by using
Cluster-Aware Updating together with a Nutanix-provided script, which can be plugged into
the Cluster-Aware Updating manager as a pre-update script. This pre-update script ensures
that the Nutanix services are restarted on one host at a time, maintaining availability of
storage throughout the update procedure. For more information about cluster-aware
updating, see Installing Windows Updates with Cluster-Aware Updating. A sketch of such a
run is shown after the following note.
Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.
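The following is a minimal sketch of an orchestrated Cluster-Aware Updating run; it assumes the
-PreUpdateScript parameter of Invoke-CauRun, a placeholder cluster name, and a placeholder
path to the Nutanix-provided pre-update script:
> Invoke-CauRun -ClusterName ntnx-hyperv-cluster -CauPluginName Microsoft.WindowsUpdatePlugin
-MaxFailedNodes 0 -RequireAllNodesOnline -PreUpdateScript C:\Scripts\nutanix_pre_update.ps1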
• If you place a host that is managed by SCVMM in maintenance mode, the Controller VM
running on the host is placed in the saved state by default, which might create issues. To
properly place a host in maintenance mode, see SCVMM Operation in the Hyper-V
Administration for Acropolis Guide.
• If you are using Microsoft Hyper-V on HPE DX models, ensure that the software and drivers
on Hyper-V are compatible with the firmware version installed on the nodes. For more
information, see Deploying Drivers and Software on Hyper-V for HPE DX on page 43.
Network Requirements
When configuring a Nutanix block, a set of IP addresses must be allocated to the cluster.
Ensure that the chosen IP addresses do not overlap with any hosts or services in the
environment. You must also open the software ports that are used to manage cluster
components and to enable communication between components such as the Controller VM,
Web console, Prism Central, hypervisor, and the Nutanix hardware. Nutanix recommends that
you specify information such as a DNS server and NTP server even if the cluster is not
connected to the Internet or runs in a non-production environment.
• Default gateway
• Network mask
• DNS server
• NTP server
Check whether a proxy server is in place in the network. If a proxy server is in place, you need
its IP address and port number when enabling Nutanix Support on the cluster.
• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than
the Controller VMs and hypervisor hosts can be on this network.
• Make sure that the nodes that you want to image are factory-prepared nodes that have not
been configured in any way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see
Mounting the Block in the Getting Started Guide. For installation instructions specific to your
model type, see Rack Mounting in the Nutanix Rack Mounting Guide.
Your workstation must be connected to the network on the same subnet as the nodes you
want to image. Foundation does not require an IPMI connection or any special network
port configuration to image discovered nodes. See Network Requirements for general
information about the network topology and port access required for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name,
virtual IP address), and node (Controller VM, hypervisor, and IPMI IP address ranges)
parameter values needed for installation.
Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.
Note: Nutanix uses an internal virtual switch to manage network communications between
the Controller VM and the hypervisor host. This switch is associated with a private network
on the default VLAN and uses the 192.168.5.0/24 address space. For the hypervisor, IPMI
interface, and other devices on the network (including the guest VMs that you create on
the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on the default
VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a
different VLAN.
Note: It is important that you unlock the SEDs before imaging in order to prevent any data
loss. To unlock the SEDs, contact Nutanix Support or refer to the KB article 000003750 in the
Nutanix Support portal.
Note: This method can image discovered nodes or create a single cluster from discovered nodes
or both. This method is limited to factory prepared nodes running AOS 4.5 or later. If you want to
image factory prepared nodes running an earlier AOS (NOS) version or image bare-metal nodes,
see Prepare Bare-Metal Nodes for Imaging on page 19.
Procedure
1. Run discovery and launch Foundation (see Node Discovery and Foundation Launch on
page 14).
2. Update Foundation to the latest version (see Upgrading the CVM Foundation by Using the
Foundation Java Applet on page 18).
3. Run CVM Foundation (see Configuring Foundation VM by Using the Foundation GUI on
page 37).
4. After the cluster is created successfully, begin configuring the cluster (see Configuring a
New Cluster in Prism on page 42).
Procedure
3. Extract the contents of the downloaded ZIP file into the workstation that you want to use for
imaging, and then double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.
Note: A security warning message may appear to indicate that the list of nodes is from an
unknown source. To run the application, click Accept and Run.
Note: Use the network configuration tool only on factory-prepared nodes that are not part of a
cluster. Using the tool on a node that is part of a cluster makes the node inaccessible to the other
nodes in the cluster. If this happens, the only way to resolve the issue is to reconfigure the node
to the previous IP addresses by using the network configuration tool again.
Procedure
1. Connect a console to one of the nodes and log on to the Acropolis host by using the root
credentials.
a. Review the network card details to ascertain interface properties and identify connected
interfaces.
b. Use the arrow keys to go to the interface that you want to configure, and then use the
Spacebar key to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the
following parameters:
Launching Foundation
Launching Foundation depends on whether you used the Foundation Applet to discover nodes
in the same broadcast domain or the crash cart user interface to discover nodes in a VLAN-
segmented network.
• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the
following:
Note: A warning message may appear stating that the selected node does not have the
highest Foundation version found among the discovered nodes. If you select a node
with an earlier Foundation version (one that does not recognize one or more of the node
models), installation may fail when Foundation attempts to image a node of an unknown
model. Therefore, select the node with the highest Foundation version among the nodes to
be imaged. If you do not intend to image any of the nodes that have the higher Foundation
version, you can ignore the warning and proceed.
b. (Optional but recommended) Upgrade Foundation on the selected node to the latest
version. See Upgrading the CVM Foundation by Using the Foundation Java Applet on
page 18.
c. With the node having the latest Foundation version selected, click the Launch Foundation
button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory
prepared nodes that are not part of a cluster) and then displays information about
the discovered blocks and nodes in the Discovered Nodes screen. (It does not display
information about nodes that are powered off or in a different subnet.) The discovery
process normally takes just a few seconds.
Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.
• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in
a browser on your workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when
using the network configuration tool.
Note: Ensure that you use the minimum version of Foundation required by your hardware
platform. To determine whether Foundation needs an upgrade for a hardware platform, see
the respective System Specifications guide. If the nodes you want to include in the cluster are
of different models, determine which of their minimum Foundation versions is the most recent
version, and then upgrade Foundation on all the nodes to that version.
Procedure
1. In the Foundation Java applet, select the node on which you want to upgrade the CVM
Foundation.
3. Browse to the folder where you downloaded the Foundation .tar file and double-click the
.tar file.
The upgrade process begins. After the upgrade completes, Genesis restarts on the node, and
that in turn restarts the Foundation service. After the Foundation service becomes available,
the upgrade process reports the status of the upgrade.
• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the NX
Series Hardware Administration Guide for your model type. For installing hardware from any
other manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Preparing the Workstation on page 20).
• Ensure that you have the appropriate global, node, and cluster parameter values needed for
installation. The use of a DHCP server is not supported for Controller VMs, so make sure to
assign static IP addresses to Controller VMs.
Note: If the Foundation VM is configured with an IP address that is different from other
clusters that require imaging in a network (for example Foundation VM is configured with a
public IP address while the cluster resides in a private network), repeat step 8 in Installing
the Foundation VM on page 22 to configure a new static IP address for the Foundation
VM.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before
imaging the nodes. If the nodes contain only SEDs, enable encryption after you image the
nodes. If the nodes contain both regular hard disk drives (HDDs) and SEDs, do not enable
encryption on the SEDs at any time during the lifetime of the cluster.
For information about enabling and disabling encryption, see the Data-at-Rest Encryption
chapter in the AOS Security Guide.
Note: It is important that you unlock the SEDs before imaging in order to prevent any data
loss. To unlock the SEDs, contact Nutanix Support or refer to the KB article 000003750 in the
Nutanix Support portal.
Note: After you prepare the bare-metal nodes for Foundation, configure the Foundation VM by
using the GUI. For details, see Node Configuration and Foundation Launch on page 34.
• To avoid Foundation timeout errors, make sure that the CD-ROM appears first in the BIOS
boot order.
• If Spanning Tree Protocol (STP) is enabled on the switch ports that are connected to the
Nutanix hosts, Foundation might time out during the imaging process. Therefore, disable STP
on those ports.
Recommendations
Limitations
• Configuration of a node's virtual switches to use LACP. Perform this configuration manually
after imaging.
• Configuration of network adapters to use jumbo frames during imaging. Perform this
configuration manually after imaging.
1. Go to the Nutanix Support portal and download the following files to a temporary directory
on the workstation:
• Foundation_VM-version#-disk1.vmdk: the Foundation VM VMDK file for the version#
release (for example, Foundation_VM-3.1-disk1.vmdk).
• nutanix_installer_package-version#.tar.gz: the file used for imaging the nodes with the
desired AOS release. On the Nutanix Support portal, go to Downloads > AOS (NOS).
2. Download the installer for Oracle VM VirtualBox (a free open source tool used to create
a virtualized environment on the workstation) and install it with the default options. For
installation and start-up instructions (https://www.virtualbox.org/wiki/Documentation), see
the Oracle VM VirtualBox User Manual.
Note: You can also use any other virtualization environment (VMware ESXi, AHV, and so on)
instead of Oracle VM VirtualBox.
Note: If the tar utility is not available, use the appropriate utility for your environment.
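As a hedged sketch of the extraction step on a Linux or macOS workstation (the bundle file
name is a placeholder for the file you actually downloaded):
$ tar -xf Foundation_VM-version#.tar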
4. Copy the extracted files to the VirtualBox VMs folder that you created.
Procedure
2. Click the File menu and select Import Appliance... from the pull-down list.
3. In the Import Virtual Appliance dialog box, browse to the location of the Foundation .ovf
file, and select the Foundation_VM-version#.ovf file.
4. Click Next.
5. Click Import.
7. If the login screen appears, log on as the nutanix user with password nutanix/4u.
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions
CD Image... from the menu.
d. After the installation completes, press the return key to close the VirtualBox Guest
Additions installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Restart the Foundation VM by selecting System > Shutdown... > Restart from the Linux
GUI.
g. After the Foundation VM restarts, select Devices > Drag 'n' Drop > Bidirectional from the
menu.
Note: The Foundation VM must be on a public network to copy the selected ISO files to the
Foundation VM. This requires setting a static IP address and setting it again when
the workstation is on a different (typically private) network for the installation.
• Go to System > Preferences > Network connections > ipv4 settings and provide
the following details:
• IP address
• Gateway
• Netmask
• Restart the VM.
Note: Use only the keyboard to make selections in the terminal window. Mouse
clicks do not work.
Note: You can also upload the AOS and Hypervisors installation files using the Foundation UI.
• To upload the AOS installation files, go to the AOS page on the Foundation UI >
Manage AOS files > Add.
• To upload the hypervisor installation files, go to the Hypervisor page on the
Foundation UI > select the hypervisor type > Manage <Hypervisor> files > Add.
Procedure
• AHV ISO image. To use an AHV bundle other than the one bundled with AOS, download the
bundle to /home/nutanix/foundation/isos/hypervisor/kvm on the Foundation VM.
• Enable IPv6 on the network to which the nodes are connected and ensure that IPv6
multicast is supported.
• For Lenovo HX and HPE nodes, use a 10GbE switch to image via Foundation.
• For the NX N-SKU, connect two uplink cables because there is no shared LOM port for BMC
traffic on the NX N-SKU. Use the dedicated BMC port for BMC traffic.
Procedure
Note: If you use a shared IPMI port to reinstall the hypervisor, follow the instructions in KB
3834.
• Nutanix NX Series: Connect the dedicated IPMI port and any one of the data ports to the
switch. Ensure that you use the dedicated IPMI port instead of the shared IPMI port. In
addition to the dedicated IPMI port, we highly recommend that you use a 10G data port.
You can use a 1G port instead of a 10G data port at the cost of increased imaging time or
imaging failure. If you use SFP+ 10G NICs and a 1G RJ45 switch for imaging, connect the
10G data port to the switch by using one of our approved GBICs. If the BMC is configured
to use it, you can also use the shared IPMI/1G data port in place of the dedicated port.
However, the shared IPMI/1G data port is less reliable than the dedicated port. The IPMI
LAN interfaces of the nodes must be in failover mode (factory default setting).
Nutanix is compatible with any IEEE 802.3 compliant Ethernet switches. SFP module
compatibility is determined by the switch vendor. Cables shipped with Nutanix appliances
will work with most switch vendors. Certain switch vendors will not work with third party
SFP modules. Use the switch vendor's preferred SFP.
If you choose to use the shared IPMI port on G4 and later platforms, make sure that the
connected switch can auto-negotiate to 100 Mbps. This auto-negotiation capability is
required because the shared IPMI port can support 1 Gbps throughput only when the host is
powered on.
2. Connect the NIC with the maximum speed on the nodes to the switch.
What to do next
Update the Foundation VM service, if necessary. After you prepare the bare-metal nodes for
Foundation, configure the Foundation VM by using the GUI. For details, see Node Configuration
and Foundation Launch on page 34.
Foundation VM Upgrade
You can upgrade the Foundation VM either by using the GUI or CLI.
Note: Ensure that you use the minimum version of Foundation required by your hardware
platform. To determine whether Foundation needs an upgrade for a hardware platform, see
the respective System Specifications guide. If the nodes you want to include in the cluster are
of different models, determine which of their minimum Foundation versions is the most recent
version, and then upgrade Foundation on all the nodes to that version.
Procedure
» (Do not use with Lenovo platforms) To perform a one-click over-the-air update, click
Update.
» (For Lenovo platforms; optional for other platforms) To update Foundation by using an
installer that you downloaded to the workstation, click Browse, browse and select the .tar
file, and then click Install.
Procedure
3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-<version#>.tar.gz
Procedure
4. To install the Foundation app, drag the Foundation app to the Applications folder.
The Foundation app is installed.
5. Control-click the Foundation app icon, then choose Open from the shortcut menu.
Note: To upgrade the app, download and install a higher version of the app from the Nutanix
Support portal.
The Foundation GUI is launched in the default browser or you can browse to "http://
localhost:8000".
6. Allow the app to accept incoming connections when prompted by your Mac computer.
7. To close the app, right-click on the Foundation icon in the launcher and click Force Quit.
Note: The installation stops any running Foundation process. If you initiated Foundation
with a previously installed app, ensure that the operation is complete before launching the
installation again.
Procedure
1. Ensure that IPv6 is enabled on the network interface that connects the Windows PC to the
switch.
2. Download the Foundation .msi installer file from the Nutanix Support portal.
Note: To upgrade the app, download a higher version of the app from the Nutanix Support
portal and perform the installation again. The new installation stops any running Foundation
operation and updates the older version to the higher version. If you initiated Foundation with
the older app, ensure that the operation is complete before doing a new installation of the
higher version.
The Foundation GUI is launched in the default browser or you can browse to "http://
localhost:8000/gui/index.html".
Procedure
1. If the Foundation app is running, right-click on the Foundation icon in launcher and click
Force Quit.
Note: Uninstalling the Foundation app does not remove the log and configuration files that the
app generated. For a clean installation, Nutanix recommends that you delete these files manually.
Procedure
1. Use the Apps & features option to uninstall Foundation App on Windows.
Or
/qb+: Displays the basic user interface and a modal dialog box when the uninstallation is
completed.
/q: Performs the silent uninstallation.
This command saves the uninstallation details in the uninstall.log file that you can use for
debugging a failed uninstallation.
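As a hedged illustration of the options above, a standard msiexec-based uninstall with logging
might look like the following; the .msi file name is a placeholder:
> msiexec /x foundation.msi /qb+ /l*v uninstall.log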
• Serves as a reusable baseline file that helps you avoid repeated manual entry of configuration details.
• Plan the configuration details in advance.
• Invite others to review and edit your planned configuration.
• Import NX nodes from a Salesforce order to avoid manually adding NX nodes.
Procedure
3. After you assign an IP address to the Foundation VM, browse to the http://<foundation-
vm-ip-address>:8000/gui/index.html URL from a web browser outside the VM.
4. On the Start page, you can import a Foundation configuration file created on the
install.nutanix.com portal.
To create or edit this configuration file, log into install.nutanix.com with your Nutanix
Support portal credentials. You can populate this file with either partial or complete
Foundation GUI configuration details.
Note: The configuration file only stores configuration details and not AOS or hypervisor
images. You can upload images only in the Foundation GUI.
a. Click Next.
• BLOCK SERIAL
• NODE
• VLAN
• IPMI MAC
• IPMI IP
• HOST IP
• CVM IP
• HOSTNAME OF HOST
To configure values to the table, select the Tools drop-down list, and select one of the
following options:
• Add Nodes Manually: To manually add nodes to the list if they are not populated
automatically.
• Add Compute-Only Nodes: To add compute-only nodes. For more information about
compute only nodes, see Compute-Only Node Configuration (AHV Only) in the Prism
Web Console Guide.
• Range Autofill: To bulk-assign the IP addresses and hostnames for each node.
Note: Unlike CVM Foundation, standalone Foundation does not validate these IP
addresses for uniqueness. Therefore, manually cross-check and ensure that
the IP addresses are unique and valid.
• Reorder Blocks: To match the order of IP addresses and hypervisor hostnames that you
want to assign.
• Select Only Failed Nodes: To select all the failed nodes.
• Remove Unselected Rows: To remove a node from the Foundation process, de-select a
node, and click the Remove Unselected Rows option from the Tools drop-down.
a. Click Next.
Note:
• The Cluster Virtual IP field is essential for Hyper-V clusters but optional for ESXi
and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as
a multi-line input. For best practices in configuring NTP servers, see the
Recommendations for Time Synchronization section in the Prism Web Console
Guide.
7. On the AOS page, you can specify and upload AOS images and also view the version of
the AOS image currently installed on the nodes. If the CVMs of all discovered nodes already
have the AOS version that you want to use, skip updating the CVMs with AOS.
Note:
• You can select one or more nodes to be storage-only nodes that host AHV only.
However, you must image the remaining nodes with another hypervisor and form a
multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage
the hypervisors. Hypervisor-only imaging is supported. However, do not image
CVMs with AOS before imaging the hypervisors.
• [Hyper-V only] If you choose Hyper-V from the Choose Hyper-V SKU list, select
the SKU that you want to use.
The following four Hyper-V SKUs are supported: Standard, Datacenter,
Standard with GUI, and Datacenter with GUI.
a. Click Next.
9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for
each node. Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the
Range Autofill option or assign a vendor's default IPMI credentials to all nodes.
Note: You can cancel an ongoing installation in standalone Foundation but not in CVM
Foundation.
Note: If you missed any configuration, want to reconfigure, or perform the installation again, click
Reset to return to the Start page.
• Assign IP addresses to the hypervisor host, the Controller VMs, and the IPMI interfaces.
Do not assign IP addresses from a subnet that overlaps with the 192.168.5.0/24 address
space on the default VLAN. Nutanix uses an internal virtual switch to manage network
communications between the Controller VM and the hypervisor host. This switch is
associated with a private network on the default VLAN and uses the 192.168.5.0/24 address
space. If you want to use an overlapping subnet, make sure that you use a different VLAN.
• Nutanix does not support mixed-vendor clusters. For restriction details, see the Product
Mixing Restrictions section in the NX Series Hardware Administration
Guide.
• To reimage a single node or form a one-node cluster by using CVM Foundation, ensure
that you launch CVM Foundation from another node. CVM Foundation can configure its
own node only if the operation includes one or more other nodes along with its own node.
• Upgrade Foundation to a higher or relevant version. You can also update the foundation-
platforms submodule in Foundation. Updating the submodule enables Foundation to
support the latest hardware models or components qualified after the release of the
installed Foundation version.
Procedure
3. After you assign an IP address to the Foundation VM, browse to the http://<foundation-
vm-ip-address>:8000/gui/index.html URL from a web browser outside the VM.
» To configure automatically:
1. Download the Foundation configuration file from the https://install.nutanix.com
portal.
Note: The configuration file only stores configuration details and not AOS or
hypervisor images. You can upload images only in the Foundation GUI.
2. Update this file with either partial or complete Foundation VM configuration details.
3. To select the updated file, click the import the configuration file link.
» To configure manually, provide the following details:
Note: Foundation does not configure the RDMA LAN. Enable RDMA after cluster
creation is complete from the Prism Web Console. For more information, see
Isolating the Backplane Traffic on an Existing RDMA Cluster in the Security
Guide.
• (Optional) Configure LACP or LAG for network connections between the nodes and
the switch.
• Assign VLANs to IPMI and CVM/host networks with standalone Foundation 4.3.2 or
later.
• Select the workstation network adapter that connects to the nodes' network.
• Specify the subnets and gateway addresses for the cluster and the IPMI network.
• Create and assign two IP addresses to the Foundation application or standalone
workstation for multi-homing.
5. On the Nodes page, select the Tools drop-down list, and select one of the following
options:
• Add Nodes Manually: Add nodes manually if they are not already populated.
• Select Only Failed Nodes: Select all the failed nodes to debug the issues.
Note: For AHV hypervisor, the hostname has the following restrictions:
Note:
• The Cluster Virtual IP field is essential for Hyper-V clusters but optional for ESXi
and AHV clusters.
• To provide multiple DNS or NTP servers, enter a list of IP addresses as
a multi-line input. For best practices in configuring NTP servers, see the
Recommendations for Time Synchronization section in the Prism Web Console
Guide.
Note: Skip updating CVMs with AOS if all discovered nodes' CVMs already have the same
AOS version that you want to use.
Note:
• You can select one or more nodes to be storage-only nodes, which host AHV
only. You must image the rest of the nodes with another hypervisor and form a
multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still reimage
the hypervisors. However, imaging CVMs with AOS without imaging the hypervisors
is not supported.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list, select
the SKU that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI,
and Datacenter with GUI.
9. (For standalone Foundation only) On the IPMI page, provide the IPMI access credentials for
each node. Standalone Foundation needs IPMI remote access to all the nodes.
To provide credentials for all the nodes, select the Tools drop-down list to either use the
Range Autofill option or assign a vendor's default IPMI credentials to all nodes.
Note: You can cancel an ongoing installation in standalone Foundation but not in CVM
Foundation.
Results
After all the operations are completed, the Installation finished page appears.
Note: If you missed any configuration, want to reconfigure, or perform the installation again, click
Reset to return to the Start page.
Procedure
4. Click the Foundation tab. The Foundation version is displayed under Current Version.
Procedure
1. Verify that the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a recent version is available (see the
Software and Firmware Upgrades section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Run NCC from the command line. Open an SSH session to any Controller VM
in the cluster, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for
assistance.
c. Configure NCC so that the cluster checks run and the results are emailed at your desired
frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC runs and
results are emailed. For example, to run NCC and email results every 12 hours, specify 12;
or every 24 hours, specify 24, and so on. For other commands related to automatically
emailing NCC results, see Automatically Emailing NCC Results in the Nutanix Cluster
Check (NCC) Guide for your version of NCC.
Replace cluster_timezone with the timezone of the cluster (for example, America/
Los_Angeles, Europe/London, or Asia/Tokyo). Restart all Controller VMs in the cluster after
changing the timezone. A cluster can tolerate the outage of only one Controller VM at a
time, so restart the Controller VMs in a rolling fashion, and ensure that each Controller VM
is fully operational after a restart before restarting the next one. For more information about
using the nCLI, see the Command Reference.
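As a sketch, assuming the standard ncli cluster set-timezone syntax that this step refers to:
nutanix@cvm$ ncli cluster set-timezone timezone=cluster_timezone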
3. Specify an outgoing SMTP server (see the Configuring an SMTP Server section).
Caution: Failing to enable remote support prevents Nutanix Support from directly
addressing cluster issues. Nutanix recommends that all customers allow email alerts at
minimum because it allows proactive support of customer issues.
5. If the site security policy allows Nutanix Support to collect cluster status information,
enable the Pulse feature (see the Configuring Pulse section).
This information is used by Nutanix Support to send automated hardware failure alerts, as
well as diagnose potential problems and assist proactively.
6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert
emails (see the Configuring Email Alerts section).
You can also specify email recipients for specific alerts (see the Configuring Alert Policies
section).
7. If the site security policy permits automatic downloads of upgrade software packages for
cluster components, enable the feature (see the Software and Firmware Upgrades section).
Note: To ensure that automatic download of updates can function, allow access to the
following URLs through your firewall:
• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
9. If you are using Microsoft Hyper-V hypervisor on HPE DX platform models, ensure that
the software and drivers on Hyper-V are compatible with the firmware version installed on
the nodes. For more information, see Deploying Drivers and Software on Hyper-V for HPE
DX on page 43. This procedure to deploy software and drivers is to be carried out after
cluster creation and before moving the nodes to production.
10. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
Procedure
1. Log in to the Prism Web Console and navigate to the Life Cycle Manager page.
2. Note the SPP firmware version installed on the node. To view the SPP firmware version,
you may have to upgrade LCM to the latest version and carry out the Perform Inventory
operation.
For information on how to perform inventory, see the Performing Inventory With the
Life Cycle Manager section in the Life Cycle Manager Guide.
3. Log in to the HPE website using the HPE Passport login credentials. Download the SPP
ISO image (corresponding to the SPP firmware version installed on the node) to your
workstation.
5. Run the script launch_sum on your workstation to launch the HPE Smart Update Manager
(SUM) in GUI mode using a web browser.
• Log on to the Hyper-V host with Remote Desktop Connection and pause the Hyper-V
host in the failover cluster using PowerShell.
Suspend-ClusterNode
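For example (the host name is a placeholder), the node can be paused with its roles drained,
and resumed again after the drivers and software are deployed:
> Suspend-ClusterNode -Name hyperv-host-01 -Drain
> Resume-ClusterNode -Name hyperv-host-01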
7. Using the Hyper-V IP address, add the node to HPE SUM from your workstation.
8. Run the inventory on your workstation and wait for the components to be ready for
deployment.
The message Ready for deployment appears along with the list of deployable components.
• iLO 5 Channel Interface Driver for Windows Server 2016 and Server 2019
• Identifiers for Intel Xeon Scalable Processors for Windows
• Network drivers as applicable to your node configuration
• HPE Mellanox CX4LX and CX5 Driver for Windows Server 2019
• HPE Intel i40eb Driver for Windows Server 2019
• HPE Intel i40ea Driver for Windows Server 2019
• HPE Intel ixs Driver for Windows Server 2019
11. Click View Log to view the installation log file. Examine the log file to verify whether all the
selected components were updated successfully and if the node needs to be rebooted to
complete any update.
Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on
the VMware website (www.vmware.com) at Downloads > Product Downloads > vSphere >
Custom ISOs.
Ensure that the MD5 checksum of the hypervisor ISO image is listed in the ISO allowlist file used
by Foundation. See Verify Hypervisor Support on page 48.
The following example shows the fields that appear in the iso_whitelist.json file for an ISO
image.
"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
Procedure
1. Obtain the MD5 checksum of the ISO that you want to use.
2. Open the downloaded allowlist file in a text editor and perform a search for the MD5
checksum.
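On the Foundation VM or any Linux workstation, both steps can also be performed from the
command line; the following sketch uses the MD5 value from the preceding example and a
placeholder ISO file name:
$ md5sum hypervisor_installer.iso
$ grep -A 3 a2a97a6af6a3e397b43e3a4c7a86ee37 iso_whitelist.json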
What to do next
If the MD5 checksum is listed in the allowlist file, save the file to the workstation that hosts the
Foundation VM. If the allowlist file on the Foundation VM does not contain the MD5 checksum,
you can replace that file with the downloaded file before you begin installation.
Procedure
1. On the Foundation page, click Hypervisor and select a hypervisor from the drop-down list
below Select a hypervisor installer.
2. To upload a new whitelist.json file, click Manage Whitelist, and then click upload it.
Note: To verify if the iso_whitelist.json is updated successfully, open the Manage Whitelist
menu and check for the date of the newly updated file.
• To set up IPMI static IP address, see Setting IPMI Static IP Address on page 50.
• For issues related to IPMI configuration of bare-metal nodes, see Fixing IPMI Configuration
Issues on page 51.
• For issues with the imaging, see Fixing Imaging Issues on page 53.
Note: Do not perform the following procedure for HPE DX series. The label on the HPE DX
chassis contains the iLO MAC address that is also used to perform MAC-based imaging.
To configure a static IP address for the IPMI port on a node, do the following:
Procedure
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the dialog box.
7. Select Configuration Address Source, press Enter, and then select Static in the dialog box.
8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on
that node in the dialog box.
9. Select Subnet Mask, press Enter, and then enter the corresponding submask value in the
dialog box.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's
network gateway in the dialog box.
11. When all the field entries are updated with the correct values, press the F4 key to save the
settings and exit the BIOS setup mode.
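As an alternative sketch only (not part of the BIOS procedure above), on platforms where the
ipmitool utility is available from the hypervisor host, the same settings can typically be applied
from the command line; the channel number and IP values are placeholders:
$ ipmitool lan set 1 ipsrc static
$ ipmitool lan set 1 ipaddr 10.1.1.50
$ ipmitool lan set 1 netmask 255.255.255.0
$ ipmitool lan set 1 defgw ipaddr 10.1.1.1
$ ipmitool lan print 1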
• One or more IPMI MAC addresses are invalid or there are conflicting IP addresses. Go to the
IPMI screen and correct the IPMI MAC and IP addresses as needed.
• There is a user name/password mismatch. Go to the IPMI page and correct the IPMI
username and password fields as required.
• One or more nodes are connected to the switch through the wrong network interface. Verify
that the first 1 GbE network interface of each node is connected to the switch (see Setting
Up the Network on page 27).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for
discovered nodes or the IPMI interface for added (bare-metal or undiscovered) nodes. This
problem typically occurs because (a) a non-flat switch is used, (b) IP addresses of the node
are not in the same subnet as the Foundation VM, and (c) multi-homing is not configured.
• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP
addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multihoming.
• The IPMI interface is not set to failover. Go to BIOS settings to verify the interface
configuration.
To identify and resolve IPMI port configuration problems, do the following:
Procedure
1. Go to the Block & Node Config screen and review the problematic IP address for the failed
nodes (nodes with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting
information. This can help to diagnose the problem. See the service.log file (in /home/
nutanix/foundation/log) and the individual node log files for more detailed information.
2. When the issue is resolved, click the Configure IPMI button at the top of the screen.
3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.
4. When all nodes have green check marks in the IPMI address column, click Image Nodes at
the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, click Proceed
to bypass those nodes and continue to the imaging step for the other nodes. In this case,
configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 50).
Fixing Imaging Issues
About this task
When imaging fails for one or more nodes in the cluster, the progress bar turns red and a
red check appears next to the hypervisor address field for any node that was not imaged
successfully. Possible reasons for a failure include the following:
• A type failure was detected. Check connectivity to the IPMI (bare-metal workflow).
• There are network connectivity issues such as the following:
Procedure
1. See the individual log file for any failed nodes for information about the problem.
2. When the issue is resolved, click the Image Nodes (bare metal workflow) button.
3. Repeat the preceding steps as necessary to resolve any other imaging errors.
If you are unable to resolve the issue for one or more of the nodes, it is possible to image
these nodes one at a time (contact Nutanix Support for help).
COPYRIGHT
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.