VMware vSphere 7.0 Release Notes
What's New
Internationalization
Compatibility
Before You Begin
Installation and Upgrades for This Release
Open Source Components for VMware vSphere
Product Support Notices
Known Issues
What's New
This release of VMware vSphere 7.0 includes VMware ESXi 7.0 and VMware vCenter
Server 7.0. Read about the new and enhanced features in this release in What's New
in vSphere 7.0.
Internationalization
vSphere 7.0 is available in the following languages:
English
French
German
Spanish
Japanese
Korean
Simplified Chinese
Traditional Chinese
Components of vSphere 7.0, including vCenter Server, ESXi, the vSphere Client, and
the vSphere Host Client, do not accept non-ASCII input.
Compatibility
ESXi and vCenter Server Version Compatibility
The VMware Product Interoperability Matrix provides details about the compatibility
of current and earlier versions of VMware vSphere components, including ESXi,
VMware vCenter Server, and optional VMware products. Check the VMware Product
Interoperability Matrix also for information about supported management and backup
agents before you install ESXi or vCenter Server.
The vSphere Lifecycle Manager and vSphere Client are packaged with vCenter Server.
Hardware Compatibility for ESXi
To view a list of processors, storage devices, SAN arrays, and I/O devices that are
compatible with vSphere 7.0, use the ESXi 7.0 information in the VMware
Compatibility Guide.
Device Compatibility for ESXi
To determine which devices are compatible with ESXi 7.0, use the ESXi 7.0
information in the VMware Compatibility Guide.
Guest Operating System Compatibility for ESXi
To determine which guest operating systems are compatible with vSphere 7.0, use the
ESXi 7.0 information in the VMware Compatibility Guide.
Virtual Machine Compatibility for ESXi
Virtual machines that are compatible with ESX 3.x and later (hardware version 4)
are supported with ESXi 7.0. Virtual machines that are compatible with ESX 2.x and
later (hardware version 3) are not supported. To use such virtual machines on ESXi
7.0, upgrade the virtual machine compatibility. See the ESXi Upgrade documentation.
vSphere 7.0 requires one CPU license for up to 32 physical cores. If a CPU has more
than 32 cores, additional CPU licenses are required as announced in "Update to
VMware’s per-CPU Pricing Model". Prior to upgrading ESXi hosts, you can determine
the number of licenses required using the license counting tool described in
"Counting CPU licenses needed under new VMware licensing policy".
Read the ESXi Installation and Setup and the vCenter Server Installation and Setup
documentation for guidance about installing and configuring ESXi and vCenter
Server.
VMware's Configuration Maximums tool helps you plan your vSphere deployments. Use
this tool to view the limits for virtual machines, ESXi, vCenter Server, vSAN,
networking, and so on. You can also compare limits for two or more product
releases. The VMware Configuration Maximums tool is best viewed on larger format
devices such as desktops and laptops.
VMware Tools Bundling Changes in ESXi 7.0
In ESXi 7.0, a subset of VMware Tools 11.0.5 and VMware Tools 10.3.21 ISO images is bundled with the ESXi 7.0 host.
The following VMware Tools 11.0.5 ISO image is bundled with ESXi:
The following VMware Tools 10.3.21 ISO image is bundled with ESXi:
linux.iso: VMware Tools image for Linux OS with glibc 2.5 or higher
The following VMware Tools 11.0.5 ISO images are available for download:
Follow the procedures listed in the following documents to download VMware Tools
for platforms not bundled with ESXi:
For information about upgrading with third-party customizations, see the ESXi
Upgrade documentation. For information about using Image Builder to make a custom
ISO, see the ESXi Installation and Setup documentation.
Upgrades and Installations Disallowed for Unsupported CPUs
Compared with the processors supported by vSphere 6.7, vSphere 7.0 no longer supports the following processors:
The following CPUs are supported in the vSphere 7.0 release, but they may not be
supported in future vSphere releases. Please plan accordingly:
For instructions about upgrading ESXi hosts and vCenter Server, see the ESXi
Upgrade and the vCenter Server Upgrade documentation.
Open Source Components for vSphere 7.0
The copyright statements and licenses applicable to the open source software
components distributed in vSphere 7.0 are available at http://www.vmware.com. You
need to log in to your My VMware account. Then, from the Downloads menu, select
vSphere. On the Open Source tab, you can also download the source files for any
GPL, LGPL, or other similar licenses that require the source code or modifications
to source code to be made available for the most recent available release of
vSphere.
Product Support Notices
In vSphere 7.0, you can take advantage of the features available in the vSphere
Client (HTML5). The Flash-based vSphere Web Client has been deprecated and is no
longer available. For more information, see Goodbye, vSphere Web Client.
The VMware Host Client is a web-based application that you can use to manage
individual ESXi hosts that are not connected to a vCenter Server system.
In vSphere 7.0, TLS 1.2 is enabled by default. TLS 1.0 and TLS 1.1 are disabled
by default. If you upgrade vCenter Server to 7.0 and that vCenter Server instance
connects to ESXi hosts, other vCenter Server instances, or other services, you
might encounter communication problems.
To resolve this issue, you can use the TLS Configurator utility to enable older
versions of the protocol temporarily on 7.0 systems. You can then disable the older
less secure versions after all connections use TLS 1.2. For information, see
Managing TLS Protocol Configuration with the TLS Configurator Utility.
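If you want a quick check of which TLS versions an endpoint still accepts before or after running the TLS Configurator utility, a small diagnostic probe such as the following sketch can help; the hostname is a placeholder, certificate verification is intentionally disabled, and the results also depend on which TLS versions your local OpenSSL build is willing to negotiate.

    import socket
    import ssl

    def protocol_accepted(host, port, version):
        """Return True if the endpoint completes a handshake at exactly this TLS version."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE          # diagnostic probe only
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
        except (ssl.SSLError, OSError):
            return False

    host = "vcenter.example.com"                 # placeholder hostname
    for ver in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
        print(ver.name, protocol_accepted(host, 443, ver))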
In vSphere 7.0, vCenter Server for Windows has been removed and support is not
available. For more information, see Farewell, vCenter Server for Windows.
In vSphere 7.0, the ESXi built-in VNC server has been removed. Users can no longer connect to a virtual machine with a VNC client by setting the RemoteDisplay.vnc.enable configuration option to true. Instead, use the VM console through the vSphere Client, the ESXi Host Client, or VMware Remote Console to connect to virtual machines. Customers who require VNC access to a VM should use the VirtualMachine.AcquireTicket("webmks") API, which offers a VNC-over-websocket connection. The webmks ticket provides authenticated access to the virtual machine console. For more information, refer to the VMware HTML Console SDK documentation.
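As a hedged illustration of the API mentioned above, the following pyVmomi sketch acquires a webmks ticket for a VM; the vCenter hostname, credentials, and VM name are placeholders, and certificate verification is disabled only for brevity.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Look up a VM by name (placeholder name).
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-vm")
    view.Destroy()

    # AcquireTicket("webmks") returns an authenticated one-time ticket that a
    # websocket console client can use to reach the VM console.
    ticket = vm.AcquireTicket("webmks")
    print(ticket.host, ticket.port, ticket.ticket)

    Disconnect(si)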
Deprecation of VMKLinux
In vSphere 7.0, VMKLinux driver compatibility has been deprecated and removed. vSphere 7.0 does not support the VMKLinux APIs or the associated VMKLinux drivers. Custom ISOs cannot include VMKLinux async drivers; all drivers contained in an ISO must be native drivers. Currently supported devices that are not supported by native drivers do not function and are not recognized during installation or upgrade. The VMware Compatibility Guide (VCG) does not show any device that lacks a native driver as supported in vSphere 7.0.
In vSphere 7.0, 32-bit userworld support has been deprecated. Userworlds are
the components of ESXi used by partners to provide drivers, plugins, and other
system extensions (distributed as VIBs). Userworlds are not customer accessible.
vSphere 7.0 provides 64-bit userworld support through partner devkits and will
retain 32-bit userworld support through this major release. Support for 32-bit
userworlds will be permanently removed in the next major ESXi release. To avoid
loss of functionality, customers should ensure any vendor-supplied VIBs in use are
migrated to 64-bit before upgrading beyond the vSphere 7.0 release.
In vSphere 7.0, the Update Manager plugin used for administering vSphere Update
Manager has been replaced with the Lifecycle Manager plugin. Administrative
operations for vSphere Update Manager are still available under the Lifecycle
Manager plugin, along with new capabilities for vSphere Lifecycle Manager.
In a future vSphere release, support for Smart Card Authentication in DCUI will
be discontinued. In place of accessing DCUI using Personal Identity Verification
(PIV), Common Access Card (CAC), or SC650 smart card, users will be encouraged to
perform operations through vCenter, PowerCLI, API calls, or by logging in with a
username and password.
In vSphere 7.0, support for Coredump Partitions in Host Profiles has been
deprecated. In place of Coredump Partitions, users should transition to Coredump
Files.
In vSphere 7.0, vendor add-ons are accessible through vCenter Server's vSphere
Lifecycle Manager if the vCenter Server instance has been configured to use a proxy
or Update Manager Download Service. To access add-ons from MyVMware, navigate to
the Custom ISOs and Add-ons tab. Under the OEM Customized Installer CDs and Add-
ons, you can find the custom add-ons from each of the vendors. For more information
about vSphere Lifecycle Manager and vendor add-ons, see the Managing Host and
Cluster Lifecycle documentation.
Known Issues
NEW: vmnic and vmhba device names change after an upgrade to ESXi 7.0
On certain hardware platforms, vmnic and vmhba device names (aliases) might
change across an upgrade to ESXi 7.0 from an earlier ESXi version. This occurs on
systems whose firmware provides an ACPI _SUN method that returns a physical slot
number of 0 for devices that are not in a pluggable slot.
The vCenter Server Upgrade/Migration pre-checks fail when the Security Token
Service (STS) certificate does not contain a Subject Alternative Name (SAN) field.
This situation occurs when you have replaced the vCenter 5.5 Single Sign-On
certificate with a custom certificate that has no SAN field, and you attempt to
upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid
and the pre-checks prevent the upgrade process from continuing.
Workaround: Replace the STS certificate with a valid certificate that contains a SAN field, and then proceed with the vCenter Server 7.0 Upgrade/Migration.
Problems upgrading to vSphere 7.0 with pre-existing CIM providers
After the upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers might lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, and VAIODK (IO filters), and see errors related to a uwglibc dependency.
The syslog reports the module as missing: "32 bit shared libraries not loaded."
If you have configured vCenter Server for either Smart Card or RSA SecurID
authentication, see the VMware knowledge base article at
https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade
process. If you do not perform the workaround as described in the KB, you might see
the following error messages and Smart Card or RSA SecurID authentication does not
work.
"Smart card authentication may stop working. Smart card settings may not be
preserved, and smart card authentication may stop working."
or
"RSA SecurID authentication may stop working. RSA SecurID settings may not be
preserved, and RSA SecurID authentication may stop working."
Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base
article at https://kb.vmware.com/s/article/78057.
Upgrading a vCenter Server with an external Platform Services Controller from
6.7u3 to 7.0 fails with VMAFD error
Authentication using RSA SecurID will not work after upgrading to vCenter
Server 7.0. An error message will alert you to this issue when attempting to login
using your RSA SecurID login.
Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails
with the error message IP already exists in the network. This prevents the
migration process from configuring the network parameters on the new vCenter Server
appliance. For more information, examine the log file:
/var/log/vmware/upgrade/UpgradeRunner.log
Workaround:
Verify that all Windows Updates have been completed on the source vCenter
Server for Windows instance, or disable automatic Windows Updates until after the
migration finishes.
Retry the migration of vCenter Server for Windows to vCenter Server
appliance 7.0.
When you configure the number of virtual functions for an SR-IOV device by
using the max_vfs module parameter, the changes might not take effect
In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV
device by using the Virtual Infrastructure Management (VIM) API, for example,
through the vSphere Client. The task does not require reboot of the ESXi host.
After you use the VIM API configuration, if you try to configure the number of SR-
IOV virtual functions by using the max_vfs module parameter, the changes might not
take effect because they are overridden by the VIM API configuration.
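For reference, a hedged pyVmomi sketch of the VIM API path mentioned above might look like the following; the vCenter hostname, credentials, ESXi host name, PCI address, and VF count are all placeholders, and the achievable configuration depends on your adapter and driver.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    view.Destroy()

    # Request 8 virtual functions on a placeholder SR-IOV capable NIC.
    cfg = vim.host.SriovConfig()
    cfg.id = "0000:3b:00.0"          # PCI address of the adapter (placeholder)
    cfg.sriovEnabled = True
    cfg.numVirtualFunction = 8

    host.configManager.pciPassthruSystem.UpdatePassthruConfig([cfg])
    Disconnect(si)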
During a major upgrade, if the source instance of the vCenter Server appliance
is configured with multiple secondary networks other than the VCHA NIC, the target
vCenter Server instance will not retain secondary networks other than the VCHA NIC.
If the source instance is configured with multiple NICs that are part of DVS port
groups, the NIC configuration will not be preserved during the upgrade.
Configurations for vCenter Server appliance instances that are part of the standard
port group will be preserved.
Workaround: Verify that the new vCenter Server instance has been joined to an
Active Directory domain. See Knowledge Base article:
https://kb.vmware.com/s/article/2118543
Migrating a vCenter Server for Windows with an external Platform Services
Controller using an Oracle database fails
If there are non-ASCII strings in the Oracle events and tasks table the
migration can fail when exporting events and tasks data. The following error
message is provided: UnicodeDecodeError
Workaround: None.
After an ESXi host upgrade, a Host Profile compliance check shows non-compliant
status while host remediation tasks fail
The non-compliant status indicates an inconsistency between the profile and the
host.
This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use the Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains duplicate claim rules of system default rules, you might experience this problem.
Workaround:
Remove any duplicate claim rules of the system default rules from the Host
Profile document.
Check the compliance status.
Remediate the host.
If the previous steps do not help, reboot the host.
After installing or upgrading to vCenter Server 7.0, when you navigate to the
Update panel within the vCenter Server Management Interface, the error message
"Check the URL and try again" displays. The error message does not prevent you from
using the functions within the Update panel, and you can view, stage, and install
any available updates.
Workaround: None.
Encrypted virtual machine fails to power on when DRS-enabled Trusted Cluster
contains an unattested host
In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted
Cluster and one or more hosts in the cluster fails attestation, DRS might try to
power on an encrypted virtual machine on an unattested host in the cluster. This
operation puts the virtual machine in a locked state.
Workaround: Either remove or remediate all hosts that failed attestation from
the Trusted Cluster.
Migrating or cloning encrypted virtual machines across vCenter Server instances
fails when attempting to do so using the vSphere Client
Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual
machines across vCenter Server instances.
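A rough pyVmomi sketch of such an API-driven cross-vCenter relocation follows; every hostname, credential, object name, and thumbprint is a placeholder, and for encrypted virtual machines the usual cross-vCenter prerequisites (matching key providers, trusted certificates, and privileges) still apply.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    src_si = SmartConnect(host="vc-src.example.com", user="administrator@vsphere.local",
                          pwd="password", sslContext=ctx)
    dst_si = SmartConnect(host="vc-dst.example.com", user="administrator@vsphere.local",
                          pwd="password", sslContext=ctx)

    def find(si, vimtype, name):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    vm = find(src_si, vim.VirtualMachine, "encrypted-vm")
    dst_pool = find(dst_si, vim.ResourcePool, "Resources")
    dst_host = find(dst_si, vim.HostSystem, "esxi01.example.com")
    dst_ds = find(dst_si, vim.Datastore, "datastore1")
    dst_folder = find(dst_si, vim.Folder, "vm")

    # The ServiceLocator tells the source vCenter Server how to reach the destination.
    locator = vim.ServiceLocator(
        instanceUuid=dst_si.content.about.instanceUuid,
        url="https://vc-dst.example.com",
        credential=vim.ServiceLocatorNamePassword(username="administrator@vsphere.local",
                                                  password="password"),
        sslThumbprint="AA:BB:CC:...",  # destination certificate thumbprint (placeholder)
    )

    spec = vim.vm.RelocateSpec(service=locator, pool=dst_pool, host=dst_host,
                               datastore=dst_ds, folder=dst_folder)
    task = vm.RelocateVM_Task(spec=spec)
    print(task.info.key)

    Disconnect(src_si)
    Disconnect(dst_si)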
Networking Issues
Workaround: To achieve the same networking performance as vSphere 6.7, you can
disable the queue-pair with a module parameter. To disable the queue-pair, run the
command:
When you migrate VMkernel ports from one port group to another, IPv6 traffic
does not pass through VMkernel ports using IPsec.
Workaround: Remove the IPsec security association (SA) from the affected
server, and then reapply the SA. To learn how to set and remove an IPsec SA, see
the vSphere Security documentation.
Higher ESX network performance with a portion of CPU usage increase
Workaround: Remove and add the network interface with only 1 rx dispatch queue.
For example:
Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM
to restore traffic. On Linux guest operating systems, restarting the network might
also resolve the issue. If these workarounds have no effect, you can reboot the VM
to restore network connectivity.
Change of IP address for a VCSA deployed with static IP address requires that
you create the DNS records in advance
With the introduction of DDNS, the DNS record update works only for a VCSA deployed with DHCP-configured networking. While changing the IP address of the vCenter Server through the VAMI, the following error is displayed:
./opt/vmware/share/vami/vami_config_net
Use option 6 to change the IP address of eth0. Once changed, execute the following script:
./opt/likewise/bin/lw-update-dns
Restart all the services on the VCSA to update the IP information on the
DNS server.
It may take several seconds for the NSX Distributed Virtual Port Group (NSX
DVPG) to be removed after deleting the corresponding logical switch in NSX Manager.
As the number of logical switches increases, it may take more time for the NSX
DVPG in vCenter Server to be removed after deleting the corresponding logical
switch in NSX Manager. In an environment with 12000 logical switches, it takes
approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.
Workaround: None.
Hostd runs out of memory and fails if a large number of NSX Distributed Virtual
port groups are created.
Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd using the following command.
Note that this causes hostd to use memory normally reserved for your environment's VMs, which may have the effect of reducing the number of VMs your ESXi host can support.
DRS may incorrectly launch vMotion if the network reservation is configured on
a VM
Workaround: Make all transport nodes join the transport zone by N-VDS or the
same VDS 7.0 instance.
When adding a VMkernel NIC (vmknic) to an NSX portgroup, vCenter Server reports
the error "Connecting VMKernel adapter to a NSX Portgroup on a Stateless host is
not a supported operation. Please use Distributed Port Group instead."
For stateless ESXi on a Distributed Virtual Switch (DVS), the vmknic on an NSX port group is blocked. You must use a Distributed Port Group instead.
For stateful ESXi on DVS, a vmknic on an NSX port group is supported, but vSAN might have an issue if it uses a vmknic on an NSX port group.
If you navigate to the Edit Settings dialog for physical network adapters and
attempt to enable SR-IOV, the operation might fail when using QLogic 4x10GE
QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the
ESXi host.
Workaround: Use the following command on the ESXi host to enable SR-IOV:
esxcfg-module
New vCenter Server fails if the hosts in a cluster using Distributed Resource
Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch
(VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS
In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster,
if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can
cause vCenter Server to fail.
Workaround: Have hosts in a DRS cluster join the NSX transport zone using the
same VDS or NVDS.
Storage Issues
VMFS datastores are not mounted automatically after disk hot remove and hot
insert on HPE Gen10 servers with SmartPQI controllers
When SATA disks on HPE Gen10 servers with SmartPQI controllers without
expanders are hot removed and hot inserted back to a different disk bay of the same
machine, or when multiple disks are hot removed and hot inserted back in a
different order, sometimes a new local name is assigned to the disk. The VMFS
datastore on that disk appears as a snapshot and will not be mounted back
automatically because the device name has changed.
Workaround: None. SmartPQI controller does not support unordered hot remove and
hot insert operations.
Setting the loglevel for nvme_pcie driver fails with an error
When you set the loglevel for nvme_pcie driver with the command esxcli nvme
driver loglevel set -l <log level>, the action fails with the error message:
This command was retained for compatibility with the NVMe driver, but it is not supported for the nvme_pcie driver.
Workaround: None. This condition will exist when the nvme_pcie feature is
enabled.
ESXi might terminate I/O to NVMeOF devices due to errors on all active paths
Occasionally, all active paths to NVMeOF device register I/O errors due to link
issues or controller state. If the status of one of the paths changes to Dead, the
High Performance Plug-in (HPP) might not select another path if it shows high
volume of errors. As a result, the I/O fails.
VOMA check is not supported for NVMe-based VMFS datastores and will fail with the error:
Example:
Workaround: None. If you need to analyze VMFS metadata, collect it using the -l option and pass it to VMware customer support. The command for collecting the dump is:
If an FCD and a VM are encrypted with different crypto keys, your attempts to
attach the encrypted FCD to the encrypted VM using the VM reconfigure API might
fail with the error message:
ESXi host might become unresponsive if a non-head extent of a spanned VMFS datastore enters a permanent device loss (PDL) state
This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In this case, the entire datastore becomes inaccessible and no longer allows I/Os.
In contrast, when only a non-head extent fails but the head extent remains accessible, the datastore heartbeat appears to be normal and the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve, and can cause the host to stop responding.
Workaround: Fix the PDL condition of the non-head extent to resolve this issue.
After recovering from APD or PDL conditions, VMFS datastore with enabled
support for clustered virtual disks might remain inaccessible
You can encounter this problem only on datastores where the clustered virtual
disk support is enabled. When the datastore recovers from an All Paths Down (APD)
or Permanent Device Loss (PDL) condition, it remains inaccessible. The VMkernel log
might show multiple SCSI3 reservation conflict messages similar to the following:
The problem can occur because the ESXi host participating in the cluster loses
SCSI reservations for the datastore and cannot always reacquire them automatically
after the datastore recovers.
where the <device name> is the name of the device on which the datastore is
created.
Virtual NVMe Controller is the default disk controller for Windows 10 guest
operating systems
The Virtual NVMe Controller is the default disk controller for the following
guest operating systems when using Hardware Version 15 or later:
Windows 10
Windows Server 2016
Windows Server 2019
Some features might not be available when using a Virtual NVMe Controller. For
more information, see https://kb.vmware.com/s/article/2147714
Note: Some clients, including the VMware Host Client and PowerCLI, use the previous default of LSI Logic SAS.
Claim rules determine which multipathing plugin, such as NMP, HPP, and so on,
owns paths to a particular storage device. ESXi 7.0 does not support duplicate
claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate
rules to the existing claim rules inherited through an upgrade from a legacy
release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause unexpected outcomes.
Workaround: Do not use duplicate core claim rules. Before adding a new claim
rule, delete any existing matching claim rule.
A CNS query with the compliance status filter set might take an unusually long time to complete
The CNS QueryVolume API enables you to obtain information about the CNS
volumes, such as volume health and compliance status. When you check the compliance
status of individual volumes, the results are obtained quickly. However, when you
invoke the CNS QueryVolume API to check the compliance status of multiple volumes,
several tens or hundreds, the query might perform slowly.
Workaround: Avoid using bulk queries. When you need to get compliance status,
query one volume at a time or limit the number of volumes in the query API to 20 or
fewer. While using the query, avoid running other CNS operations to get the best
performance.
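The batching guidance above can be expressed as a small helper; query_cns_volumes below is a hypothetical stand-in for whatever CNS QueryVolume wrapper your environment uses, and the only point of the sketch is to cap each call at 20 volume IDs.

    def chunked(items, size=20):
        """Yield successive batches of at most `size` items."""
        for i in range(0, len(items), size):
            yield items[i:i + size]

    def compliance_status(volume_ids, query_cns_volumes):
        """Collect compliance status in batches of 20 to avoid slow bulk queries.

        query_cns_volumes is a hypothetical callable that takes a list of volume IDs
        and returns dicts with "id" and "compliance" keys.
        """
        results = {}
        for batch in chunked(volume_ids, 20):
            for vol in query_cns_volumes(batch):
                results[vol["id"]] = vol["compliance"]
        return results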
New Deleted CNS volumes might temporarily appear as existing in the CNS UI
After you delete an FCD disk that backs a CNS volume, the volume might still
show up as existing in the CNS UI. However, your attempts to delete the volume
fail. You might see an error message similar to the following:
The object or item referred to could not be found.
Workaround: The next full synchronization will resolve the inconsistency and
correctly update the CNS UI.
New Attempts to attach multiple CNS volumes to the same pod might occasionally
fail with an error
When you attach multiple volumes to the same pod simultaneously, the attach
operation might occasionally choose the same controller slot. As a result, only one
of the operations succeeds, while other volume mounts fail.
This might occur when, for example, you use a noncompliant storage policy to
create a CNS volume. The operation fails, while the vSphere Client shows the task
status as successful.
Workaround: The successful task status in the vSphere Client does not guarantee
that the CNS operation succeeded. To make sure the operation succeeded, verify its
results.
New Unsuccessful delete operation for a CNS persistent volume might leave the
volume undeleted on the vSphere datastore
This issue might occur when the CNS Delete API attempts to delete a persistent
volume that is still attached to a pod. For example, when you delete the Kubernetes
namespace where the pod runs. As a result, the volume gets cleared from CNS and
the CNS query operation does not return the volume. However, the volume continues
to reside on the datastore and cannot be deleted through the repeated CNS Delete
API operations.
Workaround: None.
When you change the vCenter IP address (PNID change), the registered vendor
providers go offline.
When you use cross vCenter vMotion to move a VM's storage and host to a
different vCenter server instance, you might receive the error The operation is not
allowed in the current state.
This error appears in the UI wizard after the Host Selection step and before
the Datastore Selection step, in cases where the VM has an assigned storage policy
containing host-based rules such as encryption or any other IO filter rule.
Workaround: Assign the VM and its disks to a storage policy without host-based
rules. You might need to decrypt the VM if the source VM is encrypted. Then retry
the cross vCenter vMotion action.
Storage Sensors information in Hardware Health tab shows incorrect values on
vCenter UI, host UI, and MOB
When you navigate to Host > Monitor > Hardware Health > Storage Sensors on
vCenter UI, the storage information displays either incorrect or unknown values.
The same issue is observed on the host UI and the MOB path
“runtime.hardwareStatusInfo.storageStatusInfo” as well.
Workaround: None.
vSphere UI host advanced settings shows the current product locker location as
empty with an empty default
vSphere UI host advanced settings shows the current product locker location as empty with an empty default. This is inconsistent because the actual product locker location symlink is created and valid, which can confuse the user. The default cannot be corrected from the UI.
Workaround: Use the following esxcli commands on the host to correct the current product locker location default.
1. Remove the existing Product Locker Location setting with: "esxcli system
settings advanced remove -o ProductLockerLocation"
2. Re-add the Product Locker Location setting with the appropriate default:
Add the setting: esxcli system settings advanced add -d "Path to VMware
Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT
You can combine all the above steps in step 2 by issuing the single command:
Workaround: None.
The postcustomization section of the customization script runs before the guest
customization
When you run the guest customization script for a Linux guest operating system,
the precustomization section of the customization script that is defined in the
customization specification runs before the guest customization and the
postcustomization section runs after that. If you enable Cloud-Init in the guest
operating system of a virtual machine, the postcustomization section runs before
the customization due to a known issue in Cloud-Init.
When you perform group migration operations on VMs with multiple disks and
multi-level snapshots, the operations might fail with the error
com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167.
Connection closed by remote host, possibly due to timeout.
Workaround: Retry the migration operation on the failed VMs one at a time.
Deploying an OVF or OVA template from a URL fails with a 403 Forbidden error
The URLs that contain an HTTP query parameter are not supported. For example,
http://webaddress.com?file=abc.ovf or the Amazon pre-signed S3 URLs.
Workaround: Download the files and deploy them from your local file system.
Importing or deploying local OVF files containing non-ASCII characters in their
name might fail with an error
When you import local .ovf files containing non-ASCII characters in their name,
you might receive 400 Bad Request Error. When you use such .ovf files to deploy a
virtual machine in the vSphere Client, the deployment process stops at 0%. As a
result, you might receive 400 Bad Request Error or 500 Internal Server Error.
Workaround:
Remove the non-ASCII characters from the .ovf and .vmdk file names.
To edit the .ovf file, open it with a text editor.
Search the non-ASCII .vmdk file name and change it to ASCII.
Import or deploy the saved files again.
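If you have many files to fix, the renaming step above can be scripted; the following sketch is only an illustration (the bundle path is a placeholder), and if the bundle ships a .mf manifest its checksums would also need to be regenerated or the manifest removed.

    import pathlib
    import re
    import unicodedata

    def to_ascii(name):
        """Transliterate or strip non-ASCII characters from a file name."""
        ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
        return re.sub(r"[^\w.\-]", "_", ascii_name) or "renamed"

    def sanitize_ovf_bundle(directory):
        """Rename .ovf/.vmdk files to ASCII names and update references inside the .ovf."""
        directory = pathlib.Path(directory)
        renames = {}
        for path in list(directory.iterdir()):
            if path.suffix.lower() in (".ovf", ".vmdk") and path.name != to_ascii(path.name):
                new_path = path.with_name(to_ascii(path.name))
                path.rename(new_path)
                renames[path.name] = new_path.name
        for ovf in directory.glob("*.ovf"):
            text = ovf.read_text(encoding="utf-8")
            for old, new in renames.items():
                text = text.replace(old, new)
            ovf.write_text(text, encoding="utf-8")

    sanitize_ovf_bundle("./my-ovf-bundle")   # placeholder path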
New The third level of nested objects in a virtual machine folder is not
visible
As a result, from the VMs and Templates inventory tree you cannot see the
objects in the third nested folder.
Workaround: To see the objects in the third nested folder, navigate to the
second nested folder and select the VMs tab.
Some VMs might be in orphaned state after cluster wide APD recovers, even if HA
and VMCP are enabled on the cluster.
Workaround: You must unregister and reregister the orphaned VMs manually within
the cluster after the APD recovers.
If you do not manually reregister the orphaned VMs, HA attempts failover of the
orphaned VMs, but it might take between 5 to 10 hours depending on when APD
recovers.
The overall functionality of the cluster is not affected in these cases and HA continues to protect the VMs. This is an anomaly in what is displayed in vCenter Server for the duration of the problem.
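A hedged pyVmomi sketch of the manual unregister-and-reregister step follows; the connection details are placeholders, and because an orphaned VM's inventory data can be incomplete, capture the .vmx path and a target resource pool before unregistering (supply them explicitly if they are unavailable on the orphaned object).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    orphaned = [vm for vm in view.view if vm.runtime.connectionState == "orphaned"]
    view.Destroy()

    for vm in orphaned:
        name = vm.name
        vmx_path = vm.summary.config.vmPathName   # e.g. "[datastore1] vm1/vm1.vmx"
        folder = vm.parent                        # the VM folder that held the VM
        pool = vm.resourcePool                    # may be None; pick a pool explicitly then
        print("re-registering", name, vmx_path)
        vm.UnregisterVM()
        folder.RegisterVM_Task(path=vmx_path, name=name, asTemplate=False, pool=pool)

    Disconnect(si)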
You cannot enable NSX-T on a cluster that is already enabled for managing image
setup and updates on all hosts collectively
NSX-T is not compatible with the vSphere Lifecycle Manager functionality for
image management. When you enable a cluster for image setup and updates on all
hosts in the cluster collectively, you cannot enable NSX-T on that cluster.
However, you can deploy NSX Edges to this cluster.
Workaround: Move the hosts to a new cluster that you can manage with baselines
and enable NSX-T on that new cluster.
vSphere Lifecycle Manager and vSAN File Services cannot be simultaneously
enabled on a vSAN cluster in vSphere 7.0 release
Workaround: None.
ESXi 7.0 hosts cannot be added to a cluster that you manage with a single image by using vSphere Auto Deploy
Attempting to add ESXi hosts to a cluster that you manage with a single image by using the "Add to Inventory" workflow in vSphere Auto Deploy fails. The failure occurs because no patterns are matched in an existing Auto Deploy ruleset. The task fails silently and the hosts remain in the Discovered Hosts tab.
Workaround:
Remove the ESXi hosts that did not match the ruleset from the Discovered
Hosts tab.
Create a rule or edit an existing Auto Deploy rule, where the host target
location is a cluster managed by an image.
Reboot the hosts.
The hosts are added to the cluster that you manage by an image in vSphere
Lifecycle Manager.
When a hardware support manager is unavailable, vSphere High Availability (HA)
functionality is impacted
If the hardware support manager is unavailable for a cluster that you manage with a single image, where a firmware and drivers addon is selected and vSphere HA is enabled, the vSphere HA functionality is impacted. You may experience the following errors.
Configuring vSphere HA on a cluster fails.
Cannot complete the configuration of the vSphere HA agent on a host:
Applying HA VIBs on the cluster encountered a failure.
Remediating vSphere HA fails: A general system error occurred: Failed to
get Effective Component map.
Disabling vSphere HA fails: Delete Solution task failed. A general system
error occurred: Cannot find hardware support package from depot or hardware support
manager.
Workaround:
If the hardware support manager is temporarily unavailable, perform the
following steps.
Reconnect the hardware support manager to vCenter Server.
Select a cluster from the Hosts and Cluster menu.
Select the Configure tab.
Under Services, click vSphere Availability.
Re-enable vSphere HA.
If the hardware support manager is permanently unavailable, perform the
following steps.
Remove the hardware support manager and the hardware support package from
the image specification
Re-enable vSphere HA.
Select a cluster from the Hosts and Cluster menu.
Select the Updates tab.
Click Edit.
Remove the firmware and drivers addon and click Save.
Select the Configure tab.
Under Services, click vSphere Availability.
Re-enable vSphere HA.
Workaround:
Call IOFilter API UninstallIoFilter_Task from the vCenter Server managed
object (IoFilterManager).
Remediate the cluster in vSphere Lifecycle Manager.
Call IOFilter API ResolveInstallationErrorsOnCluster_Task from the vCenter
Server managed object (IoFilterManager) to update the database.
Workaround: After the cluster remediation operation has finished, perform one of the following tasks.
Right-click the failed ESXi host and select Reconfigure for vSphere HA.
Disable and re-enable vSphere HA for the cluster.
Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.
Checking for recommended images in vSphere Lifecycle Manager has slow
performance in large clusters
In large clusters with more than 16 hosts, the recommendation generation task
could take more than an hour to finish or may appear to hang. The completion time
for the recommendation task depends on the number of devices configured on each
host and the number of image candidates from the depot that vSphere Lifecycle
Manager needs to process before obtaining a valid image to recommend.
Workaround: None.
Checking for hardware compatibility in vSphere Lifecycle Manager has slow
performance in large clusters
In large clusters with more than 16 hosts, the validation report generation
task could take up to 30 minutes to finish or may appear to hang. The completion
time depends on the number of devices configured on each host and the number of
hosts configured in the cluster.
Workaround: None.
Incomplete error messages in non-English languages are displayed when remediating a cluster in vSphere Lifecycle Manager
You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you can observe the following error message.
The error message in English language: Virtual machine 'VMC on DELL EMC
-FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents
entering maintenance mode: Unable to access the virtual machine configuration:
Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC -
FileServer.vmx
Workaround: None.
Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove the image elements of the existing image
Only the ESXi base image is replaced with the one from the imported image.
Workaround: After the import process finishes, edit the image, and if needed,
remove the vendor addon, components, and firmware and drivers addon.
When you convert a cluster that uses baselines to a cluster that uses a single
image, a warning is displayed that vSphere HA VIBs will be removed
Workaround: This message can be ignored. The conversion process installs the
vmware-fdm component.
If vSphere Update Manager is configured to download patch updates from the
Internet through a proxy server, after upgrade to vSphere 7.0 that converts Update
Manager to vSphere Lifecycle Manager, downloading patches from VMware patch
repository might fail
Miscellaneous Issues
When applying a host profile with version 6.5 to an ESXi host with version 7.0, the compliance check fails
Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.
The Actions drop-down menu does not contain any items when your browser is set to a language different from English
When your browser is set to a language different from English and you click the Switch to New View button from the virtual machine Summary tab of the vSphere Client inventory, the Actions drop-down menu in the Guest OS panel does not contain any items.
Workaround: Select the Actions drop-down menu on the top of the virtual machine
page.
Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit minor
throughput degradation when Dynamic Receive Side Scaling (DYN_RSS) or Generic RSS
(GEN_RSS) feature is turned on
Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS and GEN_RSS features are turned on, which is unlikely to impact normal workloads.
Workaround: You can disable the DYN_RSS and GEN_RSS features with the following commands:
# reboot
RDMA traffic between two VMs on the same host might fail in PVRDMA environment
In vSphere 6.7 and earlier, HCA was used for local RDMA traffic if SRQ was
enabled. vSphere 7.0 uses HCA loopback with VMs using versions of PVRDMA that have
SRQ enabled with a minimum of HW v14 using RoCE v2.
The current version of Marvell FastLinQ adapter firmware does not support
loopback traffic between QPs of the same PF or port.
There are limitations with the Marvell FastLinQ qedrntv RoCE driver and
Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail
with qedrntv driver. Additionally, UD QPs can only work with DMA Memory Regions
(MR). Physical MRs or FRMR are not supported. Applications attempting to use
physical MR or FRMR along with UD QP fail to pass traffic when used with qedrntv
driver. Known examples of such test applications are ibv_ud_pingpong and
ib_send_bw.
Standard RoCE and RoCEv2 use cases in a VMware ESXi environment such as iSER,
NVMe-oF (RoCE) and PVRDMA are not impacted by this issue. Use cases for UD traffic
are limited and this issue impacts a small set of applications requiring bulk UD
traffic.
Marvell FastLinQ hardware does not support RDMA UD traffic offload. To meet the VMware PVRDMA requirement to support GSI QP, a restricted software-only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control-path GSI communication; it is not a complete implementation of UD QP supporting bulk traffic and advanced features.
Workaround: Bulk UD QP traffic is not supported with the qedrntv driver and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe-oF (RoCE), and PVRDMA are unaffected by this issue.
Servers equipped with QLogic 578xx NIC might fail when frequently connecting or
disconnecting iSCSI LUNs
Workaround: None.
ESXi might fail during driver unload or controller disconnect operation in
Broadcom NVMe over FC environment
In Broadcom NVMe over FC environment, ESXi might fail during driver unload or
controller disconnect operation and display an error message such as: @BlueScreen:
#PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19
Workaround: None.
ESXi does not display OEM firmware version number of i350/X550 NICs on some
Dell servers
The inbox ixgben driver only recognizes firmware data version or signature for
i350/X550 NICs. On some Dell servers the OEM firmware version number is programmed
into the OEM package version region, and the inbox ixgben driver does not read this
information. Only the 8-digit firmware signature is displayed.
Workaround: To display the OEM firmware version number, install async ixgben
driver version 1.7.15 or later.
X710 or XL710 NICs might fail in ESXi
When you initiate certain destructive operations to X710 or XL710 NICs, such as
resetting the NIC or manipulating VMKernel's internal device tree, the NIC hardware
might read data from non-packet memory.
Workaround: Do not reset the NIC or manipulate vmkernel internal device state.
NVMe-oF does not guarantee persistent VMHBA name after system reboot
NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage
installation that uses vmhba30+ and also has NVMe over RDMA configuration, the
VMHBA name might change after a system reboot. This is because the VMHBA name
assignment for NVMe over RDMA is different from PCIe devices. ESXi does not
guarantee persistence.
Workaround: None.
Backup fails for vCenter database size of 300 GB or greater
If the vCenter database size is 300 GB or greater, the file-based backup will
fail with a timeout. The following error message is displayed: Timeout! Failed to
complete in 72000 seconds
Workaround: None.
Checking the compliance state of an ESXi 7.0 host against a host profile with version 6.5 or 6.7 results in an error for vmhba and vmrdma devices
When checking the compliance of an ESXi 7.0 host that uses the nmlx5_core or nvme_pcie driver against a host profile with version 6.5 or 6.7, you may observe the following errors, where address1 and address2 are specific to the affected system.
the following errors, where address1 and address2 are specific to the affected
system.
A vmhba device with bus type logical, address1 is not present on your host.
A vmrdma device with bus type logical, address2 is not present on your
host.
When you restore a vCenter Server 7.0 instance that was upgraded from 6.x with an external Platform Services Controller, the restore might fail and display the following error: Failed to retrieve appliance storage list
Workaround: During the first stage of the restore process, increase the storage level of the vCenter Server 7.0. For example, if the vCenter Server 6.7 External Platform Services Controller setup storage type is small, select storage type large for the restore process.
Enabled SSL protocols configuration parameter is not configured during a host
profile remediation process
Workaround: To enable TLSV 1.0 or TLSV 1.1 SSL protocols for SFCB, log in to an
ESXi host by using SSH, and run the following ESXCLI command: esxcli system wbem -P
<protocol_name>
Unable to configure Lockdown Mode settings by using Host Profiles
Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.
Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage
Lockdown Mode exception user list by using a security host profile.
When a host profile is applied to a cluster, Enhanced vMotion Compatibility
(EVC) settings are missing from the ESXi hosts
Some settings in the VMware config file /etc/vmware/config are not managed by
Host Profiles and are blocked, when the config file is modified. As a result, when
the host profile is applied to a cluster, the EVC settings are lost, which causes
loss of EVC functionalities. For example, unmasked CPUs can be exposed to
workloads.
Workaround: Reconfigure the relevant EVC baseline on cluster to recover the EVC
settings.
Using a host profile that defines a core dump partition in vCenter Server 7.0
results in an error
In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition results in the following error: No valid coredump partition found.
Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.
When a host profile is copied from an ESXi host or a host profile is edited,
the user input values are lost
Some of the host profile keys are generated from hash calculation even when
explicit rules for key generation are provided. As a result, when you copy settings
from a host or edit a host profile, the user input values in the answer file are
lost.
Workaround: In vCenter Server 7.0, when a host profile is copied from an ESXi
host or a host profile is modified, the user input settings are preserved.
HTTP requests from certain libraries to vSphere might be rejected
The HTTP reverse proxy in vSphere 7.0 enforces stricter standard compliance
than in previous releases. This might expose pre-existing problems in some third-
party libraries used by applications for SOAP calls to vSphere.
The syntax in this example violates an HTTP protocol header field requirement
that mandates a colon after SOAPAction. Hence, the request is rejected in flight.
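For illustration, a request that the reverse proxy accepts formats the header as a name, an immediate colon, and the value; the host, endpoint path, and SOAPAction value below are placeholders for whatever your SOAP library would normally send.

    import http.client
    import ssl

    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soapenv:Body/></soapenv:Envelope>"
    )

    conn = http.client.HTTPSConnection("vcenter.example.com", 443,
                                       context=ssl._create_unverified_context())
    conn.request(
        "POST",
        "/sdk",                              # placeholder SOAP endpoint path
        body=body,
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "urn:vim25/7.0",   # header name, colon, then the value
        },
    )
    print(conn.getresponse().status)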
The SNMP firewall ruleset is a dynamic state, which is handled during runtime.
When a host profile is applied, the configuration of the ruleset is managed
simultaneously by Host Profiles and SNMP, which can modify the firewall settings
unexpectedly.
You might see a dump file when using the Broadcom lsi_msgpt3, lsi_msgpt35, and lsi_mr3 drivers
Workaround: You can safely ignore this message. You can remove the lsuv2-lsi-
drivers-plugin with the following command:
In ESXi 7.0, SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations that need to run after the device driver is loaded during boot. A reboot is required for those third-party extensions to re-apply the device configuration.
Workaround: You must reboot after configuring SR-IOV to apply third-party device configurations.