Hyperconverged SAN Best Practice Guide
April 2020
Changes to this document
The most recent version of this document is available from here:
https://datacore.custhelp.com/app/answers/detail/a_id/838
The Host servers and the Storage arrays, managed by SANsymphony, are all physically
separate from the DataCore Server.
Converged
Only the Host servers are physically separate from the DataCore Server. All Storage arrays
managed by SANsymphony are internal to the DataCore Server.
Hyper-Converged
The Host servers are all guest virtual machines that run on the same hypervisor server as
SANsymphony, which itself runs in a virtual machine (e.g. on VMware ESXi, Citrix Hypervisor,
Oracle VM Server, or Microsoft Hyper-V). All Storage arrays managed by SANsymphony are
internal to the DataCore Server.
Hybrid-Converged
SANsymphony runs in the root/parent partition of the Windows operating system along with
the Hyper-V feature.
Note: The standalone “Microsoft Hyper-V Server” products, which consist of a cut-down
version of Windows containing only the hypervisor, Windows Server driver model, and
virtualization components, are not supported.
Here is an example where a SANsymphony Virtual Machine has had its Startup RAM fixed to
always use 30GB on startup:
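The same fixed-memory configuration can be applied with Hyper-V's PowerShell module; this is a sketch, and the virtual machine name "SANsymphony" is hypothetical:

```powershell
# Disable Dynamic Memory and pin the Startup RAM to a fixed 30 GB
# (the VM name is an example - substitute your SANsymphony VM's name)
Set-VMMemory -VMName "SANsymphony" -DynamicMemoryEnabled $false -StartupBytes 30GB
```

The VM must be powered off for the memory change to be accepted.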
This does not apply to other Hosts running in Hyper-V Virtual Machines.
Examples of IP networking settings for the following configurations are shown on the next
pages:
Fibre channel mirrors with Microsoft Cluster Services
Fibre channel mirrors without Microsoft Cluster Services
iSCSI mirrors with Microsoft Cluster Services
iSCSI mirrors without Microsoft Cluster Services
Also see
On using NIC Teaming with SANsymphony
https://datacore.custhelp.com/app/answers/detail/a_id/1300
MS Initiator ‘ports’
While the MS Initiator is technically available from any network interface on the DataCore
Server, the convention here is to give MS Initiator connections used explicitly for iSCSI
Loopback connections a separate, dedicated IP address.
While looping back on the same IP/MAC address technically works, the overall performance
for virtual disks using this I/O path will be severely restricted compared to looping virtual
disks between different MAC addresses.
Note that the DataCore Front End ports are never used as the initiator - only the target - and
that the interface used for the Microsoft Initiator never connects back to itself.
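As a sketch of this convention, a loopback login pinned to a dedicated initiator IP address can be made with Windows' built-in iSCSI PowerShell cmdlets. All IP addresses and the target IQN below are hypothetical placeholders:

```powershell
# Register the local DataCore Front End port as a target portal, then
# log in from the dedicated MS Initiator IP address so the session
# loops between two different interfaces/MAC addresses.
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.2
Connect-IscsiTarget -NodeAddress "iqn.2000-08.com.datacore:dcs1-fe1" `
    -TargetPortalAddress 192.168.10.2 `
    -InitiatorPortalAddress 192.168.10.1 `
    -IsPersistent $true
```

Binding the session with -InitiatorPortalAddress is what keeps the Front End port acting only as the target, never the initiator.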
Here is the same example but with the incorrect initiator/target loopback connections
(highlighted in red) when using vNICs.
The same MS Initiator IP addresses as those configured for iSCSI Loopback connections are
also used to connect to the corresponding ‘remote’ DataCore Server’s Front End port.
The following illustration is an example showing the correct initiator/remote target
connections (highlighted in red) when using vNICs.
The same correct initiator/remote target connections (highlighted in red) when using pNICs.
Note that unlike vNICs, an IP switch is required to provide both the ‘loop’ connection
between the physical NICs and a route to the ‘remote’ DataCore Front End port on the other
DataCore Server.
Please refer to VMware’s own documentation on how to discover and disable this setting
for your own Network Interface.
vCenter
‘Virtual machine memory usage’ should be ‘disabled’ for the SANsymphony virtual
machines
See: “Memory usage alarm triggers for certain types of Virtual Machines in ESXi 6.x”
https://kb.vmware.com/s/article/2149787
VMkernel
ISCSI vmhba
‘Delayed ACK’ = ‘Disabled’
Via ‘vSphere Client → Advanced Settings → Delayed Ack’
Then uncheck both ‘inherit from parent’ and ‘DelayedAck’
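Depending on the ESXi build, the same setting can also be inspected (and, where supported, changed) from the ESXi shell. The adapter name vmhba64 below is an example; list adapters first with 'esxcli iscsi adapter list' and verify the commands against VMware's documentation for your release:

```shell
# Show the current Delayed ACK value on the software iSCSI adapter
esxcli iscsi adapter param get -A vmhba64 | grep -i DelayedAck

# Where the build allows it, disable Delayed ACK from the shell
esxcli iscsi adapter param set -A vmhba64 -k DelayedAck -v false
```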
Port Binding
This is only supported with SANsymphony version 10.0 PSP 6 Update 5 or later.
This applies to all iSCSI Ports configured in the SANsymphony virtual machine.
For earlier versions of SANsymphony please refer to the ‘Known Issues - VMware ESXi
Hosts/ iSCSI’ section of this document.
Also see
Memory
Set to ‘High Shares’.
Set ‘Memory Reservation’ to ‘All’.
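As a sketch, both memory settings can also be applied with VMware PowerCLI; the virtual machine name "SSY-VM1" is hypothetical, and the parameter names should be verified against your PowerCLI release:

```powershell
# Raise the SANsymphony VM's memory shares to High and reserve
# all of its configured memory
$vm  = Get-VM -Name "SSY-VM1"
$res = Get-VMResourceConfiguration -VM $vm
Set-VMResourceConfiguration -Configuration $res `
    -MemSharesLevel High `
    -MemReservationGB $vm.MemoryGB
```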
Latency Sensitivity
Set to ‘High’. [1]
Via ‘VM settings → Options tab → Latency Sensitivity’.
See also:
“Performance Best Practices for VMware vSphere 6.7”
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/perf
ormance/vsphere-esxi-vcenter-server-67-performance-best-practices.pdf
Also see:
Enable or Disable UEFI Secure Boot for a Virtual Machine
ESXi 6.7
https://docs.vmware.com/en/VMware-
vSphere/6.7/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-
888353FE241C.html
ESXi 6.5
https://docs.vmware.com/en/VMware-
vSphere/6.5/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-
888353FE241C.html
[1] Testing is recommended. Depending on underlying hardware, better performance may be
possible by setting this to ‘Normal’.
Disk storage
When using RDM as storage for use in SANsymphony Disk Pools, disable ESXi’s ‘SCSI
INQUIRY Caching’. This allows SANsymphony to detect and report any unexpected changes
in storage device paths managed by ESXi’s RDM feature.
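The VMware article referenced below describes a per-device .vmx parameter for this setting. As a sketch, for an RDM mapped at SCSI address 0:1 (an example address - adjust scsiX:Y to match the RDM device), the advanced configuration entry would be:

```
scsi0:1.ignoreDeviceInquiryCache = "true"
```

The same key/value pair can also be added via the virtual machine's 'Edit Settings → Advanced → Configuration Parameters' dialog.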
Also see
Virtual Machines with RDMs Must Ignore SCSI INQUIRY Cache
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-
2852DA1C-507B-4B85-B211-B5CD3C8DC6F2.html
If VMware’s VMDirectPath I/O pass-through is not appropriate for the storage being used as
RDM, contact the storage vendor to find out their preferred VMware ‘SCSI Adapter’ for
highest performance (e.g. VMware’s own Paravirtual SCSI Adapter or the LSI Logic SCSI
Adapter).
More physical DataCore Servers can be added to create a highly available configuration. See
the following pages.
Each ESXi host’s own VMkernel is configured to login to two separate iSCSI targets:
Notes
For clarity, so as not to make the diagram too complicated, the example shows just one
SANsymphony Front-End port per DataCore Server. Use more Front-End port connections to
increase overall throughput.
Some of these Known Issues were found during DataCore’s own testing, but others were
reported by our users, where the eventual solution turned out not to involve DataCore’s own
products.
DataCore cannot be held responsible for incorrect information regarding another vendor’s
products and no assumptions should be made that DataCore has any communication with
these other vendors regarding the issues listed here.
We always recommend contacting the vendor directly for more information on anything
listed in this section.
For ‘Known issues’ that apply to DataCore Software’s own products, please refer to the
relevant DataCore Software Component’s release notes.
Microsoft Windows
ISCSI Loopback connections
SANsymphony’s ‘Preferred Server’ setting is ignored for virtual disks using iSCSI
Loopback connections.
Only the two DataCore Servers that manage the mirrored storage sources in a virtual disk
can detect the preferred server setting; the ‘other’ DataCore Servers cannot (for that same
virtual disk). Set the MPIO path state to ‘Standby’ for each virtual disk served to the ‘other’
DataCore Server(s). See below.
1. Open the MS iSCSI Initiator UI and, under Targets, select the remote target IQN
address.
2. Click Devices and identify the device target port with the disk index to update.
6. Set the path state to Standby and click OK to save the change.
This known issue only applies when SANsymphony is installed in the root partitions of
clustered, Hyper-V Windows servers where virtual disks are ‘looped-back’ to the Windows
Operating system for use by Hyper-V Virtual Machines.
A limitation exists in the DataCore SCSI Port driver - used by the DataCore Loopback Ports -
whereby if the DataCore Server providing the ‘Active’ cluster path is stopped, the remaining
DataCore Server providing the ‘Standby’ path for the Hyper-V VMs is unable to take the SCSI
Reservation (previously held by the stopped DataCore Server). This will result in a SCSI
Reservation Conflict and prevent any Hyper-V VM from being able to access the DataCore
Disks presented by the remaining DataCore Server.
In this case please use iSCSI connections as ‘Loopbacks’ for SANsymphony DataCore Disks
presented to Hyper-V Virtual Machines.
VMware ESXi
ISCSI connections (general)
Affects ESX 6.x and 5.x
Applies to SANsymphony 10.0 PSP 6 Update 5 and earlier
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore
Server Front-end port are not supported (this also includes ESXi 'Port Binding'). The Front End
port will only accept the ‘first’ login, and a unique iSCSI Session ID (ISID) will be created. All
subsequent connections coming from the same IQN, even from a different interface, will
result in an iSCSI Session ID (ISID) ‘conflict’, and the subsequent login attempt will be
rejected by the DataCore iSCSI Target. No further iSCSI logins will be allowed for this IQN
while there is already one active ISID connected.
Also note that if an unexpected iSCSI event results in a logout of an already-connected iSCSI
session then, during the reconnection phase, one of the other interfaces that shares the
same IQN but was rejected previously may now be able to log in, and this will prevent the
previously-connected interface from being able to reconnect.
See the following examples of supported and not-supported configuration when using
SANsymphony 10.0 PSP 6 Update 5 or earlier:
Two DataCore Servers each have two front end ports, each with their own IP address and each with their own IQN:
192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)
Each interface of the ESX Host connects to a separate Port on each DataCore Server:
(iqn.esx1) 192.168.1.1 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 ← ISCSI Fabric 2 → 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 ← ISCSI Fabric 1 → 192.168.1.102 (iqn.dcs2-1)
(iqn.esx1) 192.168.2.2 ← ISCSI Fabric 2 → 192.168.2.102 (iqn.dcs2-2)
This type of configuration is very easy to manage, especially if there are any connection problems.
Note that in this ‘un-supported’ example, both Interfaces from ESX1 are connected to the same Interface on the
DataCore Server.
Server Hardware
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a
Gen10 Smart Array Controller may lose connectivity to Storage Devices.
Search https://support.hpe.com/hpesc/public/home using keyword a00041660en_us
VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the
VMXNET3 network driver for Windows that was released with VMware Tools 10.3.0.
As a result, VMware has recalled the VMware Tools 10.3.0 release. This release has been
removed from the VMware Downloads page - see: https://kb.vmware.com/s/article/57796
Failover
Affects ESX 6.7 only
Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that before applying ESXi 6.7 Patch Release ESXi-6.7.0-
20180804001 (or later), failover could take in excess of 5 minutes. DataCore recommends
(as always) applying the most up-to-date patches to your ESXi operating
system. Also see: https://kb.vmware.com/s/article/56535
Citrix Hypervisor
SRs may not get re-attached automatically to a virtual machine after a reboot of the Citrix
Hypervisor Host.
This prevents the DataCore Server’s virtual machine from being able to access any storage
used by SANsymphony’s own Disk Pools, which in turn prevents other virtual machines --
running on the same Citrix Hypervisor Host as the DataCore Server -- from being able to
access their virtual disks. DataCore recommends serving all virtual disks to virtual machines
that are running on the ‘other’ Citrix Hypervisor Hosts rather than directly to virtual
machines running on the same Citrix Hypervisor as the DataCore Server virtual machine.
October 2019
Added
General
This document has been reviewed for SANsymphony 10.0 PSP 9.
No additional settings or configurations are required.
June 2019
Added
Microsoft Hyper-V
The SANsymphony virtual machine settings - Memory
When running a DataCore Server inside a Virtual Machine, do not enable Hyper-V's 'Dynamic Memory' setting for
the SANsymphony Virtual Machine as this may cause less memory than expected to be assigned to
SANsymphony’s cache driver.
VMware ESXi
Disk storage - ESXi Raw Device Mapping (RDM)
When using RDM as storage for use in SANsymphony Disk Pools, disable ESXi’s ‘SCSI INQUIRY Caching’. This allows
SANsymphony to detect and report any unexpected changes in storage device paths managed by ESXi’s RDM
feature.
Also see
Virtual Machines with RDMs Must Ignore SCSI INQUIRY Cache
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-2852DA1C-507B-4B85-
B211-B5CD3C8DC6F2.html
May 2019
Added
Storage virtualization deployment models
New section added.
Microsoft Hyper-V deployment examples
The information on Hyper-converged Hyper-V deployment has been rewritten along with new detailed diagrams
and explanatory technical notes.
October 2018
Added
The SANsymphony virtual machine settings - ISCSI Settings - General
DataCore Servers in virtual machines running SANsymphony 10.0 PSP 7 Update 2 or earlier should run the
appropriate PowerShell scripts found attached to the Customer Services FAQ:
SANsymphony - iSCSI Best Practices
https://datacore.custhelp.com/app/answers/detail/a_id/1626
September 2018
Added
VMware ESXi deployment
The information in this section was originally titled ‘Configure ESXi host and Guest VM for DataCore Server for low
latency’. The following settings have been added:
Turbo Boost should be disabled on the ESXi Host
Known Issues
Refer to the specific entries in this document for more details.
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end
port is not supported (this also includes ESXi 'Port Binding'). Applies to SANsymphony 10.0 PSP 6 Update 5
and earlier.
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) may lose connectivity to Storage Devices.
vHBAs and other PCI devices may stop responding when using Interrupt Remapping
VMware tools 10.0.2.0 issues and 10.0.3.0 recall with workaround.
Updated
Configure ESXi host and Guest VM for DataCore Server for low latency
This section has been renamed to ‘VMware ESXi deployment’. The following settings have been updated:
CPU
Virtual SCSI Controller
Disable interrupt coalescing
Refer to the specific entries in this document for more details and make sure that your existing settings are
changed/updated accordingly.
Removed
Microsoft Windows – Configuration Examples
Examples using the DataCore Loopback Port in a Hyper-V Cluster configuration have been removed as
SANsymphony is currently unable to assign PRES registrations to all other non-active Loopback connects for a
virtual disk. This could lead to certain failure scenarios where MSCS Reservation Conflicts prevent access for
some/all of the virtual disks on the remaining Hyper-V Cluster. Use ISCSI loopback connections instead.
DataCore are no longer making recommendations regarding NUMA/vCPU affinity settings, as we have found that
different server vendors expose different NUMA settings that can be changed, and many users have reported
that making these changes made no difference and, in extreme cases, caused slightly worse performance than
before. Users who may have already changed these settings based on the previous document and are running
without issue do not need to revert anything.
Appendix A
This has been removed. The information that was there has been incorporated into relevant sections throughout
this document.
May 2018
Added
Updated the VSAN License feature to include 1 registered host
Removed example with MS Cluster using Loopback ports (currently not supported)
Updated
DataCore Server in Guest VM on ESXi table (corrected errors, added references to VMware documentation and
added Disable Delayed ACK)
Removed
ESXi 5.5 because of issues only fixed in ESXi 6.0 update2
https://kb.vmware.com/s/article/2129176
September 2016
Added
First publication of document
DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore
product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other
products, services and company names mentioned herein may be trademarks of their respective owners.
ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS”
AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND
THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY
EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO
LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER
INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST
HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.
No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-
readable form without the prior written consent of DataCore Software Corporation.