HCI1691
Core Storage Best Practice Deep Dive
Configuring the New Storage Features

Jason Massae – VMware
Technical Marketing Architect

Cody Hosterman – Pure Storage
Technical Director, VMware Solutions Engineering

#vmworld #HCI1691
©2020 VMware, Inc.
Disclaimer

This presentation may contain product features or functionality that are currently under development.

This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.

Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.

Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new features/functionality/technology discussed or presented have not been determined.

The information in this presentation is for informational purposes only and may not be incorporated into any contract. There is no commitment or obligation to deliver any items presented herein.


Your Speakers

Jason Massae
Technical Marketing Architect, vVols and Core Storage
VMware
@jbmassae

Cody Hosterman
Technical Director for VMware Solutions Engineering
Pure Storage
@CodyHosterman
https://codyhosterman.com
Reliability is Key
Agenda
• GSS Top Storage Issues
• NVMe-oF
• Latency PSP
• iSCSI
• Queuing
• VMFS
• Misc
What are the Most Common Storage Issues GSS Hears?
"Hello, this is VMware Support. Is this a new or existing case?"

The most common issues reported, across iSCSI, FC, UNMAP and NAS:
• Port binding
• Configuration
• Pathing
• Connectivity
• Queuing
• Latency
• VMFS or VM HW version
NVMe-oF
vSphere 7
NVMe over Fabric (NVMe-oF)

NVMe-oF provides distance connectivity to NVMe devices with minimal additional latency over a native local NVMe device.

Key features:
• Supports multiple transports: Fibre Channel, InfiniBand, RoCE, TCP
• Simple command set
• Up to 64K queues per NVMe controller
• Up to 64K commands per queue
• Removes SCSI from the storage stack

(Diagram: NVMe host software and host-side transport abstraction connecting over the fabric transports to the controller-side transport abstraction and NVMe SSDs.)
NVMe Access Modes
Protocols supported are FC and RoCE v2.

NVMe-oF can be accessed in:
• Active-Active access mode
• Asymmetric Namespace Access (ANA) mode

SCSI ALUA states and their NVMe ANA equivalents:
• Active Optimized → Active Optimized
• Active Non-Optimized → Active Non-Optimized
• Standby → Inaccessible
• Unavailable → Persistent Loss
• Offline → Persistent Loss
NVMe-oF Support in vSphere 7
High-performance ESXi access to external storage arrays.

Overview
• NVMe is a highly optimized controller interface that significantly improves performance for enterprise workloads
• Initial NVMe-oF protocols supported:
  • FC
  • RDMA (RoCE v2)

(Diagram: each host runs multipath software and an NVMe driver on top of an FC or RoCE transport driver, connecting over the FC or RDMA fabric to the storage NVMe subsystem and its single- or multipath devices.)


About NVMe-oF over RDMA
NVMe over RDMA runs over lossless Ethernet.

RDMA is used to transport NVMe commands and payload between the host and the storage array.

Prerequisites:
• Host must have RNICs (RDMA-capable NICs)
• Physical switches must be in a lossless config
• Target storage is a VMware-certified storage array (on the VCG)

(Diagram: ESXi host with multipath software, NVMe RDMA initiator and RNIC driver connecting over the RDMA fabric to the target RNIC driver and NVMe-RDMA target.)


Example of Configuring NVMe-oF RDMA
Work with your storage vendor for specific configuration details.

Enable RoCEv2 on the RNIC driver module, then verify the RNICs and module parameters after the reboot.

Mellanox:
#esxcli system module parameters set -m nmlx4_core -p enable_rocev2=1
#reboot

Broadcom:
#esxcli system module parameters set -m bnxtnet -p disable_roce=0
#esxcli system module parameters set -m bnxtroce -p disable_rocev2=0
#reboot

QLogic:
#esxcli system module parameters set -m qedentv -p enable_roce=1
#reboot

There may be other settings depending on the RNIC vendor.
Screen Shot of Available RDMA Adapters
(Screenshot: the vSphere Client listing the host's available RDMA adapters.)
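The same inventory is visible from the CLI. A minimal sketch, assuming an ESXi 6.7 or 7.0 host with RDMA-capable NICs installed; the adapter name vmrdma0 is illustrative, so verify the exact options on your build:

List RDMA-capable devices and their state
esxcli rdma device list

Show statistics for one adapter
esxcli rdma device stats get -d vmrdma0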
Configure Network for NVMe-oF RDMA
(Screenshot: VMkernel networking configuration for the RDMA-capable uplink.)
Add NVMe over RDMA Adapter
(Screenshot: adding a software NVMe over RDMA storage adapter in the vSphere Client.)
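The software NVMe over RDMA adapter can also be created from the CLI instead of the vSphere Client. A minimal sketch, assuming ESXi 7.0 and that vmrdma0 is the RDMA device backing your NVMe-oF VMkernel port; names are illustrative, so confirm the syntax against the vSphere 7 documentation for your build:

Create the software NVMe over RDMA adapter on top of the RDMA device
esxcli nvme fabrics enable --protocol RDMA --device vmrdma0

Confirm the new vmhba shows up
esxcli nvme adapter list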
Adding NVMe Storage Controller
(Screenshot: adding the NVMe-oF storage controller for the new adapter.)
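Discovery and connection of the controller can likewise be scripted. A minimal sketch, assuming ESXi 7.0, a software NVMe over RDMA adapter named vmhba65, and an example target portal of 192.168.100.10:4420; the adapter name, IP, port and subsystem NQN are placeholders, and some array vendors automate this step for you:

Discover the subsystems behind the target portal
esxcli nvme fabrics discover -a vmhba65 -i 192.168.100.10 -p 4420

Connect to a discovered subsystem by its NQN
esxcli nvme fabrics connect -a vmhba65 -i 192.168.100.10 -p 4420 -s <subsystem NQN>

Verify the controller and namespaces
esxcli nvme controller list
esxcli nvme namespace list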
NVMe-oF Initial Release
Allowing partners and customers to evaluate NVMe-oF.

Supported features
• FC
• RDMA (RoCE v2)
• VAAI: UNMAP, WRITE SAME, locking
• VAAI: XCOPY
• Multipathing with HPP
• Active/Active
• ANA (Asymmetric Namespace Access)

Unsupported features
• RDM
• Boot from SAN
• Shared VMDK
• VAAI plugins
• NVMe-oF namespace for core dump
• vVols*


Latency PSP
Smarter Storage
Enhanced Load Balancing Path Selection Policy
A latency-based path selection algorithm approach for Round Robin (VMW_PSP_RR).

Storage considerations
• Latency
• Pending I/Os

Sampling
• Number of I/Os
• Time

How it works:
• Monitor incoming I/Os as sample I/Os on each path
• Track issue and completion time, and calculate the moving average latency per path:
  P(avgLatency) = Σ(CompletionTime - IssueTime) / (sampled I/O count)
• Stop the sampling window after 'm' samples
• Select the optimal path using the expected wait time:
  IO Wait Time = P(avgLatency) * (OIO + 1)
• Restart the sampling window after the 'T' interval

Example from the diagram: path P1 has OIO=5 and latency=1 ms, path P2 has OIO=2 and latency=5 ms. P1's wait time is 1 ms * (5+1) = 6 ms versus P2's 5 ms * (2+1) = 15 ms, so P1 is selected even though it has more pending I/Os.


Why Use the New Latency-Based Path Selection?
Because standard Round Robin is dumb.

• A new PSP option, introduced silently in 6.7 and officially in 6.7 Update 1
• Samples latency and balances for I/O size, so it intelligently decides "this path is not ideal"
• Detects latency imbalance across paths and chooses the best paths for your I/Os
• If you have an unstable network, or if it becomes unstable, ESXi can make good decisions
• Paths that are "bad" will not be used until they improve
• With Active/Active replication, in the situation of uniform access, this can be very helpful
Latency-Based Path Selection Configuration Options

num-sampling-cycles: Default: 16
• Valid values are 16-2147483647

latency-eval-time: Default: 180000 (ms)
• Valid values are 30000-2147483647

useANO: Default: 0
• Valid values are 0 or 1
• If set to 1, ESXi ignores ALUA settings and uses active non-optimized paths if their latency is deemed good enough
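If you only want to move a single existing device to the latency sub-policy, rather than adding a claim rule as shown on the next slide, it can be set per device. A minimal sketch, with naa.xxxx as a placeholder device identifier, assuming ESXi 6.7 U1 or later:

Switch the Round Robin sub-policy for one device to latency
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxx --type latency

Confirm the active sub-policy for that device
esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxxx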


Configuring Latency-Based Path Selection Policy

SSH:
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "Vendor" -M "Model" -P "VMW_PSP_RR" -O "policy=latency"

PowerCLI:
$esxcli = Get-VMHost <your host name> | Get-EsxCli -V2
$satpArgs = $esxcli.storage.nmp.satp.rule.add.createArgs()
$satpArgs.description = "My new rule"
$satpArgs.vendor = "Vendor"
$satpArgs.model = "Model"
$satpArgs.satp = "VMW_SATP_ALUA"
$satpArgs.psp = "VMW_PSP_RR"
$satpArgs.pspoption = "policy=latency"
$result = $esxcli.storage.nmp.satp.rule.add.invoke($satpArgs)
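SATP rules are applied at claim time, so existing devices typically keep their current policy until they are reclaimed or the host is rebooted. To confirm the rule and what a given device is actually using (naa.xxxx is a placeholder):

esxcli storage nmp satp rule list
esxcli storage nmp device list --device naa.xxxx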


iSCSI
VMFS and vVols
Avoid iSCSI Issues: Follow the Guidelines
Especially network configurations! Use docs.vmware.com

Network
• Isolated/dedicated traffic (VLANs), redundant, dedicated interfaces and connections
• Port binding is best practice

Load balance
• Distribute paths to LUNs among the storage processors for optimal load balancing
• Don't mix protocols; it's unsupported

Redundancy
• Make sure the initiator is connected to all network adapters used for iSCSI

Storage
• Don't share LUNs outside the virtual environment
• Place each LUN on a RAID group capable of the necessary performance


What iSCSI Adapters Can You Use?
vSphere supports 3 types of iSCSI adapters: Software, Dependent and Independent.

Software iSCSI
• Standard NIC; depends on VMware networking
• iSCSI initiator and TCP/IP stack run in the VMkernel on top of the NIC driver

Dependent hardware iSCSI
• Third-party adapter with a TCP Offload Engine that offloads iSCSI, but depends on VMware networking and iSCSI configuration

Independent hardware iSCSI
• Third-party iSCSI HBA that offloads iSCSI, network processing and management from the host

Notes
• Mixing software and hardware adapters is not supported
• IPv4 and IPv6 are supported on all 3
• Software iSCSI provides near line rate
• Hardware iSCSI provides lower CPU utilization
• Jumbo frames (MTU 9000) are recommended*


Jumbo Frames Explained
Your switch can have a different MTU than your VMkernel port.

• Your switch MTU and VMkernel MTU can be different
• The switch MTU cannot be smaller than the VMkernel MTU
  • OK: switch = 9000, vmk = 9000
  • Not OK: switch = 1500, vmk = 9000

Verify end to end with vmkping:
vmkping -I vmkX x.x.x.x -d -s 1472   (standard 1500 MTU)
vmkping -I vmkX x.x.x.x -d -s 8972   (jumbo 9000 MTU)
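The MTU has to be raised end to end: the physical switch ports, the vSwitch (or distributed switch) and the VMkernel port. A minimal sketch for a standard switch, assuming vSwitch1 and vmk1 are the iSCSI switch and VMkernel port; the names and target IP are illustrative:

Raise the MTU on the standard switch and the iSCSI VMkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

Confirm jumbo frames pass without fragmentation
vmkping -I vmk1 <target IP> -d -s 8972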
Should You Use Teaming or Port Binding?
Best practice is to use port binding for iSCSI.

Port binding (same subnet)
• Fails over I/O to other paths
• Load balancing over multiple paths
• Initiator creates sessions from all bound ports to all targets

NIC teaming (different subnets)
• Only provides fault tolerance at the NIC/port level
• Use if routing is required
• Able to use a separate gateway per VMkernel port


How Do You Configure Multiple Adapters for iSCSI or iSER?
Adapter mapping on separate vSphere standard switches.

Multiple-switch config
• Designate a separate switch for each virtual-to-physical adapter pair
  (for example, vSwitch1: vmk1 "iSCSI1" → vmnic1; vSwitch2: vmk2 "iSCSI2" → vmnic2)
• Physical network adapters must be on the same subnet as the storage
How Do You Configure Multiple Adapters for iSCSI or iSER?
Adapter mapping on a single vSphere standard switch.

Single-switch config
• Add all NICs and VMkernel adapters to a single vSphere switch
• With NIC teaming, override the failover order so each iSCSI VMkernel port maps to a single active uplink (for example, vmk1 "iSCSI1" → vmnic1, vmk2 "iSCSI2" → vmnic2)
• NIC teaming is not supported for iSER
• In vSphere 6.5 and later you can use a single switch if each iSCSI VMkernel port is bound to a single NIC (port binding); a CLI sketch for binding the ports follows below
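Once each iSCSI VMkernel port has exactly one active uplink, bind the ports to the software iSCSI adapter. A minimal sketch, assuming the software iSCSI adapter is vmhba65 and the bound ports are vmk1 and vmk2; adapter and port names are illustrative:

Bind each compliant VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2

Verify the bindings
esxcli iscsi networkportal list --adapter vmhba65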


Separate Storage Network for iSCSI
Think FC, but using Ethernet.

• Keep iSCSI on its own vSwitch and uplinks, separate from management, vMotion, vSAN, FT, etc.
  (for example, vSwitch1: vmk0 MGMT, vmk1 vMotion, vmk2 vSAN on vmnic0/vmnic1;
   vSwitch2: vmk3 iSCSI1, vmk4 iSCSI2 on vmnic2/vmnic3, iSCSI only, with port binding)
Queuing
"Little's Law: The long-term average number of customers in a stable system, L, is equal to the long-term average effective arrival rate, λ, multiplied by the average time a customer spends in the system, W."

L = λW
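Applied to storage, L is the number of outstanding I/Os (the queue depth in use) and W is the average latency, so the achievable throughput is the queue depth divided by the latency. This is the arithmetic used in the queuing examples later in this section:

\lambda = \frac{L}{W} = \frac{32\ \text{outstanding I/Os}}{0.5\ \text{ms}} = 64{,}000\ \text{IOPS}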
What is a Queue Depth Limit?
A queue is a line, and a queue depth limit is how "wide" that line is.

One grocery clerk (queue depth limit of 1)
• Can help one customer at a time
• If there are two customers, one must wait for the first to finish (added latency)

Two grocery clerks (queue depth limit of 2)
• Two customers can be helped at a time
• Neither has to wait (no added latency)
Queue Limits

(Diagram: an I/O passes from the application in the guest, through the vSCSI adapter, the ESXi kernel and the HBA driver, across the fabric to the array; latency can be added at each hop.)

Queue depth limits along the path:
• Virtual disk queue depth limit
• vSCSI adapter queue depth limit
• Hypervisor (per-device) queue depth limit
• HBA device queue depth limit
Want Higher Performance?
Use the Paravirtual SCSI adapter.

• For high-performance workloads, the Paravirtual SCSI (PVSCSI) adapter is best
• Better CPU efficiency with heavy workloads
• Higher default and maximum queue depths
• VMware Tools includes the driver

Setting                             Value
Default Virtual Disk Queue Depth    64
Maximum Virtual Disk Queue Depth    256
Default Adapter Queue Depth         256
Maximum Adapter Queue Depth         1,024

https://kb.vmware.com/kb/1010398
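Changing an existing VM's controller type can be done in the vSphere Client or scripted in the same PowerCLI style used earlier in this session; the VM must be powered off and the guest needs the PVSCSI driver from VMware Tools. A minimal sketch with a placeholder VM name:

Get-VM -Name "MyVM" | Get-ScsiController | Set-ScsiController -Type ParaVirtual

Raising the queue depths beyond the defaults also requires additional in-guest settings; see the KB above.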
HBA Device Queue
Depending on your HBA vendor, the default value varies.

• This is an HBA setting which controls how many I/Os may be queued on a device
• Values are configured via esxcli
• Changing the value requires a reboot of ESXi
• https://kb.vmware.com/kb/1267

Type             Default Value   Value Name
QLogic           64              qlfxmaxqdepth
Brocade          32              bfa_lun_queue_depth
Emulex           32              lpfc0_lun_queue_depth
Cisco UCS        32              fnic_max_qdepth
Software iSCSI   128             iscsivmk_LunQDepth
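The mechanism is the same for each vendor: set the driver module parameter from the table and reboot. A minimal sketch for the software iSCSI adapter, using the parameter name from the table; the value shown is an example, so check the current setting first:

List the current module parameters
esxcli system module parameters list -m iscsi_vmk

Set the LUN queue depth, then reboot for it to take effect
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=128
reboot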


Hypervisor-Level Queue Depth Limit

• The actual device queue (DQLEN) will be the minimum of the device queue depth or the hypervisor-level limit for that specific device
• VMFS volumes and RDMs default to 32
• Protocol Endpoints default to 128 (Scsi.ScsivVolPESNRO)
• These values can be changed via esxcli (see the sketch below)
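Both limits are adjustable, per device or via an advanced option. A minimal sketch, with naa.xxxx as a placeholder device identifier and example values; don't raise these beyond what your array vendor recommends:

Show the current device limits, including DQLEN and No of outstanding IOs with competing worlds
esxcli storage core device list -d naa.xxxx

Raise the per-device outstanding I/O limit (DSNRO)
esxcli storage core device set -d naa.xxxx -O 64

Raise the default outstanding I/O limit for vVol protocol endpoints
esxcli system settings advanced set -o /Scsi/ScsivVolPESNRO -i 256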
Some Quick Math…
Let's suppose the latency on average is 0.5 ms.

1 second = 1,000 ms
1,000 ms / 0.5 ms per IO = 2,000 IOPS per PE (or per VMFS) at one outstanding I/O

VMFS default: 2,000 IOPS x 32 max outstanding I/Os = 64,000 IOPS
PE default: 2,000 IOPS x 128 max outstanding I/Os = 256,000 IOPS

These IOPS values are per host.
Difference Between Storage DRS and Storage I/O Control
Latency on a VM or latency on a datastore?

Storage DRS
• Moves VMs around based on hitting a latency threshold
• This is the VM-observed latency (includes any latency induced by queueing)

Storage I/O Control
• Throttles VMs based on hitting a datastore latency threshold
• This is the disk latency (does not include any latency induced by queueing)
vVols and Queue Depth Limits

• vVols provide the ability for virtual disks to be individual block volumes, with no VMFS
• vVols are presented to ESXi hosts via a Protocol Endpoint (PE)
• There is a misconception that there is no more shared queue depth because virtual disks are their own volumes
  • vVols share the queue depth limit of their PE
VMFS
Evolution of VMFS

2011 - vSphere 5.0 - VMFS-5
• 64TB datastores
• >2TB disk support
• LVM2
• RDM, ATS locking, performance and scalability improvements

2016 - vSphere 6.5 - VMFS-6
• 512e sector support
• Automatic UNMAP
• 512/2K LUN/path support
• Affinitized resource allocation
• VMFS-3 EOL

2018 - vSphere 6.7 - VMFS-6
• 4K native sector support
• Auto UNMAP for SEsparse snapshots
• Configurable UNMAP rate
• XCOPY support for array-specific transfer size (SCSI VPD page)
• 1K/4K LUN/path support
• VOMA fix mode for VMFS-5

2020 - vSphere 7.0 - VMFS-6
• VMFS-6 default file system
• VMFS used for boot device
• Enhanced regional allocation (Affinity 2.0)
• "Shared VMDK" (clustered VMDK) support
• VOMA fix mode support for VMFS-6


VMFS Affinity 1.0
The Affinity Manager in 1.0 does not have a cluster-wide view of available storage resource clusters (RCs).

• The Affinity Manager has to "find" an available Resource Cluster to write new I/O to
• This can add additional I/O overhead from back-and-forth operations between the VMFS file layer, the Affinity Manager, the Resource Manager and the storage
Affinity 2.0
More efficient first-write I/O, reducing the "cost" of the operation.

• The new Affinity Manager creates a cluster-wide region map of storage Resource Clusters
• No more back and forth trying to locate a free RC
• Thin-provisioned first-write I/Os are written more efficiently

(Diagram: the VMFS file layer and allocation logic consult the region map, with asynchronous management of the existing RC pool, new RC pool and SFBC pool against the storage.)


Shared VMDK for WSFC on VMFS
The initial release will only support FC.

Configuration maximums for clustered VMDKs
• VMs in a WSFC cluster: 5
• WSFC clusters: 3
• Clustered VMDKs per host: 128

Requirements for using clustered VMDKs
• vSphere 7
• Set the Windows cluster parameter QuorumArbitrationTimeMax to 60
• FC storage supporting SCSI-3 PR type WEAR and ATS
• Enable the Clustered VMDK feature on the datastore
• Use EZT (eager zeroed thick) for shared VMDKs


How Do You Determine VMFS Datastore Sizing?
Do some research, check with your vendor!

Questions
• How big should you make your datastore?
• How many VMs?

Array-specific questions
• Does your array support VAAI (especially hardware-assisted locking)?
• Is there a performance limit to each volume (queue depths)?
• Are there array features you plan on using?


Miscellaneous Best Practices
Virtual Disk Choice
What should I choose? (Showing supported ESXi releases only)

Benefit            VMFS Thin  VMFS Lazy Thick    VMFS Eager Thick  vVol Thin  vVol Thick              vRDM                 pRDM
Space Efficiency   High       Medium (no UNMAP)  Low               High       Medium (vendor dep.)    N/A                  N/A
Performance        Low        Medium             High              High       High                    High                 High
SCSI Reservations  ESXi 6.5   ESXi 7.0           ESXi 7.0          ESXi 6.7   ESXi 6.7                ESXi 6.5 (CIB only)  ESXi 6.5
Snapshot Impact    High       High               High              Low        Low                     N/A                  N/A
Creation Time      Low        Low                High              Low        Low                     High                 High
General Best Practices
Important things to remember

Syslog/NTP
• Important. The more integrated applications are, the more important the various logs become.
• Having all logs synced and stored is extremely helpful (ESXi, vCenter, array, VASA).
• Deploy a syslog target (Log Insight, for example). Use NTP sources everywhere.

UEFI
• UEFI is a recommended best practice for Windows VMs, but UEFI boot causes a 7 MB read that many arrays don't support.
• Be on VMware HW version 14 or a modern release of ESXi:
  • HW 14 doesn't issue the large read
  • New versions of ESXi split large I/Os by looking at the Maximum Transfer Length of the device VPD
  • Or set Disk.DiskMaxIOSize (see the sketch below)
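If you cannot move to HW version 14 or a current ESXi release, the host can be told to split large I/Os itself. A minimal sketch; 4096 KB is an example value, and the right limit depends on your array's maximum transfer length:

Check the current setting, then cap the largest I/O ESXi will issue
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096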


Session Resources
http://bit.ly/HCI1691
vVols Related VMworld Sessions
• Core Storage Best Practice Deep Dive: Configuring the New Storage Features [HCI1691]
• VCF and vVols: Empower Your External Storage [HCI2270]
• vSphere Virtual Volumes (vVols): Modernize Your External Storage [HCI1692]
• vSphere Cloud Native Storage with Virtual Volumes and SPBM: Better Together [HCI2089]
• Virtual Volumes and Site Recovery Manager Deep Dive with Dell EMC Storage [HCI1451]
• Better Together: Site Recovery Manager with Virtual Volumes [HCI1455]


Storage Related VMworld Sessions
• Core Storage Best Practice Deep Dive: Configuring the New Storage Features [HCI1691]
• Deep Dive: Storage for VMware Cloud Foundation [HCI2362]
• VMFS Shared Storage with NVMe Over Fabrics [OCTO1128]
• vSphere External Storage for the Hybrid Cloud [HCI2071]
• Technical Deep Dive on Cloud Native Storage for vSphere [HCI2160]
• Storage Management - How to Reclaim Storage Today on vSAN, VMFS and vVols with John Nicholson [HCI2726]
Please Fill Out Your Survey
Thank you!
