Vasfi Gucer
Warren Hawkins
Nilesh Khankari
Devendra Mahajan
Redbooks
IBM Redbooks
October 2024
SG24-8549-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to IBM Storage Virtualize Version 8.6 and later, as well as VMware vSphere version 8.0.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 IBM and VMware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Overview of IBM Storage Virtualize and IBM Storage FlashSystem. . . . . . . . . . . . . . . . 2
1.2.1 IBM Storage Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 IBM Storage FlashSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Key IBM Storage Virtualize terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Key VMware terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Overview of IBM Storage FlashSystem with VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chapter 6. IBM Storage plug-in for vSphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.1 Supported platforms for the IBM Storage plug-in for vSphere . . . . . . . . . . . . . . . . . . 126
6.2 Architecture of the IBM Storage plug-in for vSphere. . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.2.1 Required resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.3 Downloading and deploying the OVA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3.1 First time boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
6.3.2 Registering and unregistering the plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.3.3 Unregistering the plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.3.4 Checking the status of the plug-in. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.4 Upgrading from a previous version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.4.1 What you need to know before upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.4.2 Upgrade process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.4.3 Updating Photon OS packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.5 Using the dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.5.1 Navigating to the dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.5.2 Adding storage systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.5.3 Refreshing the inventory of registered storage systems . . . . . . . . . . . . . . . . . . . 143
6.5.4 Editing storage systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.5.5 Deleting storage systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6.5.6 Viewing more information about storage systems . . . . . . . . . . . . . . . . . . . . . . . 147
6.6 Datastore actions and panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.6.1 Creating datastore(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6.6.2 Summary panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.6.3 Expanding a datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.6.4 Deleting a datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.7 Host panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.7.1 Summary panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.7.2 Configure panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.7.3 Creating a new host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.8 Cluster panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.8.1 Summary panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.8.2 Configuration panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.8.3 Manage hosts and host clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.9 Datastore snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.9.1 Navigating to datastore snapshot panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.9.2 Taking a manual snapshot of a VMFS datastore . . . . . . . . . . . . . . . . . . . . . . . . 170
6.9.3 Creating a new datastore from a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.9.4 Deleting a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.10 Volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.10.1 Viewing volume group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.10.2 Moving datastore to a volume group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.10.3 Removing datastore from a volume group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.11 Enforcing practical usage behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.11.1 Child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.11.2 Ownership groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.11.3 Provisioning policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.11.4 Volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.12 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.12.1 Collecting a snap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.12.2 Copying a snap to an IBM storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.12.3 Viewing the logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.12.4 vSphere task names being misreported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.12.5 Restarting the vSphere Client Service in vCenter. . . . . . . . . . . . . . . . . . . . . . . 180
6.12.6 Pinging the appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.12.7 Changing the network configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.12.8 Mapping datastore to offline hosts on the storage system . . . . . . . . . . . . . . . 181
6.12.9 Revalidating storage system in vSphere plugin . . . . . . . . . . . . . . . . . . . . . . . 181
6.12.10 Storage devices refresh timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Chapter 9. Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
9.1 Collecting data for support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
9.1.1 Data collection guidelines for SAN Volume Controller and IBM FlashSystem . . 258
9.1.2 Data collection guidelines for VMware ESXi . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
9.1.3 Data collection guidelines for VMware Site Recovery Manager . . . . . . . . . . . . . 259
9.1.4 Data collection guidelines for IBM Spectrum Connect (VASA or vVols) . . . . . . . 260
9.2 Common support cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
9.2.1 Storage loss of access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
9.2.2 VMware migration task failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
9.2.3 Registration of vVol/VASA Storage Provider failures . . . . . . . . . . . . . . . . . . . . . 263
9.2.4 vVol Metadata migration before upgrading to 8.6.1.x or newer . . . . . . . . . . . . . 271
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Easy Tier®, FlashCopy®, HyperSwap®, IBM®, IBM Cloud®, IBM FlashCore®, IBM FlashSystem®, IBM Spectrum®, Redbooks®, Redbooks (logo)®, Storwize®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Red Hat is a trademark or registered trademark of Red Hat, Inc. or its subsidiaries in the United States and
other countries.
VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication details the configuration and best practices for using the
IBM Storage FlashSystem family of storage products within a VMware environment.
Topics illustrate planning, configuring, operations, and preferred practices that include
integration of IBM FlashSystem® storage systems with the VMware vCloud suite of
applications:
VMware vSphere Web Client (vWC)
vSphere Storage APIs - Storage Awareness (VASA)
vSphere Storage APIs – Array Integration (VAAI)
VMware Site Recovery Manager (SRM)
VMware vSphere Metro Storage Cluster (vMSC)
Embedded VASA Provider for VMware vSphere Virtual Volumes (vVols)
This book is intended for presales consulting engineers, sales engineers, and IBM clients who
want to deploy IBM Storage FlashSystem storage systems in virtualized data centers that are
based on VMware vSphere.
Authors
This book was produced by a team of specialists from around the world.
Also, we express our gratitude to the authors of the previous edition of this book:
Markus Oscheka
IBM Germany
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an IBM Redbooks residency project and help write a book
in your area of expertise, while honing your experience using leading-edge technologies. Your
efforts will help to increase product acceptance and customer satisfaction, as you expand
your network of technical contacts and relationships. Residencies run from two to six weeks
in length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
This IBM Redbooks publication describes the configuration and best practices for using
IBM Storage Virtualize based storage systems within a VMware environment. This version of
the book addresses IBM Storage Virtualize Version 8.6 with VMware vSphere 8.0.
This publication is intended for Storage and VMware administrators. The reader is expected
to have a working knowledge of IBM Storage Virtualize and VMware. Initial storage and
server setup is not covered.
IBM is a VMware Technology Alliance Partner. IBM storage is deployed in the VMware
Reference Architectures Lab. Therefore, VMware products run well on IBM storage.
This book focuses on IBM Storage FlashSystem as the storage platform. However, other
IBM Storage Virtualize based storage products work in a similar fashion.
The primary function of IBM Storage Virtualize is block-level storage virtualization. IBM
defines storage virtualization as a technology that makes one set of resources resemble
another set of resources, preferably with more desirable characteristics.
The storage that is presented to the host is virtual and does not correspond to a specific
back-end storage resource so that IBM Storage Virtualize can perform many enterprise-class
features without impacting the hosts.
IBM Storage Virtualize first came to market in 2003 in the form of the IBM SVC. In 2003, the
SVC was a cluster of commodity servers attached to a storage area network (SAN). The SVC
did not contain its own storage. Instead, SVC used back-end storage that was provided from
other storage systems. At the time of writing, IBM Storage Virtualize supports more than
500 different storage controller models.
IBM Storage Virtualize provides enterprise-class features that include:
IBM Easy Tier® for workload balancing
Multi-tenancy
Thin-provisioning
2-site and 3-site replication and point-in-time copy
Software-defined storage (SDS) capability to cloud service providers (CSPs)
IBM Storage Virtualize is software, so new features and functions are added regularly.
IBM Storage Virtualize includes several technologies so that it integrates well with VMware.
IBM Storage FlashSystem 5200 supports a range of storage media with an emphasis on
high-performance Non-Volatile Memory Express (NVMe) drives. For most storage
administrators, capacity and performance optimization include IBM FlashCore® Module
(FCM) modules. FCMs are the next generation of IBM Storage FlashSystem Micro Latency
Modules. They offer high throughput and built-in compression without a performance penalty.
On the high end is Storage Class Memory (SCM), which is built around Intel 3D-Xpoint and
Samsung Z-NAND technologies. SCM offers high throughput and low latency, with limited
capacities. The IBM Storage FlashSystem 5200 control enclosure supports NVMe SCM drives,
FCMs, and NVMe solid-state drives (SSDs); serial-attached SCSI (SAS) SSDs are supported
only by using expansion enclosures.
Figure 1-1 shows the front of an IBM Storage FlashSystem 5200 enclosure. It is configured
with 12 dual-ported NVMe drive slots.
Each system can be scaled out to include additional control enclosures or scaled up to include
more expansion enclosures.
Figure 1-2 shows the back of an IBM Storage FlashSystem 5200 enclosure. In the center are
two canisters that run the IBM Storage Virtualize software and perform I/O. The canisters are
identical. The far left and right sides contain redundant power supplies.
IBM Storage FlashSystem 5200 can be configured with various I/O ports that support various
protocols. The most common configuration is 8 Fibre Channel (FC) ports. These ports can
support the Fibre Channel Protocol (FCP) or Non-Volatile Memory Express over Fibre
Channel (NVMe over FC).
Canister: The IBM Storage FlashSystem hardware component that runs the IBM Storage Virtualize software.
I/O group: A pair of nodes or canisters that work together to service I/O.
Managed disk (MDisk): Either an array of internal storage or a logical unit number (LUN) provided by an external storage controller.
Host: The server that uses the storage; an ESXi server in this case.
IBM HyperSwap®: A business continuity solution that copies data across two sites.
1.2.4 Key VMware terminology
Table 1-2 lists the key VMware terminology.
Native multipathing (NMP): The default VMware multipathing plug-in. Most of the FC SAN-based storage systems are controlled by NMP.
Claim rules: Claim rules determine which multipathing module owns the paths to a storage device. They also define the type of multipathing support that the host provides to the device.
VMware vSphere Virtual Volumes (vVols): vVols are virtual machine disk (VMDK) granular storage entities that are exported by storage arrays. vVols are exported to the ESXi host through a small set of Protocol Endpoints (PEs). PEs are part of the physical storage fabric, and they establish a data path from virtual machines (VMs) to their respective vVols on demand. Storage systems enable data services on vVols. The results of these data services are newer vVols. Data services configuration and management of virtual volume systems are exclusively done out-of-band with respect to the data path.
1.3 Overview of IBM Storage FlashSystem with VMware
Figure 1-3 summarizes the various VMware and IBM software components that are
discussed in this publication.
Integrating IBM Storage Virtualize with VMware includes the following key components:
Integration with vSphere Client. For more information about the Integration with vSphere
Client, see Chapter 6, “IBM Storage plug-in for vSphere” on page 125.
Integration with vVols. For more details about Integration with vVols, refer to Chapter 5,
“Embedded VASA Provider for Virtual Volumes (vVol)” on page 83.
Integration with VMware Site Recovery Manager. For more information about Integration
with VMware Site Recovery Manager, see Chapter 4, “Preparing for disaster recovery” on
page 41.
IBM Spectrum® Connect, a no-charge software solution that is available with all IBM
storage systems but is being deprecated. Transparent orange arrows in
Figure 1-3 depict non-preferred integrations through IBM Spectrum Connect. For more
information about IBM Spectrum Connect, see Chapter 8, “Integrating with VMware by
using IBM Spectrum Connect” on page 195.
Figure 2-1 Configuration and connectivity for the test environment that is used in this book
Volumes that are mapped to a host cluster are assigned to all members of the host cluster
by using the same Small Computer System Interface (SCSI) ID. Before this feature was
implemented in IBM Storage Virtualize, as an example, a VMware vSphere cluster would be
created as a single host object, containing all the worldwide port names (WWPNs) for the
hosts in the cluster, up to 32.
With host clusters, a storage administrator can define individual host objects for each ESXi
host and add them to a host cluster object that represents each vSphere cluster. If hosts are
later added to the host cluster, they automatically inherit shared host cluster volume
mappings. Similarly, when removing a host from a host cluster, you have the option to either
retain or remove any shared mappings. It also ensures that volumes are mapped with
consistent SCSI IDs across all host cluster members. Because of these factors, it is
recommended to use host clusters to simplify storage provisioning and management where
possible. You can have up to 128 hosts in a single host cluster object. Host clusters are easier
to manage than single host objects because the 32 worldwide port name (WWPN) limitation
is removed.
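As an illustration of this workflow, the following IBM Storage Virtualize CLI sketch creates host objects, groups them into a host cluster, and maps a volume to the cluster. The host names, WWPNs, and volume name are hypothetical, and parameter details can vary by code level.

# Create a host object for each ESXi server (names and WWPNs are examples only)
mkhost -name esxi01 -fcwwpn 2100000E1E30ACFC:2100000E1E30ACFD
mkhost -name esxi02 -fcwwpn 2100000E1E30AD10:2100000E1E30AD11
# Create a host cluster object that represents the vSphere cluster
mkhostcluster -name vsphere_cluster01
# Add the host objects to the host cluster
addhostclustermember -host esxi01 vsphere_cluster01
addhostclustermember -host esxi02 vsphere_cluster01
# Map a volume to the host cluster so that every member sees it with the same SCSI ID
mkvolumehostclustermap -hostcluster vsphere_cluster01 datastore_vol01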
The minimum size of a cluster is two nodes for vSphere high availability (HA) to protect
workloads if one host stops functioning. However, in most use cases, a 3-node cluster is more
appropriate because you have the option of using Distributed Resource Scheduler (DRS) and
of running maintenance tasks on an ESXi server without having to disable HA.
Configuring large clusters has benefits, too. You typically have a higher consolidation ratio,
but there might be a downside if you do not have enterprise-class or correctly sized storage in
the infrastructure. If a datastore is presented to a 32-node or a 64-node cluster and the virtual
machines (VMs) on that datastore are spread across the cluster, there is a chance of
SCSI-locking contention issues. Using a VMware vSphere Storage APIs Array Integration
(VAAI) aware array helps reduce this problem with Atomic Test and Set (ATS). However, if
possible, consider starting small and gradually growing the cluster size to verify that your
storage behavior is not impacted.
Figure 2-2 shows one of the VMware host clusters that was used in the test configuration for
this book. There are two hosts that are defined in the VMware host cluster.
Figure 2-2 VMware host clusters that were used in the test configuration
Note: Do not add Non-Volatile Memory Express (NVMe) hosts and SCSI hosts to the
same host cluster.
Figure 2-4 shows one of the hosts in the host cluster with the volumes that are connected to
it. The volumes were assigned to the host cluster, and not directly to the host. Any hosts that
are added to a host cluster have all of the volumes mapped to the host cluster automatically
assigned to the hosts.
Note: Private mappings can still be provisioned to individual hosts within a host cluster, for
example for storage area network (SAN) Boot configurations.
Figure 2-4 One of the hosts in the host cluster with the volumes that are connected to it
A common use case for throttles on hosts, host clusters, and volumes can be applied when
test and production workloads are mixed on the same IBM Storage Virtualize system.
Test-related workloads should not affect production, so you can throttle test hosts and
volumes to give priority to production workloads.
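A minimal sketch of this approach with the IBM Storage Virtualize CLI follows. The object names and limits are hypothetical; check the mkthrottle command reference for the throttle types and parameters available at your code level.

# Limit a test host to 200 MBps and 5000 IOPS (values are examples only)
mkthrottle -type host -host esxi_test01 -bandwidth 200 -iops 5000
# Limit a single test volume to 100 MBps
mkthrottle -type vdisk -vdisk test_vol01 -bandwidth 100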
IBM Storage Virtualize supports commands that are used for SCSI offload and VMware VAAI:
SCSI offload enables the host to offload some data operations to the storage system.
VAAI enables VMware hosts to also offload some operations to supported storage
systems.
Both technologies reduce traffic on the storage network, and load on the host. Hosts use
offload commands to perform tasks such as formatting new file systems or performing data
copy operations without the host having to read and write the data itself. Examples are the WRITE
SAME and XCOPY commands. IBM Storage Virtualize 8.1.0.0 introduced support for WRITE
SAME when UNMAP is enabled. WRITE SAME is a SCSI command that tells the storage
system to write the same pattern to a volume or an area of a volume.
When SCSI UNMAP is enabled on IBM Storage FlashSystem storage, it advertises this
situation to hosts. At versions 8.1.0.0 and later, some hosts respond to the UNMAP command
by issuing a WRITE SAME command, which can generate large amounts of I/O. If the
back-end storage system cannot handle the amount of I/O, volume performance can be
impacted. IBM Storage Virtualize offload throttling can limit the concurrent I/O that is
generated by the WRITE SAME or XCOPY commands.
When you enable offload throttling, the following bandwidth throttle values are recommended:
For systems that manage any enterprise or nearline storage, the recommended value is
100 MBps.
For systems managing only tier1 flash or tier0 flash, the recommended value is
1000 MBps.
Enable the offload throttle by using the following command line interface (CLI) command:
mkthrottle -type offload -bandwidth bandwidth_limit_in_MB
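For example, to apply the 100 MBps limit that is recommended for systems that manage enterprise or nearline storage:

mkthrottle -type offload -bandwidth 100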
IBM Storage Virtualize systems implement data reduction by using data reduction pools
(DRPs). A DRP can contain thin-provisioned or compressed volumes. DRPs also provide
more capacity to volumes in the pool by supporting data deduplication.
With a log-structured pool implementation, DRPs help to deliver more consistent performance
from compressed volumes. DRPs also support compression of all volumes in a system,
potentially extending the benefits of compression to all data in a system. Traditional storage
pools have a fixed allocation unit of an extent, and that does not change with DRPs. However,
features like thin provisioning and IBM Real-time Compression (RtC) use smaller allocation
units and manage this allocation with their own metadata structures. These features are
described as binary trees or Log Structured Arrays (LSAs).
For thin-provisioned volumes to stay thin, you must be able to reclaim capacity that is no
longer used. For LSAs, where all writes go to new capacity, you must be able to reclaim the
capacity that holds superseded data, which DRPs do through garbage collection.
Figure 2-5 shows the types of volumes that can be created in a DRP.
DRP fully allocated volumes provide the best performance for the IBM Storage
FlashSystem products, but storage efficiency and space savings are not realized.
Thin-compressed volumes provide storage-space efficiency with the best performance of
the four options for space-efficient volumes.
Figure 2-5 Types of volumes that can be created in a data reduction pool
Best practice: DRPs are suitable for scenarios where capacity savings are prioritized at
the cost of performance. For performance-sensitive workloads, ensure that sufficient
performance benchmarking has been completed before deploying DRPs throughout your
environment.
For more information about data reduction pools, see the Redbooks publication
Implementation Guide for IBM Storage FlashSystem and IBM SAN Volume Controller:
Updated for IBM Storage Virtualize Version 8.6, SG24-8542.
2.2.1 iSCSI
iSCSI connectivity is a software feature that is provided by the Storage Virtualize code. The
iSCSI protocol is a block-level protocol that encapsulates SCSI commands into Transmission
Control Protocol/Internet Protocol (TCP/IP) packets. Therefore, iSCSI uses an IP network
rather than requiring the Fibre Channel (FC) infrastructure. For more information about the
iSCSI standard, see Request for Comment (RFC) 3720.
An iSCSI client, which is known as an iSCSI initiator, sends SCSI commands over an IP
network to an iSCSI target. A single iSCSI initiator or iSCSI target is called an iSCSI node.
You can use the following types of iSCSI initiators in host systems:
Software initiator. Available for most operating systems (OSs), including IBM AIX, Linux,
and Windows.
Hardware initiator. Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI host bus adapter (HBA).
Ensure that the iSCSI initiators and targets that you plan to use are supported. Use the
following sites for reference:
IBM Storage FlashSystem 8.6 Support Matrix
IBM Storage FlashSystem 9x00 8.6
IBM System Storage Interoperation Center (SSIC)
An alias string can also be associated with an iSCSI node. The alias enables an organization
to associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.
Important: The cluster name and node name form part of the IQN. Changing any of them
might require reconfiguration of all iSCSI nodes that communicate with IBM Storage
FlashSystem.
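As a hedged sketch of the software-initiator path on ESXi, the following esxcli commands enable the software iSCSI adapter and point it at a storage system iSCSI port. The adapter name and IP address are examples only.

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# Identify the software iSCSI adapter name (for example, vmhba65)
esxcli iscsi adapter list
# Add a send-target discovery address for an iSCSI port on the IBM Storage FlashSystem (example IP)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.10:3260
# Rescan the adapter to discover the mapped volumes
esxcli storage core adapter rescan --adapter=vmhba65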
2.2.2 iSER
IBM Storage FlashSystems that run IBM Storage Virtualize v8.2.1 or later support iSER for
host attachment, which is implemented by using RDMA over Converged Ethernet (RoCE) or
Internet Wide-Area RDMA Protocol (iWARP). This feature enables a fully Ethernet-based
infrastructure, without Fibre Channel, in your data center for the following functions:
IBM Storage FlashSystem internode communication with 2 or more IBM Storage
FlashSystem in a cluster.
HyperSwap.
Using iSER requires that an Ethernet adapter is installed in each node, and that dedicated
Remote Direct Memory Access (RDMA) ports are used for internode communication. RDMA
enables the Ethernet adapter to transfer data directly between nodes. The direct transfer of
data bypasses the central processing unit (CPU) and cache and makes transfers faster.
Compared to SCSI, NVMe offers performance improvements in I/O operations and latency.
This is achieved through features such as multiple, deep command queues and a streamlined
architecture. SCSI can also support multiple queues with features like blk_mq, but NVMe
provides a more efficient and robust approach. Additionally, NVMe's architecture is better
optimized for using multi-core processors for further performance gains.
NVMe is designed to have up to 64 thousand queues. In turn, each of those queues can have
up to 64 thousand commands that are processed simultaneously. This queue depth is much
larger than what SCSI typically has. NVMe also streamlines the list of commands to only the
basic commands that Flash technologies need.
IBM Storage FlashSystem implements NVMe by using the NVMe over Fibre Channel
protocol. NVMe over Fibre Channel uses the Fibre Channel protocol as the transport so that
data can be transferred from host memory to the target, which is similar to RDMA. For more
information about NVMe, see IBM Storage and the NVM Express Revolution, REDP-5437.
Every physical FC port on IBM Storage FlashSystem storage supports four virtual ports: one
for SCSI host connectivity, one for NVMe over Fibre Channel host connectivity, one for SCSI
host failover, and one for NVMe over Fibre Channel host failover. Every NVMe virtual port
supports the functions of NVMe discovery controllers and NVMe I/O controllers. Hosts create
associations, NVMe logins, to the discovery controllers to discover volumes or to I/O
controllers to complete I/O operations on NVMe volumes. Up to 128 discovery associations
are allowed per node, and up to 128 I/O associations are allowed per node. An extra 128
discovery associations and 128 I/O associations per node are allowed during N_Port ID
virtualization (NPIV) failover.
At the time of this writing, IBM Storage FlashSystem 9200 8.6 supports a maximum of 64
NVMe hosts in a four I/O group configuration. However, a single I/O group supports a
maximum of 16 NVMe hosts. For more information, see V8.6.0.x Configuration Limits and
Restrictions for IBM Storage FlashSystem 9100 and 9200.
If NVMe over Fibre Channel is enabled on the IBM Storage FlashSystem, each physical
WWPN reports up to four virtual WWPNs. Table 2-1 lists the NPIV ports and port usage when
NVMe over Fibre Channel is enabled.
Table 2-1 NPIV ports and port usage when NVMe over Fibre Channel is enabled
Primary Port: The WWPN that communicates with other nodes in the cluster and with back-end storage, if the IBM Storage Virtualize system is virtualizing any external storage.
SCSI Host Attach Port: The virtual WWPN that is used for SCSI attachment to hosts. This WWPN is a target port only.
Failover SCSI Host Port: The standby WWPN that is brought online only if the partner node in an I/O group goes offline. This WWPN is the same WWPN as the primary host WWPN of the partner node.
NVMe Host Attach Port: The WWPN that communicates with hosts for NVMe over Fibre Channel. This WWPN is a target port only.
Failover NVMe Host Attach Port: The standby WWPN that is brought online only if the partner node in an I/O group goes offline. This WWPN is the same WWPN as the primary host WWPN of the partner node.
For more information about NVMe over Fibre Channel and configuring hosts to connect to
IBM Storage FlashSystem storage systems by using NVMe over Fibre Channel, see VMware
ESXi installation and configuration for NVMe over Fibre Channel hosts.
Figure 2-6 shows how to add an NVMe over Fibre Channel host to an IBM Storage
FlashSystem from the Add Host window.
Figure 2-6 Adding an NVMe over Fibre Channel host to IBM Storage FlashSystem
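The same result can be achieved from the IBM Storage Virtualize CLI. The following single-line sketch is an assumption: the host name and NVMe Qualified Name (NQN) are hypothetical, and the exact -protocol keyword for FC-NVMe hosts depends on the code level, so confirm the syntax in the mkhost command reference for your release.

# Hypothetical example; verify the protocol keyword for your code level
mkhost -name esxi_nvme01 -nqn nqn.2014-08.org.nvmexpress:uuid:aaaabbbb-cccc-dddd-eeee-ffff00001111 -protocol fcnvme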
Figure 2-7 Discovering the IBM Storage FlashSystem NVMe controller on VMware
NVMe devices are managed by the VMware high-performance plug-in (HPP). To see the
NVMe devices, run the esxcli storage hpp device list command by using ESXCLI.
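The following esxcli commands can also help verify NVMe connectivity from the host side; output differs by environment:

esxcli nvme adapter list      # NVMe-capable adapters on the host
esxcli nvme controller list   # Discovery and I/O controllers that the host is connected to
esxcli nvme namespace list    # NVMe namespaces (volumes) that are presented to the host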
The IBM Storage Virtualize software that runs on IBM Storage FlashSystem uses the SCSI
protocol to communicate with its clients, and presents storage space in the form of SCSI
logical units (LUs) identified by SCSI logical unit numbers (LUNs).
Note: Strictly speaking, LUs and LUNs are different entities. In practice, the term LUN is
often used to refer to a logical disk or LU.
Most applications do not directly access storage but instead work with files or records.
Therefore, the OS of a host must convert these abstractions to the language of storage, which
are vectors of storage blocks that are identified by logical block addresses within an LU. In
IBM Storage Virtualize, each of the externally visible LUs is internally represented by a
volume, which is an amount of storage that is taken out of a storage pool. Hosts use the SCSI
protocol to send I/O commands to IBM Storage FlashSystem storage to read and write data
to these LUNs.
As with NVMe over Fibre Channel host attachment, if NPIV is enabled on the IBM Storage
FlashSystem storage system, hosts attach to a virtual WWPN. Table 2-1 on page 15 lists the
SCSI and Failover Host Attach Ports.
VMware V7.0u2 and later supports RoCE v2 as host connectivity for IBM Storage Virtualize
8.5 storage systems.
RDMA-capable network interface cards (RNICs) can use RDMA over Ethernet through RoCE encapsulation. RoCE wraps standard
InfiniBand payloads with Ethernet or IP over Ethernet frames, which is sometimes called
InfiniBand over Ethernet. There are two main RoCE encapsulation types:
1. RoCE v1
Uses dedicated Ethernet Protocol Encapsulation to send Ethernet packets between
source and destination MAC addresses by using Ethertype 0x8915.
2. RoCE v2
Uses dedicated UDP over Ethernet Protocol Encapsulation to send IP UDP packets by
using port 4791 between source and destination IP addresses. UDP packets are sent over
Ethernet by using source and destination MAC addresses.
RoCE v2 is not compatible with other Ethernet options, such as RoCE v1.
For more information about configuring the VMware ESXi for NVMe over RDMA on
IBM Storage FlashSystem storage systems, see NVMe over RDMA and NVMe over TCP host
attachments.
VMware vSphere V7.0u3 and later supports NVMe over TCP as host connectivity for
IBM Storage Virtualize 8.6 storage systems.
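As a hedged illustration, NVMe over TCP discovery and connection can be driven from esxcli as shown below. The adapter name, IP address, ports, and NQN are placeholders; follow the IBM documentation referenced above for the full procedure.

# Discover NVMe subsystems behind a storage system IP address (adapter and IP are examples)
esxcli nvme fabrics discover -a vmhba66 -i 192.168.20.10 -p 8009
# Connect to a discovered subsystem by its NQN (value shown is a placeholder)
esxcli nvme fabrics connect -a vmhba66 -i 192.168.20.10 -p 4420 -s nqn.1986-03.com.ibm:nvme:2145.example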
Tip: The VMware preferred flag can be set on a path. This flag is not applicable if the path
selection policy is set to Most Recently Used.
Fixed
The Fixed policy uses the designated preferred path flag if it is configured. Otherwise, it uses
the first working path that is discovered at system start time. If the ESXi host cannot use the
preferred path or it becomes unavailable, the ESXi host selects an alternative available path.
The host automatically returns to the previously defined preferred path when it becomes
available. This policy is the default for LUNs that are presented from an active/active storage
array.
Round-Robin
The Round-Robin policy is the recommended policy for IBM Storage Virtualize systems. This
path selection policy uses a round-robin algorithm to balance the I/O load across all active
paths to the storage array. This policy is the default for VMware starting with ESXi 5.5.
Data can travel through only one path at a time for a single volume:
For active/passive storage arrays, only the paths to the preferred storage controller are
used.
For an active/active storage array, all paths are used for transferring data, assuming that
all paths are available.
With Asymmetric Logical Unit Access (ALUA) in an active/active storage array, such as the
IBM Storage FlashSystem 9100, 9200 and 9500 systems, only the optimized paths to the
preferred control enclosure node are used for transferring data. Round-Robin cycles through
only those optimized paths. Configure pathing so that half the LUNs are preferred by one
control enclosure node, and the other half are preferred by the other control enclosure node.
The default path selection limit is IOPS, and the default value is 1000 IOPS before the path
changes. In some cases, a host can experience latency to storage with no latency seen on
the SAN. In these cases, the load of 1000 IOPS saturates the bandwidth of the path.
Lowering this value can increase storage performance and help minimize the impact of path
failure or node outages during firmware upgrade or hardware maintenance. The
recommended path-selection limit setting for IBM Storage Virtualize systems is to use IOPS
and set the value to 1. For more information about the IOPS limit, see Adjusting Round Robin
IOPS limit from default 1000 to 1 (2069356).
Example 2-1 Creating a claim rule for an IBM Storage Virtualize system to set the path selection limit to 1
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V IBM -M "2145" -c tpgs_on --psp="VMW_PSP_RR"
-e "IBM arrays with ALUA support" -O "iops=1"
Figure 2-8 Configuring a claim rule by using the Host Profile window
Note: Existing and previously presented devices must be manually set to Round-Robin
with an IOPS limit of 1. Optionally, the ESXi host can be restarted so that it can inherit the
multipathing configuration that is set by the new rule.
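For an already-presented device, the policy and IOPS limit can be adjusted as follows; the device identifier is hypothetical:

# Set the path selection policy of an existing device to Round Robin
esxcli storage nmp device set --device=naa.600507680c800000a000000000000001 --psp=VMW_PSP_RR
# Set the Round Robin IOPS limit for that device to 1
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.600507680c800000a000000000000001 --type=iops --iops=1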
In vSphere 7.0 Update 1 and earlier, NMP remains the default plug-in for local NVMe devices,
but you can replace it with HPP. Starting with vSphere 7.0 Update 2, HPP becomes the
default plug-in for local NVMe and SCSI devices, but you can replace it with NMP.
By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler
might create internal queuing, which is not efficient with the high-speed storage devices.
You can configure the latency sensitive threshold and enable the direct submission
mechanism that helps I/O to bypass the scheduler. With this mechanism enabled, the I/O
passes directly from Pluggable Storage Architecture (PSA) through the HPP to the device
driver.
For the direct submission to work properly, the observed average I/O latency must be less
than the latency threshold that you specify. If the I/O latency exceeds the latency threshold,
the system stops the direct submission, and temporarily reverts to using the I/O scheduler.
The direct submission is resumed when the average I/O latency drops below the latency
threshold again.
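A minimal sketch of enabling this mechanism for one device follows; the device identifier and the 10 ms threshold are illustrative, and option names can vary by ESXi release:

# Show the current latency thresholds
esxcli storage core device latencythreshold list
# Set the latency sensitive threshold for a device (value in milliseconds)
esxcli storage core device latencythreshold set -d eui.70000000000004b5005076081280000c -t 10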
Note: HPP does not provide a performance benefit on systems that are not capable of at least 200,000 IOPS.
Example 2-2 shows how to list the devices that are controlled by the HPP.
Example 2-2 Output of using the esxcli storage hpp device list command
[root@localhost:~] esxcli storage hpp device list
eui.70000000000004b5005076081280000c
Device Display Name: NVMe Fibre Channel Disk (eui.70000000000004b5005076081280000c)
Path Selection Scheme: LB-RR
Path Selection Scheme Config: {iops=1,bytes=10485760;}
Current Path: vmhba64:C0:T0:L3
Working Path Set: vmhba64:C0:T0:L3, vmhba65:C0:T0:L3
Is SSD: true
Is Local: false
Paths: vmhba64:C0:T0:L3, vmhba65:C0:T1:L3, vmhba65:C0:T0:L3, vmhba64:C0:T1:L3
Use ANO: false
To support multipathing, HPP uses the Path Selection Schemes (PSSs) when you select
physical paths for I/O requests.
Fixed
With this scheme, a designated preferred path is used for I/O requests. If the preferred path is
not assigned, the host selects the first working path that is discovered at start time. If the
preferred path becomes unavailable, the host selects an alternative available path. The host
returns to the previously defined preferred path when it becomes available again.
When you configure FIXED as a path selection mechanism, select the preferred path.
The preferred method is to use only WWPN zoning. Do not mix zoning types. WWPN zoning
is more flexible than switch port zoning and is required if the IBM Storage Virtualize NPIV
feature is enabled. Switch port zoning can cause failover of the NPIV ports to not work
correctly, and in certain configurations can cause a host to be connected to the IBM Storage
FlashSystem on both the physical and virtual WWPNs.
For more information about the NPIV feature and switch port zoning, see Using Switch
Port-Based Zoning with the IBM Spectrum Virtualize NPIV Feature.
A common misconception is that WWPN zoning provides poorer security than port zoning.
However, modern SAN switches enforce the zoning configuration directly in the switch
hardware. Also, you can use port-binding functions on SAN switches so that a WWPN is
connected to a particular SAN switch port. Port binding also prevents unauthorized devices
from logging in to your fabric if they are connected to switch ports. Lastly, the default zone on
each of your virtual fabrics should have a zone policy of deny, which means that any device in
the default zone cannot communicate with any other device on the fabric. All unzoned devices
(that are not in at least one named zone) are in the default zone.
Naming convention
When you create and maintain a Storage Network zoning configuration, you must have a
defined naming convention and zoning scheme. If you do not define a naming convention and
zoning scheme, your zoning configuration can be difficult to understand and maintain.
Environments have different requirements, which means that the level of detailing in the
zoning scheme varies among environments of various sizes. Therefore, ensure that you have
an understandable scheme with an appropriate level of detailing for your environment. Then,
use it consistently whenever you change the environment.
Aliases
Use aliases when you create your IBM Storage Virtualize system zones. Aliases make your
zoning easier to configure and understand and minimize errors. Define aliases for the
physical WWPNs, the SCSI-FC WWPNs, and the NVMe over Fibre Channel WWPNs if you
have that feature enabled.
You should have the following zones:
A zone containing all of the IBM Storage Virtualize system aliases for the physical
WWPNs that are dedicated for internode use.
A zone containing all of the IBM Storage Virtualize system aliases for both the local and
remote IBM Storage Virtualize system physical WWPNs that are dedicated for partner use
if replication is enabled.
One zone for each host initiator containing the alias for the host initiator and either the
IBM Storage Virtualize system SCSI-FC virtual WWPNs, or the NVMe over Fibre Channel
virtual WWPNs, depending on which type of host attachment the host is using. For an
alternative to this approach, see “Multi-initiator zoning” on page 23.
One zone per storage system containing the aliases for the storage system and the
IBM Storage Virtualize system physical WWPNs if the IBM Storage FlashSystem is
virtualizing storage.
Tip: If you have enough IBM Storage Virtualize system ports available and you have many
hosts that you are connecting to an IBM Storage Virtualize system, you should use a
scheme to balance the hosts across the ports on the IBM Storage Virtualize system. You
can use a simple round-robin scheme, or you can use another scheme, such as numbering
the hosts with the even-numbered hosts zoned to the even-numbered ports and the
odd-numbered hosts zoned to the odd-numbered ports. Whichever load-balancing scheme
that you choose to use, you should ensure that the maximum number of paths from each
host to each volume is four paths. The maximum supported number is eight paths for
volumes and sixteen paths for HyperSwap volumes. The recommended number is four
paths per volume.
Important: Mixing different connectivity protocols in the same host cluster is not
supported. Avoid configuring vSphere clusters with mixed storage connectivity.
IBM Storage Virtualize volume mappings cannot be shared between, for example, SCSI
and NVMe hosts.
For SAN zoning best practices, see IBM Support SAN Zoning Best Practices.
Multi-initiator zoning
For host clusters such as VMware, it is desirable to have all hosts in the cluster in the same
zone because it makes administration and troubleshooting easier. This setup can cause
issues where a malfunctioning host affects all other hosts in the zone. Traditional
best-practice zoning is to have only one initiator (host) per zone.
In recent years, Brocade released the Peer Zoning feature. Cisco released a similar feature
that is called Smart Zoning. Both features allow multiple initiators to be in the same zone, but
prevent them from connecting to each other. They can connect only to target ports in the
zone, which allows multiple hosts to be in the same zone, but prevents the issue of a
malfunctioning host port from affecting the other ports.
For VMware clusters, the preferred zoning configuration is to have the ports for all of the hosts
in the cluster in a zone with the IBM Storage FlashSystem virtual WWPN.
Brocade Peer zoning must be enabled for this zone on Brocade fabrics. Brocade Peer
Zoning was introduced in FOS v7.4.x.
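The following Brocade FOS sketch creates such a peer zone; the zone, alias, and configuration names are assumptions, with the IBM Storage FlashSystem NPIV host-attach aliases as the principal members:

# Create a peer zone with the storage NPIV aliases as principals and the ESXi host ports as members
zonecreate --peerzone "vmware_cluster01_fs01" -principal "fs01_n1_scsi_npiv;fs01_n2_scsi_npiv" -members "esxi01_p1;esxi02_p1"
# Add the zone to the active configuration and enable it
cfgadd "production_cfg", "vmware_cluster01_fs01"
cfgenable "production_cfg"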
Large-scale workloads with intensive I/O patterns require adapter queue depths greater than
the PVSCSI default values. At the time of writing, the PVSCSI queue depth default values are
64 for device and 254 for adapter. You can increase PVSCSI queue depths to 254 for device
and 1024 for adapter inside a Windows or Linux VM.
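The following sketch shows commonly documented ways to raise the PVSCSI queue depths inside the guest; treat the exact parameter names as assumptions and confirm them against current VMware guidance for your guest OS before use.

# Linux guest: add PVSCSI module parameters to the kernel boot options (for example, in the GRUB configuration)
vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32
# Windows guest: add a registry value and then reboot the VM
REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"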
When you work with VMFS datastores, consider the following items:
Datastore extents
Do not span more than one extent in a datastore. The recommendation is to have a 1:1
ratio between the datastore and the volume.
Block size
The block size on a VMFS datastore defines the maximum file size and the amount of
space a file occupies. VMFS5 and VMFS6 datastores support the block size of 1 MB.
Storage vMotion
Storage vMotion supports migration across VMFS, virtual storage area network (VSAN),
and VMware vSphere Virtual Volume (vVol) datastores. A vCenter Server performs
compatibility verification to validate Storage vMotion across different types of datastores.
Storage Distributed Resource Scheduler (SDRS)
VMFS5 and VMFS6 can coexist in the same datastore cluster. However, all datastores in
the cluster must use homogeneous storage devices. Do not mix devices of different
formats within the same datastore cluster.
Device Partition Formats
A new VMFS5 or VMFS6 datastore uses a globally unique identifier (GUID) partition table
to format the storage device. You can use the GUID partition table (GPT) format to create
datastores larger than 2 TB. If your VMFS5 datastore was upgraded from VMFS3, it
continues to use the master boot record (MBR) partition format, which is characteristic for
VMFS3. Conversion to GPT occurs only after you expand the datastore to a size larger
than 2 TB.
Starting with vSphere 6, VMware introduced I/O reservations with SIOC. When reservations
are used, the same I/O injector that is used for checking latency also samples the I/O
operations per second (IOPS) capabilities of a datastore. When the configured IOPS
reservation that is set on the VMs exceeds the observed IOPS capabilities of that datastore,
IOPS are distributed to the VMs proportionally to their percentage of the number of set
reservations.
The lowest value that you can set is 5 milliseconds. The default is 30 ms. Typically, you cannot
reach this value with IBM Storage FlashSystem because it runs IOPS in microseconds.
However, if the specified latency is reached, SIOC acts to reduce latency to acceptable levels.
For critical systems, the usual recommendation is to not employ limits or throttling on the VMs
resources. Even though SIOC falls into the throttling category, it also provides a fail-safe for
unavoidable and unpredictable contention. This function might be helpful when there are
multiple VMDKs that share a datastore for manageability reasons.
For more information, see Storage I/O control for critical apps is a great idea.
Note: The goal of Storage Distributed Resource Scheduler (SDRS) I/O load balancing is to fix
long-term, prolonged I/O imbalances, whereas VMware vSphere SIOC addresses short-term
bursts in load.
You do not need to adjust the threshold setting in most environments. If you change the
congestion threshold setting, set the value based on the following considerations:
A higher value typically results in higher aggregate throughput and weaker isolation.
Throttling does not occur unless the overall average latency is higher than the threshold.
If throughput is more critical than latency, do not set the value too low. For example, for
Fibre Channel disks, a value below 20 ms might lower peak disk throughput. A high value
of more than 50 ms might allow high latency without significant gain in overall throughput.
A lower value results in lower device latency and stronger VM I/O performance isolation.
Stronger isolation means that the shares controls are enforced more often. Less device
latency translates into lower I/O latency for the VMs with the highest shares, at the cost of
higher I/O latency experienced by the VMs with fewer shares.
A low value of less than 20 ms results in lower device latency and isolation among I/Os at
the potential cost of a decrease in aggregate datastore throughput.
Setting the value high or low results in poor isolation.
Easy Tier is an automated process running in IBM Storage Virtualize which constantly
monitors the read/write activity on all volumes within a given storage pool. Typically, the
system automatically and non-disruptively moves frequently accessed data, referred to as hot
data, to a faster tier of storage. For example, the hot data is moved from lower-speed,
high-capacity storage to flash-based storage MDisks. To accommodate this, rarely accessed
data, cold data, is demoted to a lower tier thus optimizing the I/O flow of the system as a
whole.
Note: When you use both Easy Tier and SDRS, the behavior of SDRS can adversely affect
the Easy Tier algorithm, which relies on a heat map, and can unexpectedly impact
performance.
If the latency for a datastore exceeds the threshold default of 15 ms over a percentage of
time, SDRS migrates VMs to other datastores within the datastore cluster until the latency is
below the threshold limit. SDRS might migrate a single VM or multiple VMs to reduce the
latency for each datastore below the threshold limit. If SDRS is unsuccessful in reducing
latency for a datastore, then at a minimum it tries to balance the latency among all datastores
within a datastore cluster.
When I/O metrics are enabled, SIOC is enabled on all datastores in the datastore cluster.
Note: The I/O latency threshold for SDRS should be lower than or equal to the SIOC
congestion threshold.
Powered-on VMs with snapshots are not considered for space balancing.
Tip: As a general recommendation, consider using a datastore cluster or SDRS whenever
possible. However, make sure to disable latency-based rules in an Easy Tier environment.
SDRS simplifies VM placement when creating, deploying, or cloning VMs. SDRS provides
recommendations for balancing on space and I/O. In manual mode, recommendations can
be applied on a case-by-case basis.
With RDM, a VM can access and use a storage LUN directly while keeping some of the
manageability advantages of a virtual disk in VMFS.
Figure 3-1 shows an illustration of RDM. An RDM is a mapping file on a VMFS volume that
acts as a symbolic link to a raw LUN.
An RDM offers several benefits, but it should not be used in every situation. In general,
virtual disk files are preferred over RDMs for manageability purposes.
The most common use case for RDM is Microsoft Cluster Server (MSCS). In an MSCS
clustering scenario that spans multiple hosts, which can be a mix of virtual and physical
nodes, the cluster data and quorum disks must be configured as RDMs.
Another common use case is the use of storage area network (SAN) management agents within a
virtual machine, which requires physical compatibility mode.
There are two possible RDM modes:
1. Virtual. In virtual compatibility mode, the RDM appears to the guest operating system the same as a virtual disk in a VMFS,
and the VMkernel sends reads and writes through the mapping file rather than allowing the VM to access the
physical device directly. Features such as VMware snapshots are available in this mode.
2. Physical. Physical compatibility mode provides more control over the physical LUN because most SCSI commands
are passed directly to the device. However, VMware snapshots are not supported, and you cannot
convert the RDM disk to a VMFS virtual disk by using Storage vMotion.
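RDMs are usually added through the vSphere Client when you add a hard disk to a VM, but a mapping file can also be created manually from the ESXi shell with vmkfstools. The following commands are a sketch only; naa.xxx stands for the device identifier of the LUN, and the datastore and file names are placeholders:
# Create a virtual compatibility mode RDM mapping file
vmkfstools -r /vmfs/devices/disks/naa.xxx /vmfs/volumes/Datastore01/VM1/VM1-rdm.vmdk
# Create a physical compatibility (pass-through) mode RDM mapping file
vmkfstools -z /vmfs/devices/disks/naa.xxx /vmfs/volumes/Datastore01/VM1/VM1-rdmp.vmdk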
The VMFS datastore is hosted by a single volume on a storage system, such as the
IBM Storage FlashSystem 9200. A single VMFS datastore can have hundreds or even
thousands of VMDKs.
VMware vVols provide a one-to-one mapping between the VM’s disks and the volumes that
are hosted by the storage system. These vVols are wholly owned by the VM. Making the
vVols available at the storage level enables storage system-based operations at the granular
VM level. For example, capabilities such as compression and encryption can be applied to an
individual VM. Similarly, IBM FlashCopy® can be used at the vVol level when you perform
snapshot and clone operations.
The integration of vVols with IBM Storage Virtualize storage systems is dependent upon the
vSphere application programming interfaces (APIs) for Storage Awareness (VASA). These
APIs facilitate VM-related tasks that are initiated at the vSphere level to be communicated
down to the storage system.
IBM support for VASA was originally provided by IBM Spectrum Connect, but for IBM Storage
Virtualize 8.5.1.0 or later the VASA functionality is serviced by the new Embedded VASA
Provider. IBM Spectrum Connect is an out-of-band VASA Provider, which enables the
communication between vSphere and the storage system along the control plane.
Note: IBM Spectrum Connect is still required to use vVols on IBM FlashSystem 5015,
5035 or 5045 platforms.
IBM Storage FlashSystem manages vVols at the storage level and enables the flexible and
dynamic provisioning of VM storage that is required of a truly software-defined storage
environment.
For information about how to implement vVols with IBM Storage FlashSystem, see Chapter 8,
“Integrating with VMware by using IBM Spectrum Connect” on page 195.
You can use vSphere policies, for example, Gold, Silver, and Bronze, to select the appropriate
class of storage when you provision VMs.
Figure 3-2 A vVols environment where parent and child pools are segregated by drive class
With the introduction of vVols, you can define a range of storage services on IBM Spectrum
Connect, so policies can become far more granular and useful than the simple Gold, Silver,
and Bronze model.
Each of the policies (gold, silver, and bronze) can be further subdivided. For example, the
solid-state drive (SSD) parent pool can be divided into two distinct child pools. One child pool
is linked to an encrypted storage service, and the other is associated with an unencrypted
storage service. This approach provides the vSphere administrators with the flexibility to
provision VMs on storage that matches the requirements of the application, on a per-VM
basis.
Because vVols are a special type of volume, Easy Tier can manage their extents in the same
way that it manages the extents of standard volumes.
A hot or frequently used extent of a vVol is promoted to faster storage, such as SSD.
A cold or infrequently used extent of a vVol is moved onto slower drives.
A vVols implementation that takes advantage of Easy Tier can provide greater simplicity for
the storage administrator. By defining a child pool within an Easy Tier enabled parent pool,
the storage system is enabled to manage the extents of any vVols created therein.
This flexibility removes the requirement for a choice of storage class when the vSphere
administrator initially provisions the VM. Such an approach can also minimize the need for
Storage vMotion tasks because Easy Tier eliminates the requirement to manually migrate
vVols onto faster or slower storage as the needs of an application change.
Figure 3-3 demonstrates a vVols configuration, based on a single parent pool, with Easy Tier
enabled.
Note: Easy Tier also provides benefits within a single-tiered pool. When enabled, Easy
Tier automatically balances the load between managed disks (MDisks) to optimize
performance.
Figure 3-3 The simplified approach to vVols provisioning that can be implemented by enabling Easy
Tier
VAAI improves ESXi performance by offloading work to the storage system: data does not have to
traverse the storage area network (SAN) fabric, and fewer host central processing unit (CPU) cycles
are needed because the copy is not handled by the host.
The following types of operations are supported by the VAAI hardware acceleration for
IBM Storage FlashSystem:
Atomic Test and Set (ATS), which is used during the creation and locking of files on the
VMFS volume
Clone Blocks, Full Copy, and extended copy (XCOPY), which is used to copy or migrate
data within the same physical array
Zero Blocks/Write Same, which is used when creating VMDKs with an eager-zeroed thick
provisioning profile
SCSI UNMAP, which is used to reclaim storage space
ATS is a standard T10 SCSI command with opcode 0x89, which is SCSI Compare and Write
(CAW). The ATS primitive has the following advantages where LUNs are used by multiple
applications or processes at one time:
Significantly reduces SCSI reservation contentions by locking a range of blocks within a
LUN rather than issuing a SCSI reservation on the entire LUN
Enables parallel storage processing
Reduces latency for multiple ESXi hosts accessing the same LUN during common
operations
Increases cluster scalability by greatly extending the number of ESXi hosts and VMs that
can viably reside simultaneously on a VMFS datastore
Note: All newly formatted VMFS5 and VMFS6 datastores use the ATS-only mechanism if
the underlying storage supports it. SCSI reservations are never used.
VMFS3 volumes that are upgraded to VMFS5 do not switch to the ATS-only mechanism
automatically and must be upgraded manually. It might be easier to deploy a new datastore
and migrate the VMs to it.
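The locking mechanism that a datastore uses can be checked, and an eligible datastore can be switched to ATS-only, with esxcli commands similar to the following sketch. Datastore01 is an example name; confirm the full procedure and prerequisites in the VMware documentation before changing the lock mode:
esxcli storage vmfs lockmode list
esxcli storage vmfs lockmode set --ats --volume-label=Datastore01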
Note: The use of ATS heartbeating is not recommended on the following platforms:
ESXi hosts that run version 5.5 update 2 or later
ESXi version 6.0 before update 3
During high-latency events, ATS heartbeats might timeout, which results in ATS miscompare
errors. If multiple heartbeat attempts fail, the ESXi host might lose access to the datastore in
which timeouts are observed.
ATS heartbeating increases the load on the system and can lead to access issues on busy
systems, particularly during maintenance procedures. To reduce this load, ATS heartbeats
can be disabled.
For VMware vSphere versions 5.5 and 6.0, the recommendation is to disable ATS
heartbeating because of host-disconnect issues.
To disable ATS heartbeats, run the following command:
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
For VMware vSphere versions 6.5, 6.7 and 7.0, the recommendation is to enable ATS
heartbeating.
To enable ATS heartbeats, run the following command-line interface (CLI) command:
esxcli system settings advanced set -i 1 -o /VMFS3/UseATSForHBOnVMFS5
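To confirm the current value of this setting on a host before or after changing it, a query similar to the following can be used:
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5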
Similarly, XCOPY reduces the volume of traffic that moves through the SAN when a VM is
deployed or migrated. It does so by mapping individual VM-level or file system operations, including
clone and migration activities, to physical storage-level operations at the granularity of
individual blocks on the devices. The potential scope in the context of the storage is both
within and across LUNs.
The SCSI opcode for XCOPY is 0x83. As a best practice, set the XCOPY transfer size to 4096,
as shown in Example 3-1.
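The XCOPY transfer size corresponds to the DataMover.MaxHWTransferSize advanced setting, which is specified in KB. As a sketch, it can be queried and set with esxcli commands similar to the following:
esxcli system settings advanced list -o /DataMover/MaxHWTransferSize
esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 4096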
The WRITE_SAME 0x93 SCSI command allows the IBM Storage FlashSystem to minimize
internal bandwidth consumption. For example, when provisioning a VMDK file with the
eagerzeroedthick specification, the Zero Block’s primitive issues a single WRITE_SAME
command. The single WRITE_SAME command replicates zeros across the capacity range that is
represented by the difference between the provisioned capacity of the VMDK and the
capacity that is consumed by actual data. The alternative to using the WRITE_SAME command
requires the ESXi host to issue individual writes to fill the VMDK file with zeros. The same
applies when cloning or running storage vMotion of a VM with eager-zeroed thick VMDKs.
The scope of the Zero Block’s primitive is the VMDK creation within a VMFS datastore.
Therefore, the scope of the primitive is generally within a single LUN on the storage
subsystem, but it can potentially span LUNs backing multi-extent datastores.
Note: In thin-provisioned volumes, IBM Storage FlashSystem further augments this benefit
by flagging the capacity as zeroed in metadata without the requirement to physically write
zeros to the cache and the disk, which implies even faster provisioning of the eager-zeroed
VMDKs.
When an IBM Storage FlashSystem receives a SCSI UNMAP command, it overwrites the
relevant region of the volume with all-zero data, which allows thin-provisioned storage
controllers, such as the IBM Storage FlashSystem, to reclaim physical capacity through
garbage collection.
The main benefit is that this action helps prevent a thin-provisioning storage controller from
running out of free capacity for write I/O requests. When thin-provisioned storage controllers
are used, SCSI UNMAP should normally be left enabled.
With lower-performing storage, such as nearline arrays, extra I/O workload can be generated,
which can increase response times.
To enable SCSI UNMAP, run the following command on IBM Storage FlashSystem:
chsystem -hostunmap on
Enabling SCSI UNMAP does not change the behavior of volumes that were created before the
command was run. As a best practice, create datastores on newly created volumes, migrate the
data by using Storage vMotion, and then delete the old volumes.
Host UNMAP commands can increase the free capacity that is reported by the data reduction
pool (DRP) when received by thin-provisioned or compressed volumes. SCSI UNMAP
commands are also sent to internal FlashCore Modules (FCMs) to free physical capacity.
For more information, see SCSI Unmap support in IBM Spectrum Virtualize systems.
For more information about DRPs, see Introduction and Implementation of Data Reduction
Pools and Deduplication, SG24-8430.
Inside the VM, storage space is freed when you delete files on the thin virtual disk. Similarly,
space that is left behind when you delete or remove files from a VMFS datastore can be freed
within the file system. However, this free space remains allocated on the storage device until the
file system releases, or unmaps, it. The unmap operation helps the storage array reclaim the
unused capacity.
On VMFS6 datastores, ESXi supports the automatic asynchronous reclamation of free space.
VMFS6 can run the UNMAP command to release free storage space in the background on
thin-provisioned storage arrays that support unmap operations. Asynchronous unmap
processing has several advantages:
Unmap requests are sent at a rate that can be throttled in vSphere, which helps to avoid
any instant load on the backing array.
Freed regions are batched and unmapped together.
I/O performance of other workloads is not impacted by the UNMAP command.
For information about the space-reclamation parameters for VMFS6 datastores, see Space
Reclamation Requests from VMFS Datastores.
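The space-reclamation settings of a VMFS6 datastore can be reviewed and adjusted from the ESXi shell with commands similar to the following sketch, where Datastore01 is an example name and the available options vary by ESXi release:
esxcli storage vmfs reclaim config get --volume-label=Datastore01
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low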
VMs that use VMFS5 typically cannot pass the UNMAP command directly to the array. You
must run the esxcli storage vmfs unmap command to trigger unmaps from the IBM Storage
FlashSystem. However, for a limited number of the guest operating systems, VMFS5
supports the automatic space reclamation requests.
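A manual unmap for a VMFS5 datastore can be triggered with a command similar to the following, where Datastore01 is an example datastore name and the reclaim unit (in blocks) is optional:
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200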
To send the unmap requests from the guest operating system to the array, the VM must meet
the following prerequisites:
The virtual disk must be thin-provisioned.
VM hardware must be version 11 (ESXi 6.0) or later.
The advanced EnableBlockDelete setting must be set to 1 (see the example commands after this list).
The guest operating system must be able to identify the virtual disk as thin.
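As a sketch, the EnableBlockDelete setting can be checked and enabled on an ESXi host with the following commands:
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete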
VMFS6 generally supports automatic space-reclamation requests that generate from the
guest operating systems, and passes these requests to the array. Many guest operating
systems can send the UNMAP command and do not require any additional configuration. The
guest operating systems that do not support automatic unmaps might require user
intervention.
The following considerations apply when you use space reclamation with VMFS6:
VMFS6 processes the unmap request from the guest operating system (OS) only when
the space to reclaim equals 1 MB or is a multiple of 1 MB. If the space is less than 1 MB or
is not aligned to 1 MB, the unmap requests are not processed.
For VMs with snapshots in the default SEsparse format, VMFS6 supports the automatic
space reclamation only on ESXi hosts version 6.7 or later.
Space reclamation affects only the top snapshot and works when the VM is powered on.
Example 3-2 Verifying device VAAI support where “naa.xxx” stands for device identifier
[root@ESX1-ITSO:~] esxcli storage core device vaai status get -d naa.xxx
naa.xxx
VAAI Plugin Name:
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: unsupported
You can verify and change your VAAI settings in the host Advanced System Settings
(Figure 3-4 on page 39). A value of 1 means that the feature is enabled. The settings are
host-wide, and each feature is used only if the connected storage supports it.
Figure 3-4 VAAI settings
It is advisable to keep all VAAI operations enabled when using IBM Storage FlashSystem
storage systems so that as much work as possible is offloaded to storage.
Example 3-3 shows how to evaluate the VAAI status by using the PowerCLI command.
Example 3-3 PowerCLI command that is used to evaluate the VAAI status
# Get-VMHost | Get-AdvancedSetting -name *HardwareAccelerated* | select Name, value
Name Value
---- -----
DataMover.HardwareAcceleratedMove 1
VMFS3.HardwareAcceleratedLocking 1
DataMover.HardwareAcceleratedInit 1
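If any of these values were previously changed, they can be set back to 1 with a PowerCLI sequence similar to the following sketch:
Get-VMHost | Get-AdvancedSetting -Name *HardwareAccelerated* | Set-AdvancedSetting -Value 1 -Confirm:$false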
Copy services are a collection of functions that provide capabilities for business continuity,
disaster recovery, data migration, and data duplication solutions. This chapter provides an
overview of, and best practices for, using IBM Storage Virtualize data protection techniques
with VMware.
For replication, including support for VMware Site Recovery Manager (SRM) by using the
IBM Storage Replication Adapter (SRA), the following mechanisms are supported:
Policy-based replication
HyperSwap
Metro Mirror
Global Mirror
The following copy services techniques are implemented in IBM Storage Virtualize to help
protect against unexpected data loss:
Volume group snapshots
Safeguarded Copy
FlashCopy
The volume group provides consistency across all volumes in the group and can be used with
the following functions.
Policy-based replication: A replication policy is assigned to a volume group. The system automatically replicates the data and configuration for volumes in the
group based on the values and settings in the replication policy. As part of policy-based
replication, a recovery volume group is created automatically on the recovery system.
Recovery volume groups cannot be created, changed, or deleted other than by the
policy-based replication. A single replication policy can be assigned to multiple volume
groups to simplify replication management. When additional volumes are added to the group,
replication is automatically configured for these new volumes. Policy-based replication
supports configuration changes while the partnership is disconnected. After the partnership is
reconnected, the system automatically reconfigures the recovery system.
A Safeguarded volume group describes a set of source volumes that can span different pools
and are backed up collectively with the Safeguarded Copy function. Safeguarded snapshots
are created either manually, or through an internal scheduler that is defined in the snapshot
policy. Alternatively, Safeguarded snapshots can be configured with an external snapshot
scheduling application such as IBM Copy Services Manager.
The following copy services relationships are supported by IBM Storage Virtualize system:
Policy-based replication
Volume group snapshots
Safeguarded Copy
FlashCopy, for point-in-time copy
Metro Mirror, for synchronous remote copy
Global Mirror, for asynchronous remote copy
Global Mirror with change volumes (GMCV), for asynchronous remote copy for a
low-bandwidth connection
Note: All these copy services, except Snapshot, are supported by VMware SRM when
using IBM Storage Virtualize Family SRA 4.1.0.
Snapshot policies
A snapshot policy is a set of rules that controls the creation, retention, and expiration of
snapshots.
With snapshot policies, administrators can schedule the creation of snapshots for volumes in
a volume group at specific intervals and retain them based on their security and recovery point
objectives (RPOs). A snapshot policy has the following properties:
It can be assigned to one or more volume groups.
Only one snapshot policy can be assigned to a volume group.
The system supports a maximum number of 32 snapshot policies.
The system supports an internal scheduler to manage and create snapshot policies on the
system. The management GUI supports selecting either a user-defined policy or a predefined
policy, and the user-defined policies can be created by using the management GUI or by
using the mksnapshotpolicy command. Predefined policies contain specific retention and
frequency values for common use-cases. Both predefined and user-defined policies are
managed on the IBM Storage Virtualize system.
Safeguarded snapshots are supported on the system through an internal scheduler that is
defined in the snapshot policy. When the policy is assigned to a volume group, you can select
the Safeguarded option. The policy creates immutable snapshots of all volumes in the volume
group. The system supports internal and external snapshot scheduling applications such as
IBM Copy Services Manager and IBM Storage Copy Data Management. For Safeguarded
snapshots with internal scheduler, refer to Managing snapshots.
After you configure the Safeguarded Copy function on your system, regularly test the
configuration to ensure that Safeguarded backups are ready in the event of a cyberattack. In
addition to testing and recovering, you can also manage objects that are related to the
Safeguarded Copy function on the system. These tasks include adding source volumes to
Safeguarded volume groups, managing Safeguarded backups after expiration, and other
actions. Some administrative actions are completed on the system, and others are completed
on the management interface for the external scheduling application, such as IBM Storage
Copy Data Management or IBM Copy Services Manager.
Volume groups used with policy-based replication require that all volumes in the volume
group have the same caching I/O group.
Production
In a volume group where the volumes in the group are accessible for host I/O, configuration
changes are allowed. This system acts as the source for replication to any recovery copies.
By definition, a production volume group always contains an up-to-date copy of application
data for host access.
Recovery
On the system where the volumes in the group can only receive replication I/O, the volumes
act as a target for replication and cannot be used for host I/O. Configuration changes are not
allowed on the recovery copy, but changes made to the production copy are replicated to the
recovery copy.
Independent
If independent access is enabled for the volume group, such as during a disaster, each copy
of the volume group is accessible for host I/O, and configuration changes are allowed.
Replication is suspended in this state and configuration changes are allowed on both the
copies.
Replication policies do not define the direction for replication. Direction for replication is
determined when a replication policy is associated with a volume group, or a volume group is
created specifying a replication policy. The system where replication policy is configured is
the production system.
An important concept is that the configuration and the data on the volumes are coupled, and
the recovery point is formed from both the configuration and volume data. Adding, creating,
removing, or deleting volumes from a volume group is reflected on the recovery system at the
equivalent point in time as on the production system. If independent access is
enabled on a recovery volume group during a disaster, then volumes on the production
system are from a previous point-in-time. When independent access is enabled on the
recovery system, any partially synchronized volumes are deleted.
When you delete volumes from the production volume group, the copy of the volume on the
recovery volume group is deleted. The volume is deleted when the recovery volume group
has a recovery point that does not include the deleted volume.
Removing a replication policy from a volume group causes the recovery volume group and its
volumes to be deleted. To keep the recovery copy, independent access must be enabled on
the recovery copy first and then the replication policy must be removed from the volume group
in both locations.
The replication policy that is assigned to a volume group can be changed to a compatible policy
without requiring a full resynchronization. A compatible policy is defined as one that has the
same locations. When you change to a compatible policy, the existing replication policy is
removed and the new policy is applied. Attempts to change the policy to an incompatible
policy are rejected by the system.
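As a sketch only, and assuming a recent firmware level, assigning and removing a replication policy from the IBM Storage Virtualize CLI typically looks similar to the following commands, where VG1 and policy1 are example names; confirm the exact parameters in the command-line reference for your release:
chvolumegroup -replicationpolicy policy1 VG1
chvolumegroup -noreplicationpolicy VG1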
4.3.4 FlashCopy
FlashCopy is known as a point-in-time copy. It makes a copy of the blocks from a source
volume and duplicates them to the target volumes.
Tip: Metro Mirror can increase write latency. For best performance, use shorter distances
and create Metro Mirror relationships only between systems with similar performance.
Global Mirror change volumes
Global Mirror change volumes are copies of data from a primary volume or secondary volume
that are used in Global Mirror relationships. Using change volumes lowers bandwidth
requirements by addressing only the average throughput, not the peak.
Compared to Remote Copy, policy-based replication replicates data between systems with
minimal overhead, significantly higher throughput, and reduced latency characteristics.
Note: Volume groups provide the same consistency capability to policy-based replication
configurations.
The use of SRM with IBM Storage Virtualize system can help you protect your virtual
environment.
SRM automates failover processes and provides the ability to test failover or disaster recovery (DR)
without negatively impacting the live environment, which helps you meet your recovery time
objectives (RTOs).
VMware SRM requires one vCenter server in each site with the respective licenses. Also, if
you are using SRM with IBM Storage FlashSystem storage, you are required to use an
IBM Storage Virtualize SRA, which is described in 4.3.10, “Storage Replication Adapter” on
page 48.
For more information about SRM, see VMware Site Recovery Manager Documentation.
The adapter is used to enable the management of Advanced Copy Services (ACS) on
IBM FlashSystem Storage, such as policy-based replication, Metro Mirror, Global Mirror and
Global Mirror with change volumes.
The combination of SRM and SRA enables the automated failover of VMs from one location
to another, connected by either replication method.
By using the IBM Storage Virtualize Family Storage Replication Adapter, VMware
administrators can automate the failover of an IBM Storage FlashSystem 9500 at the primary
SRM site to a compatible system at a recovery (secondary) SRM site. Compatible systems
include another IBM Storage FlashSystem 9500, 9200, 9100, 7300, 7200, or IBM SAN
Volume Controller.
In a failover, the ESXi servers at the secondary SRM site mount the replicated datastores on
the mirrored volumes of the auxiliary storage system. When the primary site is back online,
run a failback from the recovery site to the primary site by clicking Reprotect in the SRM.
For more information, see the IBM Storage Virtualize Family Storage Replication Adapter
documentation.
Figure 4-2 SRA and VMware SRM with IBM Storage Virtualize integrated solution
SRA configuration might vary depending on the specific site configuration. Consider the
following preparatory steps and configuration when using SRA with SRM.
Ensure that for non-preconfigured environments, the recovery volumes remain unmapped.
Make sure that the recovery VMware ESXi hosts are defined as hosts at the recovery site
and report as online.
Figure 4-6 Site Recovery: Paired Sites
As a best practice, stop your currently installed version of the SRA container before running a
different version. Ensure that you satisfy all of the prerequisites before you
run the SRA container. Follow the steps to run the IBM Storage Virtualize system SRA
container on the SRM server.
1. Log in to the VMware SRM Appliance Management interface as admin, as shown in
Figure 4-7.
Figure 4-9 Site Recovery view
# If force is true, SRA will forcefully discard switch step in recovery operation
if any failure: True/False
# Note: Not recommended to enable this param in normal scenario.
force=False
If changes to the configuration file are needed, determine the SRA configuration volume
directory by using the following command:
docker inspect <Container_ID>|grep volume
As root user, edit the SRA configuration file as needed as shown in Example 4-2.
Before you configure and use Array Based Replication, make sure that Network Mappings,
Folder Mappings, Resource Mappings, Storage Mappings, and Placeholder Datastores are
configured according to the VMware documentation. See Configuring Mappings.
Account for the following practices when working with SRA and VMware SRM:
Create Metro Mirror, Global Mirror, or Policy-based Replication relationships between the
source and target VDisks and add them to consistency groups, as explained in “Preparing
the SRA environment” on page 50.
Before you use the SRA, make sure that the relationships and consistency groups are in a
consistent synchronized state.
For Stretched Cluster, make sure that the two copies of a stretched volume are at different
sites and that both copies are online.
For IBM HyperSwap, make sure that the primary volume and secondary volume of a
HyperSwap volume are online.
All volumes that participate in SRM and belong to the same remote copy consistency
group are shown under a single local consistency group. To avoid data inconsistencies
when adding replicated VDisks to the same VM or datastore, all VDisks used by a single
VM or application must be added to the same consistency group.
If you plan to use VMware SRM to manage replicated volumes, use the Name Filter by
using prefixes for the volume name. The volume names can be different for each site, but
prefixes must be paired with the remote site array manager. For example, if the local site
volume name is Pri_Win2019, and if it is mapped to Rec_Win2019 on the remote site,
then you might enter the prefix Pri in the Name Filter field for the local site and the prefix
Rec on the remote site. To use the Name Filter for the consistency group, enter the same
names and prefixes at both the local and remote sites. For more information, see Filtering
different consistency groups and volumes.
Consider the following items for managing datastores and consistency groups:
– The datastores of one VM should be in the same consistency group.
– The datastore of the VM and the raw disk in the VM should be in the same consistency
group.
– You must have administrator privileges to install SRM.
– Set the appropriate timeout and rescan values in SRM for recovery of many VMs.
3. Enter the name of the local FlashSystem. Enter the URL with either /rest or /rest/v1 at
the end. Enter the user ID and password, and specify which pool to use for Test. A pool is also
called an MDisk Group. Select NEXT, as shown in Figure 4-12 on page 59.
Figure 4-12 SRM Add Array Pair Define Local Array Manager
4. Enter the name of the remote FlashSystem. Enter the URL with either /rest or /rest/v1
at the end. Enter the user ID and password and which pool to use for Test. Select NEXT,
as shown in Figure 4-13.
Figure 4-13 SRM Add Array Pair Define Remote Array Manager
6. Review the summary and click FINISH to create the array pair, as shown in Figure 4-15.
7. Select the array pair. The replicated devices on the Storage Virtualize systems and their
consistency groups are shown, as illustrated in Figure 4-16 on page 61.
Figure 4-16 SRM Array Pairs and Replicated Devices
2. Choose a name for the Protection Group, select the protection direction and click NEXT,
as shown in Figure 4-18.
4. Select the Data Store Group and verify the protected VMs, click NEXT, as shown in
Figure 4-20.
5. Select Add to new recovery plan, choose a name for the Recovery Plan and click NEXT,
as illustrated in Figure 4-21.
8. To verify the Recovery Plan, select the Recovery Plans tab, select the Recovery Plan, and
check the status and the number of VMs that are ready for recovery, as illustrated in
Figure 4-24.
2. Confirm the Test and click NEXT, as illustrated in Figure 4-26.
3. Review the summary and click FINISH, as depicted in Figure 4-31.
4. The Cleanup starts, and the results are visible on the Recovery Steps tab, as shown in
Figure 4-32.
4. The Recovery starts, and the results are visible on the Recovery Steps tab, as shown in
Figure 4-32 on page 67.
Figure 4-36 SRM Recovery Plan Run Recovery Steps
2. Select that you understand the risks and confirm the recovery (Figure 4-38). Click NEXT.
4. The Reprotect starts and the results are visible in the tab Recovery Steps (Figure 4-40).
5. To fail back to the original Protected Site, another Planned Migration and Reprotect are
necessary.
In this document, the focus is on solutions that rely both on VMware vSphere Metro Storage
Cluster (vMSC) and VMware SRM in relation to IBM Storage Virtualize.
When you configure a system with a HyperSwap topology, the system is split between two
sites for data recovery, migration, or high availability use cases. When a HyperSwap topology
is configured, each node or enclosure, external storage system, and host in the system
configuration must be assigned to one of the sites in the topology. Both node canisters of an
I/O group must be at the same site. This site must be the same site of any external storage
systems that provide the managed disks to that I/O group. When managed disks are added to
storage pools, their site attributes must match. This requirement ensures that each copy in a
HyperSwap volume is fully independent and spans multiple failure domains (Figure 4-41).
A HyperSwap Volume is a group of volumes and remote copy relationships that work together
to provide the active-active solution and ensure that data is synchronized between sites.
When you create a HyperSwap volume, the necessary components are created
automatically, and the HyperSwap Volume can be managed as a single object.
Like a traditional Metro Mirror relationship, the active-active relationship attempts to keep the
Master Volume and Auxiliary Volume synchronized while also servicing application I/O
requests. The relationship uses the change volumes (CVs) as journaling volumes during the
resynchronization process (Figure 4-42).
The HyperSwap Volume always uses the unique identifier (UID) of the Master Volume. The
HyperSwap Volume is assigned to the host by mapping only the Master Volume even though
access to the Auxiliary Volume is ensured by the HyperSwap function. For each HyperSwap
volume, hosts across both sites see a single volume that is presented from the storage
system with the UID of the Master Volume.
Figure 4-42 Read operations from hosts on either site are serviced by the local I/O group
Cluster considerations
Consider the following tips when you work with HyperSwap and VMware vSphere Metro
Storage Cluster (vMSC):
One IBM Storage Virtualize-based storage system, which consists of at least two I/O
groups. Each I/O group is at a different site. Both nodes of an I/O group are at the same
site.
HyperSwap-protected hosts on IBM Storage Virtualize must be connected to both storage
nodes by using Internet Small Computer System Interface (iSCSI) or Fibre Channel.
In addition to the two sites that are defined as failure domain, a third site is needed to
house a quorum disk or IP quorum application.
More system resources are used to support a fully independent cache on each site. This
allows full performance even if one site is lost.
HyperSwap relationships
One site is considered as the Primary for each HyperSwap Volume or Consistency Group.
This site is dynamically chosen according to the site that writes more data (more than 75% of
write I/Os) to the volume or consistency group over a 20-minute period.
This role can change after a period of 20 minutes, if an I/O majority is detected in nodes on
the non-Primary site, or it can change immediately, if a Primary-site outage occurs.
Note: Low write-throughput rates do not trigger a direction switch to protect against
unnecessary direction changes when experiencing a trivial workload.
Although the I/O group on each site processes all reads from the hosts on that local site, any
write requests must be replicated across the inter-site link, which incurs added latency. Writes
to the primary site experience a latency of 1x the round-trip time (RTT) between the sites.
However, due to the initial forwarding process, writes to the non-primary site experience 2x
the RTT. Consider this additional performance impact when you provision storage for
latency-sensitive applications.
These relationships automatically run and switch direction according to which copy or copies
are online and up-to-date.
Relationships can be grouped into consistency groups, in the same way as other types of
remote-copy relationships. The consistency groups fail over consistently as a group based on
the state of all copies in the group. An image that can be used for disaster recovery is
maintained at each site.
Read I/O is facilitated by the I/O group that is local to the requesting host, which prevents the
need for the I/O to transfer over the long-distance link and incur unnecessary latency.
Write operations from ESXi hosts at the remote site
In this scenario, a write to I/O Group 1 must be applied to both copies, but the replication code
cannot handle that task on I/O Group 0 because I/O Group 0 currently holds the Primary
copy. The write data is initially transferred from the host into a data buffer on a node in I/O
Group 1. The node in I/O Group 1 sends the write, both metadata and customer data, to a
node in I/O Group 0 (Figure 4-45).
Figure 4-45 Write operations from hosts on Secondary are forwarded and replicated
On the node in I/O Group 0, the write is handled as though it were written directly to that I/O
Group by a host. The replication code applies the write to the I/O Group 0 cache, and
replicates it to I/O Group 1 to apply to the cache there, which means that writes to the
secondary site have increased latency and use more bandwidth between the sites. However,
sustained writes mainly to the secondary site over a 20-minute period switch the direction
of the HyperSwap relationship, which eliminates this impact.
Note: Whenever the direction of a HyperSwap relationship changes, there is a brief pause
to all I/O requests to that volume. In most situations, this pause is less than 1 second.
Where possible, consider how application workloads access a single HyperSwap volume (or
HyperSwap consistency group) across the sites, and keep the workload for a given volume or
group local to one site to reduce the likelihood of repeated direction changes.
IBM Storage Virtualize facilitates vMSC with the ability to create a single storage cluster that
spans both sites, such that a datastore is accessible in both locations. In other words,
the datastore must be able to be read from and written to simultaneously from both sites by using
HyperSwap volumes.
For every volume presented from the IBM Storage FlashSystem, a preferred node is
automatically elected. To evenly distribute the workload across the IBM Storage FlashSystem
upon volume creation, the preferred node usually alternates between each node in the I/O
group.
When you map a volume to a host object and rescan the host bus adapter (HBA) on the ESXi
host, ESXi automatically identifies the available paths to both nodes in the I/O group. The
paths to the preferred node for each volume are identified as the Active/Optimized paths. The
paths to the non-preferred node are identified as Active/Non-Optimized paths.
By default, ESXi uses the Round-Robin path selection policy (PSP) to distribute I/O over any
available Active/Optimized paths to the preferred node. A failover to Active/Non-Optimized
paths occurs only if available paths to the preferred node do not exist.
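The path selection policy and the ALUA path states for a volume can be verified from the ESXi shell with commands similar to the following, where naa.xxx stands for the device identifier of the volume:
esxcli storage nmp device list -d naa.xxx
esxcli storage nmp path list -d naa.xxx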
Non-Uniform configuration
In Non-Uniform vMSC implementations, ESXi hosts use SCSI Asymmetric Logical Unit
Access (ALUA) states to identify Active/Optimized paths to the preferred node in the local I/O
group and Active/Non-Optimized paths to the partner node. The host has no visibility of the
storage system at the remote site (Figure 4-46).
With Non-Uniform vMSC environments, if a storage failure occurs at the local site, the ESXi
hosts lose access to the storage because paths are not available to the storage system at the
remote site. However, this architecture might be useful when you run clustered applications
like Microsoft SQL or Microsoft Exchange with servers that are at each site. It might be
preferable to have a clustered application fail over so that an application can continue to run
with locally available storage.
Uniform configuration
In a Uniform vMSC implementation, ESXi hosts also use SCSI ALUA states to identify
Active/Optimized paths to the preferred node and Active/Non-Optimized paths to the partner
node in the local I/O group. Extra paths to the remote I/O group are automatically detected as
Active/Non-Optimized. ESXi uses the Optimized paths to the local preferred node where
possible, and fails over to the Non-Optimized paths only if there are no available paths to the
preferred node (Figure 4-47).
The negative aspect of this configuration is that there can be many ESXi servers and storage
resources at Site 2 that are idle.
In addition, if the workload is unregulated, the direction of the HyperSwap relationship can be
swapped repeatedly, which generates more I/O pauses for each change of direction.
Consider creating multiple VMFS datastores where the primary associated site alternates
between Site 1 and Site 2, and keep VM-storage resources local to hosts that are running the
VMs.
An example might include the following configuration:
When you create datastores, use odd-numbered datastores to have a site preference of
Site 1, and even-numbered datastores designated to Site 2.
Migrate the compute resources for half of the VMs over to ESXi hosts on Site 2 to create
an even distribution of workload between the two sites.
For VMs running on ESXi hosts at Site 1, ensure the VMDKs are provisioned on
odd-numbered datastores.
For VMs running on ESXi hosts at Site 2, ensure the VMDKs are provisioned on the
even-numbered datastores.
Create Host/VM Groups and Host/VM Rules in vSphere to ensure that the Distributed
Resource Scheduler (DRS) can function correctly to redistribute VMs across hosts within
a site if required but can still enable failover in an outage.
As detailed in "Write operations from ESXi hosts at the remote site" on page 75, this approach
ensures that write I/Os at either site are serviced by the I/O group at the site where the I/O
originated, instead of being forwarded over the inter-site link before being replicated to the
remote site. This method reduces the overall latency and increases the performance and
throughput.
The intention is for a given HyperSwap volume or consistency group to keep VM I/O
workloads local to the host running the VM, which minimizes the workloads being driven from
a host at the non-primary site.
In a site outage at either site, vSphere high availability (HA) automatically recovers the VMs
on the surviving site.
For more information about DRS Host/VM groups and rules, see Create a VM-Host Group.
In an ideal HyperSwap environment, VMs do not move to the other site during normal operation.
VMs move to the other site only during a site failure, or when you intentionally migrate them to
balance workloads and achieve optimal performance.
ESXi hostnames
Create a logical naming convention so you can quickly identify which site a host is in. For
example, the site can be included in the chosen naming convention, or you can choose a
numbering system that reflects the location (for example, odd-numbered hosts are in Site 1). The
naming convention makes designing and the day-to-day running of your system easier.
Table 4-1 IBM Storage Virtualize HyperSwap and VMware vSphere Metro Storage Cluster supported failure scenarios
Failure scenario: Complete Site-1 failure (the failure includes all vSphere hosts and SVC or IBM Storage FlashSystem Family controllers at Site-1).
HyperSwap behavior: The SVC or IBM Storage FlashSystem Family continues to provide access to all volumes through the control enclosures at Site-2. When the control enclosures at Site-1 are restored, the volume copies are synchronized.
VMware HA impact: VMs running on vSphere hosts at the failed site are impacted. VMware HA automatically restarts them on vSphere hosts at Site-2.

Failure scenario: Complete Site-2 failure.
HyperSwap behavior: Same behavior as a failure of Site-1.
VMware HA impact: Same behavior as a failure of Site-1.

Failure scenario: SVC or IBM Storage FlashSystem Family inter-site link failure, or vSphere cluster management network failure.
HyperSwap behavior: The SVC or IBM Storage FlashSystem Family active quorum is used to prevent a split-brain scenario by coordinating one I/O group to remain servicing I/O to the volumes. The other I/O group goes offline.
VMware HA impact: vSphere hosts continue to access volumes through the remaining I/O group. No impact.
Starting with IBM Storage Virtualize firmware 8.5.1.0, the VASA Provider function is
incorporated natively into the configuration node of the cluster to simplify the overall
architecture of a vVol environment. This feature is referred to as the Embedded VASA
Provider.
Table 5-2 Feature comparison between the Embedded VASA Provider and IBM Spectrum Connect
Item IBM Spectrum Connect Embedded VASA Provider
Note: Data reduction pools (DRPs) are not supported for either the metadata volume disk
(VDisk) or individual vVols.
The system must be configured with a Network Time Protocol (NTP) server to ensure that
the date and time are consistent with the VMware infrastructure.
The system must be configured with a certificate with a defined Subject Alternative Name
value.
Note: Additionally, all hosts that require access to the vVol datastore must be configured
with the vVol host type.
5.2.2 Configuring the NTP server
To configure the NTP server on the system, complete the following steps:
1. Go to the Settings → System window in the GUI, and select Date and time, as shown in
Figure 5-2.
Note: If you use an FQDN or DNS name for the NTP server, you must ensure that a DNS
server is configured in the system. To configure DNS servers, select Settings → Network
and select DNS.
When you use a self-signed certificate, you must update the Subject Alternative Name field in
the certificate before registering the Embedded VASA Provider within vCenter. When you use
a signed certificate, this value is likely defined.
2. Expand the details and review the Subject Alternative Name value, as shown in
Figure 5-4.
3. Alternatively, run the lssystemcert command, which produces the output that is shown in
Example 5-1.
4. If no Subject Alternative Name field is defined, update the self-signed certificate. To do this
task, select Settings → Security and select Secure Communications, as shown in
Figure 5-5.
6. Complete the certificate notification and ensure that a Subject Alternative Name value is
defined. This value can either be an IP address, DNS name, or FQDN. However, the
specified Subject Alternative Name extension must resolve to the same host as the VASA
provider's advertised IP address, hostname, or FQDN, as shown in Figure 5-7 on page 90.
Note: In some versions of firmware, the GUI automatically populates some values for the
Subject Alternative Name field, so do not use this page to view or verify the existing
certificate values. Instead, check the certificate as reported by your web browser, as
mentioned previously in steps 1 and 2.
After updating the values in this window, clicking Update generates a new system-signed
certificate for the storage system. During this time, the cluster IP is unavailable for a few
minutes while the new security settings are applied. After a few minutes, you might need to
refresh your browser window, and then you are prompted to accept the new self-signed
certificate. Any I/O being processed by the system is unaffected by this process.
5.2.4 Exporting Root Certificate and adding to vCenter truststore
You are now required to export the root certificate from the storage system and upload it into
the vCenter truststore.
1. To do this, navigate to Settings → Security and select System Certificates.
2. Locate the Action menu on the right side of the page, and select Export Root Certificate.
See Figure 5-8.
This downloads the storage system root certificate with a file name of
root_certificate.pem through your browser session.
4. Navigate to the vSphere Client, and from the main navigation menu, select
Administration → Certificate Management. See Figure 5-10 on page 92.
The new certificate is registered in the truststore, and a message stating “CA certificates
refresh on host successful!” is displayed.
5.2.5 Preparing ESXi hosts for vVol connectivity
Any ESXi hosts that require access to a vVol datastore must be defined as a vVol host type in
the storage system.
Note: As a best practice, create a single host object in IBM Storage Virtualize to represent
each physical ESXi server in the configuration. When you use clustered host
environments, for example, when multiple ESXi hosts are part of a vSphere cluster, use
IBM Storage Virtualize Host Cluster objects to represent the vSphere cluster.
2. Specify a name for the host cluster object and click Next. To simplify troubleshooting,
consider using the same name for the Host Cluster object as is defined on the vSphere
Cluster within vCenter, as shown in Figure 5-13 on page 94.
3. Review the summary window, and click Make Host Cluster, as shown in Figure 5-14 on
page 94.
4. When creating a host object, ensure that it is defined with the Host Type of vVol. To do this
task, access the Hosts view in the GUI by selecting Hosts → Hosts, and clicking Add
Host, as shown in Figure 5-15.
5. Enter a descriptive name, select the Host Port definitions, and define the Host Type as
vVol. Consider naming the host object in IBM Storage Virtualize with the same name as
the one that the ESXi host uses in vCenter, as shown in Figure 5-16.
Note: When creating the host object by using the CLI, use the host type adminlun:
IBM_2145:vvolsftw-sv1:superuser>mkhost -fcwwpn
2100000E1EC249F8:2100000E1EC249F9 -name vvolsftw-02 -hostcluster vvolsftw -type
adminlun
7. Repeat the process for each additional host that you want to create.
8. Verify that all hosts are correctly defined as vVol host types by selecting Hosts → Hosts in
the storage system GUI, as shown in Figure 5-18.
9. You can ensure consistency across all members of the host cluster by defining the host
type at the host cluster level. To do this task, select Hosts → Host Clusters. Right-click
the host cluster and select Modify Host Types, as shown in Figure 5-19.
10.Select the vVol host type and click Modify, as shown in Figure 5-20 on page 98.
By configuring the vVol host type on the host or host cluster object, the system
automatically presents the Protocol Endpoints to the ESXi hosts.
11.Before continuing with the Embedded VASA Provider configuration, verify that all hosts in
the vSphere cluster correctly detected the Protocol Endpoints from the storage system. To
do this task, rescan the storage adapters on each ESXi host and verify that there is a
storage device with SCSI ID 768 and 769, as shown in Figure 5-21.
Protocol endpoints
A Protocol endpoint (PE) is presented from each node in the IBM Storage Virtualize cluster.
When you use a storage system with multiple I/O groups, you might see a maximum of 8 PEs.
Ensure that all ESXi hosts correctly identified all PEs before continuing.
If PEs are not being detected by the ESXi host, review the VMware Hardware Compatibility
Guide and ensure that your HBA driver and firmware level supports the Secondary LUNID.
For more information, see Troubleshooting LUN connectivity issues on ESXi hosts (1003955).
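Whether a detected device is a protocol endpoint can also be confirmed from the ESXi shell. As a sketch, on recent ESXi releases the device details include an Is VVOL PE attribute, where naa.xxx stands for the device identifier:
esxcli storage core device list -d naa.xxx | grep -i "VVOL PE"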
The Disk.MaxLUN advanced system setting on each ESXi host must be high enough for the
host to detect any device that is mapped to the ESXi host with a SCSI ID higher than 255,
which includes Protocol Endpoints presented from IBM storage systems. VMware recommends
that this value be defined as 1024.
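Assuming the Disk.MaxLUN parameter, it can be checked and raised with esxcli commands similar to the following sketch:
esxcli system settings advanced list -o /Disk/MaxLUN
esxcli system settings advanced set -o /Disk/MaxLUN -i 1024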
After all the values are defined, the GUI creates all the necessary objects within IBM Storage
Virtualize to facilitate vVol support, as shown in Figure 5-23 on page 100.
For versions 8.6.0.0 or later, the metadatavdisk is created as a 4 TB space efficient volume.
For systems that are running versions earlier than 8.6.0.0, the metadatavdisk is a 2 TB space
efficient volume.
Note: Although it is allocated a maximum capacity of 4 TB, this volume is intended to store
only system metadata. It is likely that the used capacity will never grow beyond a few
gigabytes in size. Account for approximately 1 MB of consumed capacity per Virtual
Volume.
5.3.2 Child pool (vVol-enabled Storage Container)
When you define a new child pool, enter a name and capacity for the child pool. This pool is
presented to the vSphere environment as a vVol-enabled storage container, so the specified
capacity of the pool dictates the size of the vVol datastore within vSphere. The capacity can
be increased or decreased later, so there is flexibility for expansion and scale as the
infrastructure matures.
Storage systems that are running firmware 8.6.0.0 or later support the creation of additional
vVol child pools by using the GUI. For more information, see 5.3.7, “Provisioning additional
vVol datastores” on page 108.
For storage systems running firmware that is earlier than version 8.6.0.0, the initial vVol
configuration allows for the creation of only a single child pool. Additional vVol-enabled child
pools can be created by using the storage system CLI if required.
Note: Swap vVols are always created as fully allocated volumes within IBM Storage
Virtualize regardless of the specified provisioning policy.
The system automatically creates a user group that is assigned with a specific role within
IBM Storage Virtualize. A user account is created that uses the defined username and
password and is configured as a member of this group and is granted specific access rights
that allow manipulation of vVol objects within the storage system.
The storage credentials that are defined in this window are required when registering the
Storage Provider within vSphere, and they are initially used to authenticate the vSphere
environment against the IBM Storage Virtualize storage system.
Note: The password that is defined in the window is used once, and it is required only in
the initial Storage Provider registration process.
After the account has been reconfigured, the Storage Provider can be reregistered in vCenter
by using the new password.
To register the Storage Provider in vSphere, complete the following steps:
1. Click the copy icon in Copy the following URL field to copy the string to your clipboard.
2. Open the vCenter web interface and find the vCenter server in the inventory tree. Select
Configure → Storage Providers, and then click Add, as shown in Figure 5-25.
3. Enter an identifiable name, and paste the URL into the URL field. Add the user credentials
that were defined earlier and click OK, as shown in Figure 5-26.
If the registration of the Storage Provider has failed for any reason, see Chapter 9,
“Troubleshooting” on page 257.
5. Verify that the newly added Storage Provider is showing online and active in the Storage
Providers list (Figure 5-28).
Figure 5-28 Newly added Storage Provider is showing online and active
5.3.6 Creating the vVol datastore
Review the vCenter inventory and identify the cluster or host on which you want to mount the
vVol datastore by completing the following steps:
1. Right-click the cluster and select Storage → New Datastore, as shown in Figure 5-29.
4. Select the hosts that will access the vVol datastore and click NEXT (Figure 5-32).
5. Review the summary window and click FINISH, as shown in Figure 5-33.
6. Review the Datastores tab and ensure that the capacity and accessibility are correctly
reported, as shown in Figure 5-34.
2. Right-click (or use the Actions menu) and select Create Child Pool. See Figure 5-36.
3. Define the name and capacity.
4. Ensure the vVol/VASA Ownership Group and a Provisioning Policy are defined.
5. Click Create.
6. Follow the steps defined in “Creating the vVol enabled child pool” on page 110 to complete
the process.
Note: In some circumstances, it can take a few minutes for vSphere to detect the presence
of a new vVol storage container.
Identify the target parent pool in which to create the child pool and note the mdiskgrp ID or
name.
By default, the name that is associated with the VASA ownership group is VASA.
Identify the provisioning policy that is required for the new vVol child pool by running the
lsprovisioningpolicy command, as shown in Example 5-3.
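As an illustrative sketch only (the object names are placeholders, and the exact parameter names should be verified against Example 5-3 and the CLI help for your code level), the lookups and the child pool creation resemble the following commands:
lsmdiskgrp
lsprovisioningpolicy
mkmdiskgrp -name <child_pool_name> -parentmdiskgrp <parent_pool_id_or_name> -size <capacity> -unit tb -ownershipgroup VASA -provisioningpolicy <policy_id_or_name>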
5.4 Migrating from existing IBM Spectrum Connect vVol
configurations
This section provides a description of the process of migrating from existing IBM Spectrum
Connect vVol configurations.
To perform a migration between vVol solutions, you must complete the following steps:
1. Provision Virtual Machine File System (VMFS) datastores with sufficient capacity to store
all VMs that are on the vVol storage.
2. Use Storage vMotion to migrate existing VMs or templates from vVol storage to VMFS
datastores.
3. Remove the vVol IBM Spectrum Connect configuration from vCenter.
4. Disable or decommission the vVol function on the storage system.
5. Enable vVol using Embedded VASA Provider, register the storage provider and create
vVol datastores.
6. Use Storage vMotion to migrate VMs and templates from VMFS storage to the new vVol
datastores.
7. Remove the temporary VMFS datastores if they are no longer needed.
After the new volumes are created and mapped to the appropriate hosts or host cluster in the
storage system, create the VMFS datastores.
Identify the VMs that are on the vVol datastores that are presented by IBM Spectrum Connect
and run a storage migration to move them to the new VMFS datastores. Depending on the
number of VMs, consider using, for example, PowerCLI to automate bulk VM migrations.
Note: VM templates might exist on the vVol datastores that require conversion into a VM
before they can be migrated. After the migration completes, they can be converted back
into templates.
2. Select Unmount Datastore and select all connected hosts to unmount the datastore from
all hosts, as shown in Figure 5-39. Click OK.
3. After the datastore is unmounted from all hosts, it is automatically removed from vCenter.
4. Repeat these steps for all the vVol datastores that are presented by IBM Spectrum
Connect.
Removing IBM Spectrum Connect Storage Provider
After all the vVol datastores are unmounted and removed, it is safe to remove the Storage
Provider from within vCenter. Complete the following steps:
1. Find the vCenter entry in the inventory tree and click the Configure tab.
2. Select Storage Providers.
3. Identify the IBM Spectrum Connect Storage Provider in the list and select REMOVE
(Figure 5-42).
5.5.1 Identifying and removing the vVol child pools for IBM Spectrum Connect
Identify the child pools that were allocated to any vVol Storage Spaces within the
IBM Spectrum Connect GUI and delete them, as shown in Figure 5-43 on page 116.
Alternatively, log on to the storage system CLI with a user account that has the VASA
Provider role and run the following command to identify any existing vVol child pools that are
used by IBM Spectrum Connect:
lsmdiskgrp -filtervalue owner_type=vvol_child_pool
Note the mdiskgrp name or ID. Verify that the vVol pool is no longer required and that the
name and ID are correct because you cannot recover the pool after it is deleted. When you
are sure, run the following command to remove the child pool:
rmmdiskgrp <name or id>
Warning: Removing the pool might fail if any vVols are in the pool. You might need to
manually remove any vVols in the pool before removing the pool itself. To identify any vVols
that are in the pool to be deleted, run the following command:
lsvdisk -filtervalue mdisk_grp_name=<child pool name>
For each vVol, identify the VDisk ID or name and run the following command to delete the
vVol:
Warning: Verify that the vVol is no longer required and that the name and ID are correct
because there is no way to recover the data after the volume is deleted.
rmvdisk <vdisk_id or name>
After any lingering vVols are deleted, retry the pool removal command until all IBM Spectrum
Connect vVol pools are removed.
5.5.2 Removing the user account that is used by IBM Spectrum Connect
Warning: If other integration interfaces are configured, for example, vCenter or vRealize
Orchestrator, do not remove the user account because its removal will cause future
integration commands to fail.
Identify the user account that is used by IBM Spectrum Connect, either by reviewing the
Storage System Credentials window in the IBM Spectrum Connect GUI (as shown in
Figure 5-45) or by running the command in Example 5-5 on page 118 on the storage
system CLI.
Remove the user account by running the following command:
rmuser <user_id or name>
To identify the User Group that is associated with the VASA Provider role, run the command
in Example 5-6 on the storage system CLI.
If no other user accounts are in the user group, remove the VASA Provider user group by
running the following command:
rmusergrp <usergrp_id or name>
To identify the location of the metadata VDisk, run the command in Example 5-7 on the
storage system CLI.
Remove the metadata VDisk by running the following command on the storage system CLI.
Warning: The metadata VDisk contains all metadata that is associated with the vVol
environment. This operation cannot be undone.
rmmetadatavdisk
After the new storage provider is registered, and a new vVol datastore is online, complete the
migration by completing the following steps:
1. Identify the VMs to be migrated. Select them, right-click, and select Migrate, as shown in
Figure 5-46.
2. In the Select Storage window, identify the newly created vVol datastore and click NEXT, as
shown in Figure 5-47 on page 120.
3. Complete the Storage vMotion workflow and review the tasks to ensure that the VMs
successfully migrated (Figure 5-48 on page 120).
During the migration operation, vVols are automatically created on the storage system
within the child pool that was configured as a vVol storage container.
Figure 5-48 Reviewing the tasks to ensure that the VMs successfully migrated
4. To review the vVol objects within IBM Storage Virtualize, select Pools → Volumes by
Pool within the GUI, as shown in Figure 5-49 on page 121.
Figure 5-49 Volumes by Pool view
5. Select the vVol-enabled child pool in the list by identifying the vVol logo underneath the
pool information on the left panel. When you select the vVol pool, the individual vVol
objects appear, as shown in Figure 5-50 on page 122.
6. The Display Name column was added to provide more information about the vVol that can
help to identify the specific VM to which it belongs (Figure 5-51 on page 123).
Note: By default, the Name column is not displayed in the GUI table view, but it is an
optional field that can be added by right-clicking the column headers and selecting the
Name checkbox.
Figure 5-51 Displaying the Name column
To use these products with the plug-in, they must be on software level 8.4.2.0 or higher.
Note: Base functionality is supported from 8.4.2.0 or higher. However, specific features are
only available on supported platforms or software levels. Throughout the chapter, it is
specified whether the requirements change for different features.
To register the plug-in to a VMware vSphere client, the vCenter environment must be on
version 7.0 or higher.
Appliance requirements
Running the plug-in requires the following resources on the vSphere appliance where the
plug-in virtual machine is deployed:
2 vCPUs and 4 GB of memory
20 GB of datastore space (either vSphere Virtual Machine File System (VMFS) or vVol)
Networking (Ethernet, TCP/IP)
– One IP address (IPv4), Static or DHCP
– Gateway
– DNS
– Netmask
Demonstration videos: The following demonstration videos are available for VMware
integration with IBM FlashSystem:
Connecting IBM Storage to VMware vSphere
Managing datastores provisioned in IBM FlashSystem from the vSphere client
Creating a datastore on IBM FlashSystem storage, directly from the vSphere client
You can also view Demo Videos to see a full list of the IBM Storage videos created by the
IBM Redbooks authors.
You can also download an upgrade bundle from Fix Central. To learn more about upgrading
from previous releases, see 6.4, “Upgrading from a previous version” on page 137.
Deployment instructions
Perform the following steps to deploy the OVA into vSphere:
1. Right-click either a Datacenter, ESX host, or vSphere cluster to open the actions menu
and click Deploy OVF Template. See Figure 6-1 on page 128.
2. When the OVF deployment wizard opens, click UPLOAD FILES, and in the file browser,
select the recently downloaded .ova file. See Figure 6-2 on page 128.
3. Enter a name for the virtual machine (VM) and select a location within the vSphere
inventory. See Figure 6-3.
4. Select which compute resource owns the VM. Evaluations are run on the selected host to
determine compatibility. See Figure 6-4.
Note: The default VM password is available in the Description section. You are prompted to
change the password on the first login.
After you review the details, the licenses must be accepted before you continue with the
deployment of the OVA. The license to be accepted consists of two separate agreements:
– The VMware end-user license agreement
– IBM License information, containing the license in multiple languages
6. After reading both licenses, confirm that you agree with them and proceed.
7. Select the storage for the configuration and files that are associated with the VM to be
deployed. Select whether the VM should be encrypted (which requires a Key Management
Server), the format of the virtual disk, and the datastore where the VM is stored. If the
selection is compatible, click Next. See Figure 6-6 on page 131.
Figure 6-6 Deploy OVF Template: Select Storage
8. Select the destination network for the VM. Other fields on this page are pre-selected and
cannot be changed.
9. Configure the network settings of the VM, using either static or DHCP networking.
a. DHCP option. To configure the plug-in VM to use DHCP networking, select DHCP
from the dropdown in the Network Type section of the page. The only requirement is
ensuring that the network supports DHCP. The static network details section does not
need to be completed. See Figure 6-7 on page 132.
b. If you choose to configure the VM with a static network, select Static from the
dropdown in the Network Type section of the page.
Fill out the following details in the static network details section:
• Hostname
• IP Address
• Gateway
• Netmask
• DNS Server
The remaining fields are optional. See Figure 6-8 on page 132.
10.Review the summary of all the deployment details. Click Finish to deploy the OVA as a
VM. See Figure 6-9.
After the VM has successfully booted, you can connect to it by using ssh as the root user
with the password IBMplugin. Alternatively, you can use the web console from the vSphere
GUI to log in to the VM. The first time that you log in, you are required to change your
password. See Figure 6-10.
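For example, from a workstation that has network access to the appliance (the address shown here is a placeholder):
ssh root@<plug-in-appliance-address>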
When registering or unregistering the plug-in, you must first be connected to your plug-in
appliance. To connect, either use the web console or ssh, and log in as root by using the
updated password.
When you are logged in, the ibm-plugin command can be used for registration or
unregistration.
Note: The vCenter address that is specified must be either the fully qualified domain name
(FQDN) or the IP address of the vCenter instance. Registering the plug-in to a vCenter
with a short hostname can cause errors when you use the plug-in.
After issuing the registration command, the thumbprint of the vCenter is shown. Confirm that
it is correct by pressing the Return or Enter key. Then, enter the password of the specified
vCenter user. See Figure 6-11.
For any issues when registering the plug-in, see 6.12, “Troubleshooting” on page 179. To
register the plug-in to multiple linked vCenters, refer to “Registering with linked vCenters” on
page 135.
After successful registration, a banner alert is shown in the vSphere Client and prompts for a
page refresh. If you are logged in, click Refresh Browser in the prompt to refresh the browser
and activate the plug-in. See Figure 6-12.
Note: The browser must be refreshed to activate the plug-in. If the browser is not
refreshed, the plug-in does not appear.
Populating information from IBM Storage Systems
Several tasks appear in the Recent Tasks panel at the bottom of the vSphere Client GUI. A
task titled Populating information from IBM Storage Systems is a plug-in-specific task that
discovers existing host and datastore information within the vSphere environment and
correlates it with the storage systems that are registered in the plug-in dashboard.
The population task is triggered when the plug-in is restarted or registered to a vCenter
instance, when a storage system is added to the plug-in dashboard, or when it is manually
initiated by the user on the plug-in dashboard.
The time required for the task to finish depends on the size of the vSphere and storage
systems configurations. During this time there might be inconsistencies or inaccuracies with
the information displayed within the IBM Storage pages and the dashboard.
The plug-in is now ready to use. The rest of this chapter describes how to navigate and use
the plug-in.
Note: Although the plug-in supports multiple vCenters, they must be in a linked-mode
configuration. If multiple vCenters that are not in linked mode are to be used, each must
have its own instance of the plug-in appliance.
The following output is displayed after a successful unregistration (Figure 6-13):
Plugin successfully unregistered from vCenter.
The following message is displayed if the appliance cannot communicate to the vSphere
instance:
Cannot connect to registered vCenter instance at
plugin-vcsa-wrong.ssd.hursley.ibm.com.
Press the Y key to confirm and start the removal of the local registration of the plug-in
appliance. The plug-in is not removed from the vSphere client. In this scenario, you must
manually remove the plug-in from the vCenter. Instructions for completing this task can be
found in Manual unregistration on vCenter version 8.0 or higher and Manual
unregistration on vCenter version lower than 8.0.
To display the status of the plug-in, when connected to the VM run the command
ibm-plugin status.
Any plug-in registrations appear in a table under the "Registered vCenters" section of the
output.
Figure 6-14 Status of registrations
Note: No existing data will be lost from datastores. Volumes and volume mappings will
also be undisturbed.
After the upgrade is done, register the plug-in by following the instructions provided in
“Registering the plug-in” on page 134.
Updating the Photon OS packages requires that your VM has internet access. If the VM does
not have internet access, it is best to follow the steps in 6.3, “Downloading and deploying
the OVA” on page 127 with the latest version of the appliance.
To upgrade your Photon OS packages, connect to the VM and run the command ibm-vm
--update. The plug-in should auto-start; wait for the plug-in to become accessible in
vSphere.
If the plug-in does not auto-start, run the following commands to manually start the plug-in:
cd /opt/ibm-plugin
docker-compose up -d
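To confirm that the containers started, a generic Docker Compose status check (not an ibm-plugin-specific command) can be run from the same directory:
docker-compose ps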
The dashboard opens and includes information about any registered storage systems, listing
them in the main datagrid. See Figure 6-16 on page 139.
Figure 6-16 Empty dashboard
Figure 6-17 Sync icon button next to add storage system primary action
1. In the wizard, enter the IP or FQDN of the storage system you want to add, and the
username and password of the user account you wish to use with the plug-in. Click
Validate. See Figure 6-18.
There is an optional alias input, which can be defined as a friendlier name to refer to each
registered storage system. This can be useful when you have multiple storage systems
registered to the plug-in and want an informal, unique identifier for the storage system used
throughout the plug-in's interfaces.
2. Enter an alias name. See Figure 6-19.
3. If there are multiple pools on your storage system, you are asked to select the pools to
register to the plug-in. Select the pools that you want to use.
If there is a provisioning policy attached to the pool, it is displayed in a column within the
list of pools datatable. For more information about provisioning policies, see 6.11.3,
“Provisioning policies” on page 178. Additionally, the parent pool, capacity, and status of
each pool are shown within the list of pools datatable.
Note: The pools that are selected are used when creating volumes. If there is only one pool
available, this page is not shown.
Figure 6-20 Add storage system: Selecting the pools to be used throughout the plug-in for standard
topology system
4. For storage systems configured in a non-standard topology, you are asked to select pools
from each site of the storage system. Select the pools you want to use from site1 first
(Figure 6-21), and then from site2 (Figure 6-22 on page 142). At least one pool from each
site must be selected.
Figure 6-21 Add storage system: Selecting the pools from site1 for non-standard topology system
The wizard closes, and the dashboard refreshes and lists the recently added storage system
in the datagrid.
Tip: There is no restriction on the number of systems that can be added to the plug-in.
After a storage system has been added to the plug-in, a task will appear within the vSphere
client to show that the plug-in is discovering vSphere managed objects that are associated
with that storage system. See “Populating information from IBM Storage Systems” on
page 135. Some panels might show incorrect information or work incorrectly until this is
completed.
6.5.3 Refreshing the inventory of registered storage systems
The database behind the plug-in stores various information and relationships between
different vSphere and Storage Virtualize objects. When various panels open, a direct
database read is performed to get the displayed information. This means that some
information might be out of date since the last database update.
To refresh the storage system objects in the database, click the sync icon on the dashboard.
A recent task appears as the plug-in discovers potential matches between vSphere and
registered storage system environments. See Figure 6-24.
Figure 6-24 Sync icon button next to add storage system primary action
This action creates an inventory of all storage systems registered in the plug-in dashboard
and compares any discovered objects against any existing ESXi hosts and VMFS datastores
that are configured in vSphere. When the discovery task is finished, you can use the plug-in
workflows to manage pre-existing datastores.
Note: Editing your storage system by using the plug-in updates only the alias name as
displayed in the plug-in. No changes are made on the underlying storage system.
Figure 6-25 Edit storage system: Delete storage system icon button next to edit storage system icon
button
3. Re-validate your storage system by entering the password associated with the pre-filled
user account. Alternatively, you can change the user account that is used by the plug-in
but ensure that the new user has visibility of the previously selected pool. See Figure 6-26
on page 144.
Note: You cannot edit the IP/FQDN of the system because the system interprets the
change as registering a new storage system. To update the connection address for an
existing storage system registration, remove the selected storage system from the
dashboard, and then re-add using the new IP/FQDN.
4. After re-authenticating, you can edit your alias name. This is pre-filled with the current
alias but is not compulsory. Click the NEXT button.
Figure 6-27 Edit storage system: Edit the alias name of the storage system
5. Select or unselect pools to update the pools that are registered to the plug-in, and click
the NEXT button. See Figure 6-28 on page 145.
Note: If pools have associated datastores that are managed by the plug-in, they cannot be
unselected.
6. For a non-standard topology storage system, to update the pools that are registered to
the plug-in, select or unselect pools from site1 first, and then from site2. See
Figure 6-29 and Figure 6-30 on page 146.
Figure 6-29 Edit storage system: Update registered pools from Site 1
7. Click the Edit button to finalize the details and update the plug-in's database. See
Figure 6-31.
The dashboard refreshes, and the new details are shown. The errors for invalid credentials
are resolved.
To delete a storage system from the plug-in:
1. Select the storage system that you want to remove. The edit and delete icon buttons are
shown.
2. Select the red trash bin icon.
3. A warning message is displayed. To continue, click Delete. See Figure 6-32.
The dashboard refreshes, and the storage system is removed, along with any relationship
between the system and any objects within the vSphere inventory.
4. Click Volume Groups to see volume groups on the storage system. See Figure 6-34 on
page 148.
5. Click Datastores to see datastores created using the storage system. See Figure 6-35.
6. Click More Details to see more details of the storage system. See Figure 6-36.
6.6.1 Creating datastore(s)
In the current version of the plug-in, you can create single or multiple datastores within one
workflow. To create a datastore:
1. Right-click a cluster in the inventory or navigate to the cluster and expand the actions
menu. Navigate to Create VMFS Datastore and select IBM Storage. See Figure 6-37.
2. Enter the details of the one or more datastores that you want to create.
The options are:
– Name: The name of the datastore.
– Version: This will always be VMFS 6.
– Size: Size of the datastore, in TB or GB.
– Number of Datastores: The number of datastores to create.
See Figure 6-38 on page 150.
To create multiple datastores, you can use the fields Start from and Delimiter. These
fields specify the suffix that is added to the defined datastore name. For example, if 3
datastores named dstore are created and if Start from is set to 1 and Delimiter is set to
‘-’, then the names of the datastores created are dstore-1, dstore-2, and dstore-3.
This naming convention is useful for adding multiple datastores of the same name with a
different suffix so that datastores can be grouped by name.
3. Select a storage system on which to create the volume. To aid in the selection of the
storage system, the capacity information is displayed. See Figure 6-39.
Figure 6-39 Create datastore: Selecting the storage system to support the underlying volumes
When you select a storage system, the plug-in verifies the following factors:
– The host objects exist on the selected storage system. If not, an error is displayed.
– The hosts belong to the same ownership group as the registered user for that storage
system. If not, then an error is displayed.
– All hosts in the selected vSphere cluster are members of the same host cluster on the
specified storage system. If the hosts are not in the same host cluster, an error is
displayed.
– Whether all the hosts are in a host cluster. If the hosts do not belong to a host cluster,
then a warning is raised, but you can continue.
See Figure 6-40.
Figure 6-40 Create datastore: Assessing the host connectivity between vSphere and Storage Virtualize
Note: The number of hosts in the vSphere cluster and the storage system host cluster
must match before you can continue creating the datastores.
Notes:
You cannot select pools that are not online or that belong to different ownership groups
than the registered user.
If only one pool is registered, the pool selection page is skipped, and the single
registered pool is automatically selected.
5. Optionally, you can select a volume group to place the newly created volume in. Upon
selecting the volume group, the snapshot and replication policy details will be displayed
above the datagrid. See Figure 6-44 on page 153.
Figure 6-44 Volgrp selection
6. Review the summary information and click Create. The review page shows the selections
made and lists a summary of the defined parameters.
A progress bar shows the status as the plug-in creates and maps volumes on the storage
system and creates the datastore(s).
The time it takes for the process to finish depends on the vSphere environment or the
storage system configuration. See Figure 6-45.
Figure 6-45 Create datastore: Review the details for the datastore workflow
The datastore summary card can be found on the summary tab of the datastore. Navigate to
the IBM Storage panel by scrolling down the page. See Figure 6-46 on page 154.
The table in this view shows information that can be used to access characteristics of the
datastore's representation on the storage system without accessing the IBM Storage
Virtualize GUI. Following vSphere best practices, each datastore is created with a single
underlying volume; the UID, name, and ID can all be used to determine the specific volume
on the storage system.
By default, during the create datastore workflow, each volume is created as a thin-provisioned
volume, unless the registered pool has a provisioning policy that specifies thick provisioning,
in which case the volume has no capacity savings. To read more about capacity savings,
see 6.11.3, “Provisioning policies” on page 178.
The first tab of the summary card displays the following datastore details (see Figure 6-47 on
page 155 and Figure 6-48 on page 155):
– The name of the storage system where the volume was created
– The name of the pool where the volume was created, for a standard or HyperSwap
configured storage system
– The names of the pools on site1 and site2 where the volume was created, for a system
with a non-standard topology
– The UID of the underlying volume (vDisk UID)
– The type of provisioning policy for capacity savings
– The name and ID of the underlying volume
– The topology of a non-standard configured storage system (Stretched or HyperSwap,
visible only for non-standard configurations)
– Whether the datastore is HA
Figure 6-47 Stretched-ds summary
If the volume is part of a volume group on the storage system, the volume group details are
listed in the adjacent tab of the summary card as follows (see Figure 6-49 on page 156):
– The name of the volume group that the underlying volume is part of
– The number of snapshots for the underlying volume (if supported)
– The frequency and retention period of any assigned snapshot policy
– Whether the snapshot policy marks snapshots as safeguarded
– The replication status of the partnership link if a replication policy is assigned
– Whether the storage systems are replicating within the policy-defined RPO alert time
Snapshot and Replication policy details are listed if they are assigned to a volume group. The
snapshot count field can be used to identify datastores that have snapshots associated with
them. To see how this information can be useful when creating a new datastore from a
snapshot, see 6.9, “Datastore snapshots” on page 169.
To expand a datastore:
1. Right-click the datastore that you want to expand or navigate to the datastore and select
the actions menu. Select IBM Storage → Expand VMFS Datastore. See Figure 6-50 on
page 157.
Figure 6-50 Datastore actions entry point
A window opens within the vSphere UI (Figure 6-51 on page 158) that shows the following
information:
– Name of the selected datastore
– Current total capacity of the datastore
– New total capacity after expanding
There is also an entry field to enter the amount by which to expand the datastore.
2. Enter the value by which to expand the datastore in the Expand By field, and then select
either GB or TB from the drop-down menu.
The Datastore Capacity Usage bar changes as you enter new values to show how much
of the datastore will be used after the datastore is expanded.
3. Click Expand. The plug-in expands the datastore by doing the following actions:
– Expand the capacity of the volume on the storage system.
– Rescan the storage of the hosts mounted to the datastore.
– Expand the datastore to fill the capacity of the volume presented.
During expansion, a plug-in-specific recent task shows that the expansion of the volume was
attempted. Errors are listed in the details column.
Figure 6-52 Deleting a managed datastore with snapshots
If any snapshots are associated with the datastore, you are asked to acknowledge it
before proceeding with the deletion. If you click Delete, the volume state changes to
deleting on the storage system, which means the volume is not visible in the Storage
Virtualize GUI. However, the volume is visible from the CLI. When you view the volume on
the CLI, the name is changed by attaching a prefix of del_ and the time of deletion as a
suffix.
Note: When a volume is in the deleting state, it is not removed from the storage system
until all associated snapshots are also deleted.
Deleting the datastore through the plug-in does the following actions:
– Unmount the VMFS volume on each host associated with the datastore.
– Remove the volume on the storage system.
– Rescan the storage on each host associated with the datastore.
The rescan removes the datastore from the vSphere client. During this process, a new
task is listed in the vSphere client that shows whether the volume on the storage system
was deleted successfully. If the volume on the storage system has volume protection
enabled, then the datastore is left in an inaccessible state, with the recent task displaying
the error returned from the storage system.
Note: To remove this inaccessible datastore, the storage admin will need to remove the
volume from the storage system after the volume protection period expires. Then, each
host should run a rescan to remove the datastore from the vSphere client.
The host summary window can be found on the summary tab of the ESX host object. Scroll
down to view the IBM Storage panel. See Figure 6-53.
Tip: The panel can be moved to another location by dragging and dropping it. This action
changes the layout and is saved for the next time the user navigates to this view.
The table in this view shows how the selected ESX host is represented on each of the
registered storage systems. If the plug-in finds a match for the ESX host, the latest status of
the host on the storage system is listed. If the host is defined with a site on a storage system
that has a non-standard topology, the site name is listed in the host summary card.
However, if the plug-in does not find a match, the status is Undefined.
Tip: If there is no alias for any of the storage systems, the alias column is automatically
hidden. You can alter the column's visibility by using the icon in the bottom left of the datagrid.
If a host is in an Undefined state, you cannot create datastores by using the vSphere Cluster
that the ESX host is a member of, and errors are listed in other UI panels. It is possible to
resolve this error using the plug-in. See 6.7.3, “Creating a new host” on page 162 to find out
how to create a host.
Note: This might mean that information is outdated in this summary view. To refresh the
database, get a live view of the host's representation, and see further details, click the Go
to Host Connections link, which takes you to the Host Configure Panel.
6.7.2 Configure panel
The Host Configure panel provides further insights into defined hosts, and it is also the entry
point to create an undefined host. For more information, see 6.7.3, “Creating a new host” on
page 162.
Note: The time it takes for the view to update depends on both your vSphere and
registered storage system's configuration.
The datagrid by default shows details about each of the registered storage systems and the
visibility of the ESX host on the associated system. Similar to the host summary view, if no
match is found on the storage system, the visibility is Undefined. Otherwise, the live status of
the host is presented with a status label. Each of the columns in the datagrid can be filtered
and sorted alphabetically by clicking the column header.
Tip: To change the columns that are shown in the grid, click the column toggle icon in the
lower left of the datagrid.
To view more details about a defined storage system or to access the create host workflow on
an undefined host, toggle the detail caret icon on the left side of each storage system name.
See Figure 6-55 on page 162.
In this view, details are presented about the host. The details are taken directly from the
storage system. The adapter identifiers are taken from the ESX Hosts Storage Adapters view
and are always visible whether or not the host is defined. If the host is not defined, a warning
message is displayed that provides a link to the system's IP/FQDN. Clicking the link opens a
new tab and allows the user to create the host by using the storage system's GUI.
Alternatively, you can click the Add Host button to start the create host workflow. See 6.7.3,
“Creating a new host” on page 162 for how to proceed with this workflow.
Note: To enable support for vSphere Virtual Volumes, hosts are created with a vVol host
type by default.
You cannot manage host clusters in this workflow. If you are using host clusters on the
storage system, select the link to the GUI and add the host into a host cluster. If you do not
ensure that vSphere clusters and host clusters are configured in the same way, then you
cannot create datastores, and some UI panels present warnings.
Ensure that Fibre Channel zoning has been completed so that host FC initiators can
communicate with the FC target ports on the storage system. This is a prerequisite for
creating a new host.
Figure 6-56 Host details
2. Adapters that are visible to the selected storage system are selectable in the GUI. Select
at least one adapter to use when you create the host.
3. Review the host details and click Add to issue the command to create the host object.
See Figure 6-58 on page 164.
The Host Configure panel is refreshed as the newly created host object is added to the
plug-in's database.
4. Monitor the recent tasks to ensure completion of the Create Host task. Any errors are
displayed in the details column. To view created snapshots refer to 6.9, “Datastore
snapshots” on page 169.
Tip: The IBM Storage panel can be moved to another location within the page by dragging
and dropping the card. The change is saved, so the storage panel is in the saved location
when the summary page is opened.
Figure 6-59 Cluster summary card
The table displayed within the IBM Storage panel shows whether the cluster is configured on
each of the IBM storage systems registered to the plug-in as of the last database write. If the
cluster is found within the database and on the storage system, the live status of the cluster is
shown. However, if the cluster is not present in both environments, it is labeled as Not
Configured.
Note: This might mean that information in the cluster summary panel is out of date. For
more information about updating the cluster information and about how to view the current
state of a cluster, see 6.7.2, “Configure panel” on page 161.
If a vSphere Cluster is not represented in a storage system, you cannot use that system to
create a datastore.
The Go To Host Connections link will automatically redirect to the plug-in's IBM Storage
page under the cluster's configure page. For more information, see 6.7.2, “Configure panel”
on page 161.
Tip: It can take some time for the panel to load, depending on the number of hosts in the
cluster, and the storage system configuration.
By default, the table in this panel shows the storage system name, alias name if defined,
cluster name on the storage system, the storage system status, whether the ESXi hosts are
defined on the storage system, and storage system topology. This can be expanded to show
more details about each ESX host that belongs to the vSphere Cluster and the status of the
cluster on the storage system. See Figure 6-61. Hosts and host clusters can be managed
through the configuration panel. For more details see 6.8.3, “Manage hosts and host clusters”
on page 167.
If there are differences between the vSphere Cluster or ESX hosts and the registered storage
system environment, a mismatch error is displayed when the row is expanded. You can
directly access the storage system's GUI by using the link.
Note: If the storage system hosts are not in a host cluster, then the cluster is not
represented, but the label is green, which indicates success because the host clusters are
not enforced throughout the plug-in.
2. The Add Host page also enables you to create hosts on the storage system if they are not
defined. If they are already defined, you can proceed to add them to the host cluster. See
Figure 6-63 on page 168.
3. In the Add to Host cluster page, use the toggle to create a new host cluster on the storage
system. Alternatively, select an existing host cluster in which to add the hosts. Refer to
Figure 6-64 and Figure 6-65 on page 169.
Figure 6-65 Manage add new hostcluster
4. Review the host cluster details and click Create to manage hosts and host clusters as
required. See Figure 6-66.
If the datastore is part of a volume group, you can see a panel where you can manage
datastore snapshots. See Figure 6-68.
1. Click Take Snapshot located on the right side, above the datagrid. This will open the Take
Snapshot modal. See Figure 6-69.
2. You can name the snapshot to support a naming convention or to help you identify the
snapshot more easily. By default, the name follows the standard snapshot naming
convention.
3. Click the Take Snapshot button in the modal. The modal closes, and the datagrid
refreshes to display the recently added snapshot. See Figure 6-70.
4. Monitor the recent tasks to ensure completion of the Take Snapshot task. Any errors are
displayed in the details column.
3. Click the clone icon next to Take Snapshot. See Figure 6-72.
4. The wizard will open. On the first page of the wizard, select the clone type Thin Clone or
Clone based on your requirement and click NEXT. See Figure 6-73.
5. By default, the new datastore name is generated by appending the selected snapshot's
creation timestamp, including minutes and seconds of the current timestamp, to the
source datastore's name to ensure uniqueness. If needed, you can edit the new
datastore's name and click NEXT. See Figure 6-74.
6. Select a pool from the datagrid where the cloned volume will be created. By default, the
parent datastore's pool is selected. Click NEXT to proceed. See Figure 6-75.
For storage systems configured in HyperSwap and Standard topologies, the cloned
datastore is generated as a standard datastore. You need to select a single pool from the
list of pools. See Figure 6-76 on page 174.
For storage systems configured in Enhanced Stretched Cluster (ESC) topology, copying to
a new datastore is supported on systems running code level 8.6.0.0 or higher.
For storage systems with code levels from 8.6.0.0 to 8.6.2.0, you need to select a single
pool from the list of pools. See Figure 6-76.
For storage systems with code level 8.6.2.0 or higher, the cloned datastore is generated
as an HA datastore by default. To proceed, you need to select one pool from each site.
See Figure 6-76.
To generate a standard clone datastore on systems with code level 8.6.2.0 or higher, turn
off the Clone HA Datastores toggle, and you will be prompted to select a single pool.
7. Review the details and click the Copy button to create a new datastore from the snapshot.
See Figure 6-77.
Figure 6-77 Datastore Snapshots: Review details for a new datastore from snapshot
8. Monitor the recent tasks to ensure completion of the Copy to New Datastore task.
To delete a snapshot:
1. Select the VMFS datastore whose snapshots you want to delete, and then navigate to
IBM Storage → Datastore Snapshots within the Configure tab.
2. Select the snapshot to be deleted from the datagrid containing the list of snapshots.
3. Click the trash icon in the top right corner.
4. Click Delete on the confirmation modal. See Figure 6-78.
5. Monitor the Recent Tasks view in vSphere to confirm the delete snapshot task is
completed successfully.
If the datastore is part of a volume group, the panel displays the volume group name as its
title. If the volume group has an associated snapshot policy that is marked as safeguarded,
the "Safeguarded" label is rendered next to the volume group name. Also, if the volume
group has an associated replication policy where the link status is running, the "Replicating"
label is rendered next to the volume group name. See Figure 6-79 on page 176.
If a snapshot policy is assigned to the volume group, the snapshot policy card displays the
policy name, the frequency, whether it is safeguarded, and the timestamp of the next
scheduled snapshot.
If a replication policy is assigned to the volume group, the replication policy card displays the
policy name, the topology, the replication policy's RPO alert threshold in minutes, and the
production and recovery system names of the replication partnership.
Click the Manage Volume Group button at the top right of the panel. If the datastore is
already part of a volume group, it will be preselected.
Tip: By default, if no snapshot or replication policy is assigned to any volume group on the
storage system, the column will be hidden. To display the column, click the icon located at
the bottom left of the datagrid.
Select the volume group to assign to the datastore and confirm the assignment by clicking
Move. The successful move of the datastore to a volume group can be tracked through the
recent tasks. See Figure 6-80 on page 177.
Note: If the datastore is currently associated with a volume group that has a replication
policy, you cannot directly move it to another volume group. You need to first remove the
datastore from its current volume group and then follow these steps. For instructions on
removing a datastore from a volume group, see 6.10.3, “Removing datastore from a
volume group” on page 177.
Figure 6-80 Moving volume group
Once the volume group details of the datastore have loaded, click the Unlink icon button in
the top right corner of the panel. Confirm that you want to remove the datastore from the
volume group by clicking Remove. See Figure 6-81. The successful removal of the datastore
from the volume group can be tracked through the recent tasks.
Child pools can be assigned to an ownership group to further restrict access. See 6.11.2,
“Ownership groups” on page 178 for more information.
Child pools are represented in the plug-in as indented from their parent to highlight that they
are children. However, the view can be different in combination with ownership groups
because only the children are visible to the authenticated user.
When authenticating with a user that is part of an ownership group to register a storage
system, only the visible child pools are displayed and selectable.
When you create a datastore, if the user associated to the selected storage system is not in
the same ownership group as the defined hosts, an ownership error is displayed. To resolve
the error, open the management GUI using the link and ensure that the hosts are part of the
same ownership group.
All volumes created in a pool with a provisioning policy are automatically created with the
same provisioning policy assigned. This option can be useful if you want to define a
consistent behavior between all objects that are created in a registered pool.
Volumes created through the plug-in are thin-provisioned by default. However, if the pool has
a predefined provisioning policy, the plug-in uses the definitions in the policy for the pool.
When you delete a datastore, the system displays an informational alert if volume protection
is enabled on the underlying system. If the underlying volume has had recent I/O activity
within the volume protection period (that is, the defined period has not elapsed since the last
I/O activity), the delete datastore action fails. To resolve this, wait for the period to pass and
try again.
6.12 Troubleshooting
This section includes a discussion of troubleshooting the plug-in configuration.
The command collects the current database, logs, plug-in config file and other files that can
be useful for debugging the issue and places them in a single .tgz file, in the /tmp directory.
When the snap is generated, you see a message, for example:
Snap file created at /tmp/snap.ibm-plugin.230718.173615.tgz
The snap can then be copied off the appliance by using scp, either to a local machine or to a
storage system. If your VM reboots before the snap is copied, the snap is lost and a new one
must be taken.
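For example, to copy the snap that is shown above from the appliance to the current directory on a local machine (the appliance address is a placeholder):
scp root@<plug-in-appliance-address>:/tmp/snap.ibm-plugin.230718.173615.tgz .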
After the snap is on the storage system, it can be downloaded by doing the following:
1. Access the management interface of the IBM storage system by using the web browser
and select Settings → Support → Support Package.
2. Select Download Support Package → Download Existing Package.
3. Filter the list of support files by the plug-in snap, for example, snap.ibm-plugin and click
Download on the appropriate snap file.
Note: The Linux command less is not installed by default. You can install it by using the
command tdnf install less if you have an internet connection on your plug-in VM.
In some circumstances, these tasks might be listed incorrectly in the vSphere client,
displaying the task identifier, instead of the correct task name or task description.
Note: The vSphere client is inaccessible for a few minutes while the VMware vSphere
Client service restarts.
If a restart of the vSphere Client does not resolve the issue, a restart of the vCenter Server
might be required.
4. Log in to the VM as root by using your credentials, which were set when the VM was
deployed.
5. Edit the network config file found in /etc/systemd/network/. Typically, this file is called
10-gosc-eth0.network. The file can be edited by using vi or vim.
6. In the editor, enter i to edit the contents of the file.
7. Change the appropriate settings to their new values (see the example after these steps).
8. Exit and save the file by pressing the escape key and then typing :wq.
9. Apply the network changes by running systemctl restart systemd-networkd.
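For reference, a static configuration in the 10-gosc-eth0.network file typically resembles the following sketch. The interface name and addresses shown are placeholders, and the structure of the existing file on your appliance might differ, so change only the values that apply:
[Match]
Name=eth0

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.53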
After completing this process, ensure that the VM can be reached with the new network
settings.
Note: There could be multiple reasons for a timeout. The list does not cover the entire
range of possible causes.
To check hardware compatibility in vSphere Client, navigate to the ESXi host. Click the
Updates tab. Under Hosts, check Hardware Compatibility.
Note: Check with your Storage Administrator before deleting any volumes from the
storage system. This is a permanent action.
5. In the Storage Devices table, look for the Datastore column. If it says “Not Consumed”,
there is no VMFS datastore on it.
6. Check the identifier of a device (in the format naa.*) in the Name column or by selecting
the device in the Storage Devices table. Copy the string after "naa."; this ID should match
the vdisk UID of the corresponding volume on the storage system (see the example after
these steps).
Note: To learn more about how to check the vdisk UID, refer to your product
documentation.
7. Delete the volume on the storage system if no VMFS datastore has been created on the
volume.
Note: Check with the Storage Administrator before deleting any volumes from the
storage system. This is a permanent action.
8. Once the volume is deleted from the storage system, perform a rescan on all ESXi hosts
that had visibility of the volume. The device is automatically removed from the Storage
Adapters.
Note: Even if the devices are detached from the ESXi host using the detach option in
the Storage Devices in the Configure tab, the volumes will remain accessible to the
ESXi host as long as the volumes are not removed from the storage system.
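As an example of the comparison in step 6 (the volume name is a placeholder), the detailed view of a volume on the storage system CLI includes its UID:
lsvdisk <volume_name_or_id>
The vdisk_UID field in the output should match the string that follows "naa." in the vSphere device identifier.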
IBM Storage Insights is an IBM Cloud® software as a service (SaaS) offering that can help
you monitor and optimize the storage resources in the system and across your data center.
IBM Storage Insights provides cognitive support capabilities, monitoring, and reporting for
storage systems, switches and fabrics, and VMware vSphere hosts in a single dashboard.
The following list compares resource management functions in IBM Storage Insights and
IBM Storage Insights Pro (subscription):
– Monitoring and inventory management: IBM block storage, switches, fabrics, and VMware
ESXi hosts (IBM Storage Insights); IBM and non-IBM block storage, file storage, object
storage, switches, fabrics, and VMware ESXi hosts (IBM Storage Insights Pro)
– Drill-down performance workflows to enable deep troubleshooting: No (IBM Storage
Insights); Yes (IBM Storage Insights Pro)
– Explore virtualization relationships: No; Yes
– Exporting performance data to a file: No; Yes
– Performance planning: No; Yes
Restriction: You must have a current warranty or maintenance agreement for the IBM
block storage system to open tickets and send log packages.
As an on-premises application, IBM Spectrum Control does not send the metadata about
monitored devices off-site, which is ideal for sites that do not want to open ports to the cloud.
However, if your organization allows for communication between your local network and the
cloud, you can use IBM Storage Insights for IBM Spectrum Control to support your IBM block
storage.
IBM Storage Insights for IBM Spectrum Control is similar in capability to IBM Storage Insights
Pro. It is available at no additional cost if you have an active license with a current
subscription and support agreement for IBM Virtual Storage Center, IBM Spectrum Storage
Suite, or any edition of IBM Spectrum Control.
The data collector streams performance, capacity, asset, and configuration metadata to your
IBM Cloud instance.
The metadata flows in one direction, that is, from your data center to IBM Cloud over HTTPS.
In the IBM Cloud, your metadata is protected by physical, organizational, access, and security
controls. IBM Storage Insights is ISO/IEC 27001 Information Security Management certified.
Figure 7-1 shows the architecture of the IBM Storage Insights application, the supported
products, and the three main teams who can benefit from using the tool.
For more information about IBM Storage Insights and to sign up and register for the
no-charge service, see the following resources:
Fact sheet
Demonstration
Security guide
Figure 7-2 IBM Storage Insights System overview for block storage
The block storage dashboard is a default one that is shown when you go to the Operations
dashboard. To view the storage systems that are being monitored, select Dashboards →
Operations. Then, click the storage system in which you are interested in the left panel of the
dashboard. For example, you can see the health, capacity, and performance information for a
block storage system in the Operations dashboard. The storage system is colored red
because there are problems with nodes and logical components.
IBM Storage Insights supports Brocade and Cisco switches and fabrics so that you can detect
and investigate performance issues throughout your storage environment. You can follow the
trail of storage requests through the components in the SAN fabric to the target storage
systems.
Identify the IP addresses and user credentials of the vCenter Servers that manage the
VMware ESXi hosts and VMs that you want to monitor. IBM Storage Insights uses this
information to connect to the vCenter Servers and discover their managed VMware ESXi
hosts and VMs. You can add multiple vCenter Servers concurrently if they share the same
user credentials by completing the following steps:
1. In the menu bar, select Resources → Hosts.
2. Click Add vCenter Server.
3. Enter the IP addresses or hostnames that you use to connect to the vCenter Server, and
then enter the username and password that are shared by all the vCenter Servers that you
are adding as shown in Figure 7-3.
Note: When you add vCenter Servers for monitoring, you must specify the connection
credentials of users that are used to collect metadata. The users must meet the following
requirements:
– Role: Read Only (minimum). For example, the Administrator role or the Virtual Machine
Power User role.
– Privilege: Browse datastore.
After the initial discovery of the vCenter server starts, IBM Storage Insights discovers all ESXi
hosts and VMs that are managed by that vCenter. You can see details of each ESXi host in
the IBM Storage Insights Pro edition, as shown in Figure 7-4 on page 192.
End-to-end SAN connectivity is visible when IBM Storage systems, SAN switches, and
VMware ESXi servers are added to IBM Storage Insights.
Finding which VM generates heavy I/O on a datastore volume is often a complex process.
IBM Storage Insights Pro can monitor I/O performance at the virtual machine disk (VMDK)
level, so you can identify the most heavily used VM, as shown in Figure 7-5.
Figure 7-5 VMDK level performance monitoring on IBM Storage Insights Pro
With the introduction of the Embedded VASA Provider in version 8.6.0.0, an additional field
was added to the output of lsvdisk to include the display_name for individual virtual volumes
on the system. This field is populated by the VASA provider whenever a vVol is created, and
contains information provided by vSphere to better identify the volume within the storage
system.
To help identify individual vVols when reviewing performance information, IBM Storage
Insights can access this information. For any virtual volume on the storage system, the value
that is reported in the display_name field replaces the traditional volume name. See
Figure 7-6.
Figure 7-6 displays the view of volumes on the storage system. Traditional VMFS datastores,
such as VMFS-DS-001, 002, and 003, are listed alongside the individual vVols that are
associated with a VM named vvolsftw-vm-11, such as vvolsftw-vm-11, vvolsftw-vm-11.vmdk,
vvolsftw-vm-11_1.vmdk, and vvolsftw-vm-11_2.vmdk.
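The same display_name information can also be checked directly from the IBM Storage Virtualize CLI. The following is a minimal sketch; the volume ID, prompt, and returned value are examples, and the exact output format can vary by code level.
# Show the vSphere-provided display name for a single vVol (volume ID and value are examples)
IBM_FlashSystem:cluster:superuser> lsvdisk 42 | grep display_name
display_name vvolsftw-vm-11.vmdk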
Note: Because vVols are dynamically attached and detached from ESXi hosts during VM
power state changes or vMotion, the host column always reflects the currently connected
ESXi host for that specific vVol.
IBM Spectrum Connect provides a web-based GUI that makes administration easier and
more straightforward. The GUI can save time when you set up, connect, and integrate the
required storage resources into your cloud environment.
Through its user credential, storage system, storage space, and service management
options, IBM Spectrum Connect facilitates the integration of IBM storage system resources
with the supported virtualization, cloud, and container platforms.
Storage services, which can be considered storage profiles, are defined in IBM Spectrum
Connect and delegated for use in VMware for profile-based volume provisioning.
IBM Spectrum Connect integrates with the following platforms and interfaces:
VMware vSphere Web Client (vWC)
VMware vSphere Storage APIs - Storage Awareness (VASA)
VMware vRealize Operations (vROps) Manager
VMware vRealize Automation and VMware vRealize Orchestrator (vRO)
Microsoft PowerShell
Note: The VASA for VMware vSphere Virtual Volumes (vVols) function is also available
through the embedded VASA Provider in IBM Storage Virtualize 8.5.1.0 or later. For more
information about using VASA or vVols with the embedded VASA Provider, see Chapter 5,
“Embedded VASA Provider for Virtual Volumes (vVol)” on page 83.
As shown in Figure 8-1, the IBM Spectrum Connect application communicates with IBM
storage systems by using command-line interface (CLI) commands over secure shell (ssh).
VMware also issues application programming interface (API) calls directly to IBM Spectrum
Connect over TCP/IP. Therefore, IP connectivity must exist between the IBM Spectrum
Connect server and the Management IP address, sometimes referred to as the cluster IP
address, of the storage system. For security, some network infrastructures might be
segmented into virtual local area networks (VLANs) or otherwise isolated. This isolation can
prevent the storage system management interface from being reachable from virtual
machines (VMs) within a vSphere environment. Check with your network administrator to ensure that IP connectivity
exists between the different components.
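As a quick sanity check, this connectivity can be verified from the IBM Spectrum Connect server itself; the hostname and user name in the following sketch are examples.
# From the IBM Spectrum Connect server, confirm that the management (cluster) IP is reachable
# over SSH and that CLI commands can be issued (hostname and user are examples)
ssh scadmin@svc-cluster.example.com lssystem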
Communication between VMware vRO, vSphere, or vCenter and the IBM storage system by
using IBM Spectrum Connect is out-of-band and is therefore separate from I/O traffic between
a host and the storage system.
If network connectivity issues prevent IBM Spectrum Connect from communicating with
either the cloud interface or the storage system, the I/O workload that is running on the hosts
is unaffected.
Note: When you use vVols, IBM Spectrum Connect can operate in a high availability (HA)
model, where two IBM Spectrum Connect servers can be configured to run in
Active/Standby. For more information about this feature, see IBM Spectrum Connect
Version 3.11.0.
VM resource requirements might depend on the roles that are performed by the
IBM Spectrum Connect server. The IBM Spectrum Connect server periodically queries the
registered storage systems to populate and refresh its database. Depending on the size and
complexity of the IBM Storage Virtualize configuration, this population task might take some time.
Tip: In large environments, consider increasing the population interval to allow sufficient
time for the task to complete. For more information about IBM Spectrum Connect, see
IBM Spectrum Connect Version 3.11.0.
Note: Before you install the IBM Spectrum Connect application, it is a best practice to
configure a Network Time Protocol (NTP) client on the Linux operating system. A common
time frame for all components makes it easier to review logs during debug operations.
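For example, the NTP server can be set on the storage system from the CLI, and a standard time service can be enabled on the Linux host. The IP address and the chrony service shown here are examples; the package and service names depend on your distribution.
# On the IBM Storage Virtualize CLI: point the cluster at an NTP server (IP address is an example)
chsystem -ntpip 192.0.2.10
# On the Linux server that hosts IBM Spectrum Connect: enable and verify a time service (chrony shown as an example)
systemctl enable --now chronyd
chronyc tracking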
For more information about installation instructions and minimum requirements, see
IBM Spectrum Connect Version 3.11.0.
The installation summary screen (Figure 8-2) provides details about additional steps to
configure firewall rules and SELinux, if required.
8.1.4 Initial configuration
When the IBM Spectrum Connect application is installed, you can access the management
web interface (Figure 8-3), by connecting to http://<IP or FQDN>:8440. The default
credentials are:
Name: admin
Password: admin1!
After a successful login, complete the initial setup wizard, which includes the following
mandatory configuration steps:
Set up an HA group and IBM Spectrum Connect server identity
Provide details for the SSL certificate
Define storage system credentials
Change the default IBM Spectrum Connect credentials
Notes:
When you specify the storage system credentials, it is a best practice to create a
dedicated user account on the storage system that is easily identifiable as belonging to
the IBM Spectrum Connect server. This account is the user account that is used when
IBM Spectrum Connect issues CLI commands to the storage system. An easily
recognizable username can simplify some tasks, such as reviewing audit logs within
IBM Storage Virtualize, and makes it clear which CLI commands were issued by
IBM Spectrum Connect.
The storage system credentials are global and apply to all storage systems that are
registered in IBM Spectrum Connect. When possible, consider using Lightweight
Directory Access Protocol (LDAP) authentication on the storage system to simplify user
account management.
For example, when you use VMware vSphere Virtual Volumes, the storage system user account
must be assigned the “VASA Provider” role on each storage system. Create a User Group
with the “VASA Provider” role in IBM Storage Virtualize, as described in the following steps:
1. Open the IBM Storage Virtualize management interface and click the Access option in the
navigation menu. Click Create User Group at the bottom of the window (Figure 8-4).
2. Enter a name for the User Group to be created, and select VASA Provider. Click Create
(Figure 8-5 on page 201).
Figure 8-5 VASA Provider option
4. Enter a suitable username and select the VASAUsers group that was created previously.
Enter and confirm the password for this user account, and then click Create, as shown in
Figure 8-7 on page 202. A CLI alternative for creating the user group and user is sketched
after this procedure.
5. When the initial configuration wizard is complete, in the IBM Spectrum Connect
management interface, you see a “Guided tour” that provides a brief overview of the
interface. When the tour is completed, an empty inventory of IBM Spectrum Connect is
displayed (Figure 8-8).
6. Click the Interfaces pane, on the left side of the screen, to view and configure the Cloud
Interfaces. You can configure items such as a vCenter server for the IBM Storage
Enhancements or the connector for vRO.
Alternatively, click the Monitoring pane, on the right side of the screen, to configure the
vROps Manager integration.
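As noted in step 4, the same user group and user can also be created from the IBM Storage Virtualize CLI. The following is a minimal sketch; the group name, user name, and password are examples, and the VasaProvider role name should be verified on your code level.
# Create a user group with the VASA Provider role, then a dedicated user for IBM Spectrum Connect
# (names and password are examples)
mkusergrp -name VASAUsers -role VasaProvider
mkuser -name spectrumconnect -usergrp VASAUsers -password C0mpl3xPasswd
# Confirm the new user and its user group
lsuser spectrumconnect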
8.1.5 Registering a Storage System into IBM Spectrum Connect
On the main page of the IBM Spectrum Connect management interface, click the plus (+)
icon, and enter the fully qualified domain name (FQDN) or IP address for the storage system
(Figure 8-9).
You are prompted to enter only the IP or hostname of the storage system because the
Storage Credentials were defined in the initial configuration wizard (Figure 8-10).
Note: If you use VMware vSphere Virtual Volumes, ensure that the Storage Credentials
are configured with the VASA Provider role.
IBM Spectrum Connect uses the concept of a Storage Service for simpler and more flexible
storage management. A Storage Service can also be described as a Storage Provisioning
Profile. If a storage administrator defines the capabilities on the service or profile, all volumes
that are created in that service are created the same way and inherit the same attributes.
Multiple Storage Services can be presented to the VMware interface to present multiple
provisioning options to the vSphere administrator (Figure 8-12).
However, a storage administrator might be reluctant to grant full access to a storage system
because if too many volumes are created, the capacity is difficult to manage. In this scenario,
the storage administrator can create child pools to present ring-fenced allocations of storage
capacity to a Storage Service. A vSphere administrator can then create volumes on-demand
within that storage allocation.
Tips:
When a Storage Service is dedicated to use with VMware vSphere Virtual Volumes,
ensure that the vVol Service checkbox is selected.
Consider a mix of configuration options, such as Space Efficiency, Encryption, Tier, and
Data Reduction.
Note: When you create the Storage Service and select the capabilities, Storage Systems
that are not compatible with the current selection are unavailable, such as:
When Synchronous Replication is selected, but the Storage System that is registered
in IBM Spectrum Connect is not yet configured for HyperSwap.
When Flash storage is requested, but no storage pools exist on Flash devices.
2. After you define the required capabilities of the storage profile, click Create.
The newly created Storage Service is listed. Notice that allocated storage capacity does
not exist for this Storage Service. A storage resource needs to be associated to the
Storage Service.
8.2.2 Allocating capacity to Storage Services
To allocate storage capacity to the Storage Services, complete the following steps:
1. Right-click the Storage Service and select Manage Resources (Figure 8-14).
Note: A Storage Resource can be either an existing parent pool within the IBM Storage
Virtualize storage system, or a new or existing child pool. A child pool is a ring-fenced
allocation of storage capacity that is taken from an existing parent pool.
2. To associate an existing parent pool to the selected Storage Service (Figure 8-16 on
page 208), click the Delegate icon that is associated with the specific parent pool.
Alternatively, to create and associate a new child pool, click the Plus icon (+) at the top of
the Storage System to create a Storage Resource.
This step creates a child pool of a defined name and capacity within a specified parent
pool. A CLI sketch for creating a child pool in advance follows this procedure.
Note: When using a HyperSwap configuration, you are asked to specify the parent pool
at both sites (Figure 8-17). This specification creates a paired child pool with the same
capacity that is defined at each site.
Figure 8-17 HyperSwap configuration
3. Verify that the Storage Resource allocation for the Storage Service is correct and click
Close (Figure 8-18).
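As noted in step 2, a child pool can also be created in advance from the IBM Storage Virtualize CLI and then selected as the Storage Resource. The following is a minimal sketch; the pool names and capacity are examples, and on some code levels child pools are created without specifying a fixed size.
# Create a 2 TB child pool inside an existing parent pool (names and size are examples)
mkmdiskgrp -name vvol_child_pool -parentmdiskgrp Pool0 -size 2 -unit tb
# Confirm the child pool and its capacity
lsmdiskgrp vvol_child_pool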
Tip: When you use vVol-enabled Storage Services, this step is optional. However, it is a
best practice because it provides more visibility of the individual vVol-to-VM relationships.
Note the changes that occur when the Storage Service is delegated:
– The color changes to yellow on the Storage Service (Figure 8-19).
– The Delegate icon changes to show that the Storage Service is delegated (Figure 8-20).
– The Allocated Storage capacity, which is displayed just under the selected vCenter
interface, is updated to reflect the new Storage Service delegation (Figure 8-20).
2. If you want to remove the delegation, click the Delegate icon again to disassociate the
Storage Service mapping (Figure 8-20).
Note: The creation and removal of delegation of Storage Service to the interface does
not impact the existing volumes or host mappings on the storage system.
For IBM Storage Virtualize version 8.5.1.0 and later, the vVol function can be provided in
either of two ways:
1. IBM Spectrum Connect. An isolated application that runs on a separate VM within the
vSphere environment that converts API calls from vSphere into CLI commands that are
issued to the IBM FlashSystem array.
2. Embedded VASA Provider. This application runs natively as a service on the
IBM FlashSystem configuration node, communicates directly with vSphere, and is not
dependent on any other external application or server.
The two models cannot be run in parallel, so you must choose which one to implement in your
infrastructure.
To learn more about the Embedded VASA Provider feature and to see whether it is
appropriate for your environment, see Chapter 5, “Embedded VASA Provider for Virtual
Volumes (vVol)” on page 83.
The vVol architecture, introduced in VMware vSphere 6.0 with VASA 2.0, preserves the
concept of a traditional datastore, maintaining familiarity and functions, and also offers some
of the benefits of legacy Raw Device Mappings (RDMs).
In this scenario, several VMs that are distributed across many ESXi hosts perform I/O
operations to the same volume, which can generate a high workload. When investigating
performance issues, the storage administrator might observe a high number of IOPS and
high response times against that volume, but the storage system has limited knowledge of
the source of the I/O workload or the cause of the performance issues.
When performing logical unit number (LUN)-level operations on the storage system, such as
volume-level snapshots that use a FlashCopy mapping, all VMs that are on the VMFS
datastore that is backed by that volume are affected.
However, RDMs do not offer the same level of functions when compared with VMFS
datastores, specifically regarding VM cloning and snapshots. RDMs also require a storage
administrator to present dedicated volumes for each VMDK file, which can increase the
storage-provisioning management workload.
With vVol, the IBM storage systems recognize individual VMDKs, which allows data
operations, such as snapshots, to be performed directly by the storage system at the VM
level. The storage system uses the VASA provider IBM Spectrum Connect to present vVols to
the ESXi host and inform the vCenter of the availability of vVol-aware storage by using
Protocol Endpoints (PEs).
One PE, also known as an Administrative LUN, is presented by each node in the
IBM Storage Virtualize cluster. A PE presents itself as a traditional storage device and is
detected by an ESXi server like a normal volume mapping. However, PEs offer a more
efficient method for vVols to be mapped and detected by ESXi hosts when compared to
traditional volume mappings, and they do not require a rescan of the host bus adapters (HBAs).
IBM Storage Virtualize automatically provisions PEs to all ESXi hosts that meet the following
configuration requirements:
The vVol host type, when the UI is used to perform configuration
The adminlun option, when the CLI is used to perform configuration
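A minimal CLI sketch for the second option follows; the host name is an example, and the change should be planned with the SCSI ID considerations that are described later in this section.
# Change an existing host object to the vVol (administrative LUN) host type (host name is an example)
chhost -type adminlun esxi-host-01
# Confirm the host type after the change
lshost esxi-host-01 | grep type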
Storage services are configured on the IBM Spectrum Connect server by the storage
administrator. Storage services are then used to configure various storage containers with
specific capabilities that can later be configured as vVol datastores in the vSphere Client.
Note: To keep time synchronized between all components of the vVol infrastructure,
ensure that an NTP server is defined on the storage system, IBM Spectrum Connect
server, vCenter, and ESXi hosts.
3. Choose a storage pool to store the 2 TB Utility Volume. This space-efficient volume holds
the IBM Spectrum Connect metadata database, which is essential for managing your vVol
environment.
Although the volume is created with a capacity of 2 TB, it is unlikely to require more than
2 GB of capacity.
If possible, consider creating a mirrored copy of this Utility Volume by selecting a second
pool to store the additional copy.
4. Define credentials for a newly dedicated user account, which enables the IBM Spectrum
Connect server to connect to the CLI of the storage system.
This process initially creates a User Group within the IBM Storage Virtualize storage
system that is assigned the VASA Provider role, and then creates the User Account
by using the specified credentials within that user group.
5. Click Enable (Figure 8-22).
6. Configure host objects to be vVol-enabled.
You can modify host types on either an individual host or a host-cluster basis, as follows:
– Individual hosts:
i. Select Hosts. Right-click the hosts that you want to enable and select Modify Type
(Figure 8-24).
Figure 8-28 Warning message
Hosts with a generic host-type allow volumes to be mapped with a Small Computer
System Interface (SCSI) ID of 0–2048. However, ESXi detects only SCSI IDs 0–1023.
Note: Hosts with a vVol or adminlun host type are limited to SCSI IDs 0–511. This
warning refers to existing volumes that might be mapped with a SCSI ID greater than 511;
those mappings are lost when the host type is changed to support vVols.
Normally, the lowest available SCSI ID is used when a volume is mapped to a host.
Therefore, this situation occurs only in rare circumstances, where either more than 511
volumes are mapped to a host or host cluster, or a SCSI ID above 511 was specified when
a previous mapping was created.
7. Verify the existing SCSI IDs by reviewing the existing host mappings for a host or host
cluster:
a. Select Hosts → Mappings in the navigation menu, and select All host mappings.
Sort the mappings in descending order by the SCSI ID column to show the highest
SCSI IDs in use (Figure 8-29).
b. Ensure that SCSI IDs above 511 are not in use before you change the host type.
id SCSI_id UID
0 0300000000000000 600507680CD00000DC000000C0000000
1 0301000000000000 600507680CD00000DC000000C0000001
e. Sort the LUN column in descending order and confirm that a PE exists for each node in
the storage system.
In a standard two-node, single I/O-group cluster, two PEs are presented. The SCSI IDs
that are used for the first two PEs are 768 and 769, and the IDs increase for each
additional node in the storage system.
Figure 8-31 VASA Provider settings
3. Define the credentials that vCenter uses to connect to IBM Spectrum Connect when the
storage provider is registered (Figure 8-32).
4. Click Apply.
Note: The URL must be in the format of https://<IP or FQDN of Spectrum Connect
server>:8440/services/vasa.
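Before you register the storage provider, you can optionally confirm that the URL responds from a machine that has network access to the IBM Spectrum Connect server; the hostname in the following sketch is an example.
# Confirm that the VASA service answers on TCP port 8440 (-k skips certificate validation; hostname is an example)
curl -i -k https://spectrumconnect.example.com:8440/services/vasa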
Figure 8-34 New Storage Provider
4. Click OK. The Storage Providers list includes the newly registered storage provider
(Figure 8-35).
In this example, various vVol-enabled storage services are created and delegated in the
following environments:
Production systems and applications
Development or test environments
An application environment that requires encryption or extra security considerations and
must be isolated from other applications
Each storage service can be configured with specific capabilities such as Space Efficiency,
Encryption, and Storage Tier (Figure 8-36).
Each Storage Service was allocated a storage resource and associated with the vCenter
interface (Figure 8-37 on page 223). See 8.2.3, “Delegating Storage Services to vCenter” on
page 210.
Figure 8-37 vVol-enabled Storage Services delegated to a vCenter interface
After the Storage Containers are configured, create the vVol datastore by completing the
following steps:
1. In the vSphere Client window, identify the host or host-cluster for which you want to
configure the vVol datastore by right-clicking the object in the vSphere inventory and
selecting Storage → New Datastore (Figure 8-38 on page 224).
2. Select vVol, and click NEXT (Figure 8-39 on page 224).
3. Enter a name for the vVol datastore, select the related Storage Container from the list,
and click NEXT (Figure 8-40 on page 225).
4. Select the hosts that require access to the datastore (Figure 8-41 on page 225) and click
NEXT. Review the summary and click FINISH.
5. Repeat this process for any additional vVol datastores (Figure 8-42 on page 226). When
the process is finished, the datastores are ready for use.
Figure 8-40 New Datastore: 3
Therefore, it is a best practice to protect the IBM Spectrum Connect server to ensure optimal
function:
Where possible, use VMware vSphere Fault Tolerance (FT) to ensure that if there is an
outage of the ESXi host that is running the IBM Spectrum Connect server, the wider
infrastructure is still able to access the IBM Spectrum Connect integrations. If FT is not
available, ensure that vSphere HA is used to minimize downtime of the server that hosts
the IBM Spectrum Connect application.
Allocate appropriate compute resources to the IBM Spectrum Connect VM. Because the
application acts as a conduit between multiple VMware interfaces and multiple storage
systems, the performance of the end-to-end system is impacted if its resources are limited.
Most importantly, ensure that all VMs associated with providing the vVol infrastructure are
not stored on a VMware vSphere Virtual Volume datastore, which includes the vCenter
Service Appliance (vCSA), the IBM Spectrum Connect server, and other servers that
these applications require to function.
If an outage occurs, the IBM Spectrum Connect server must start before any vVol
operations function.
If the IBM Spectrum Connect server depends on an external LDAP or NFS server that is
on a vVol datastore, then it fails to successfully start IBM Spectrum Connect services. If
IBM Spectrum Connect services fail to start, then vVol datastores are inaccessible and
VMs on the vVol datastore are unavailable. If this situation occurs, contact IBM Support.
8.4.2 Viewing the relationships between vVol and VM
Because the vVols are created with a unique identifier, it can be difficult to establish to which
VM a vVol belongs and to which vVols a VM is associated.
By creating the association between the vVol-enabled storage service and the vCenter
interface in IBM Spectrum Connect, the IBM Storage Enhancements plug-in can assimilate
information from both systems.
The relationships between vVols and VMs can then be displayed for the vSphere
administrator.
2. Click IBM Storage vVols to list the detected vVols on the storage array, including the size
and type of each vVol.
3. If required, export the list as a CSV file to generate a report.
For more information about the IBM Storage Enhancements plug-in, see 8.5, “IBM
Storage Enhancements for VMware vSphere Web Client plug-in” on page 234.
Therefore, VM-management tasks (for example, powering off a VM) fail when an upgrade is
running on the storage system. Automated services, such as VMware HA and Distributed
Resource Scheduler (DRS), also are affected because they send system commands by using
IBM Spectrum Connect.
Tip: After you perform an upgrade, it is possible that the vSphere Web Client marks cold
(powered-off) VMs as inaccessible, which means that ESXi hosts were unable to start a
new binding to these VMs. This behavior is expected during a code upgrade.
To recover management of these VMs, the vSphere administrator removes the affected
VMs from the inventory and then adds them again.
Figure 8-44 Audit log entries
To prevent an accidental action in IBM Storage Virtualize from affecting IBM Spectrum
Connect objects, the storage system prevents manipulation of these objects unless the
manipulation is performed by a user with the VASA Provider role. When a “superuser” or
similar account is used to change objects that were created by IBM Spectrum Connect, the
following error is reported:
CMMVC8652E The command failed as a volume is owned and has restricted use.
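For example, an attempt to delete a volume that is owned by IBM Spectrum Connect from the CLI fails as follows; the volume name and prompt are examples.
# Attempting to delete a vVol-owned volume with a non-VASA account (volume name is an example)
IBM_FlashSystem:cluster:superuser> rmvdisk vvol_datavol_01
CMMVC8652E The command failed as a volume is owned and has restricted use.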
Warning: Do not make manual changes unless advised by a member of IBM Support
because doing so might cause more issues.
When configured in an HA configuration, both Active and Standby IBM Spectrum Connect
servers use the same database to ensure infrastructure consistency.
Tip: For more information about the HA configuration, see the IBM Spectrum Connect
User Guide available at IBM Fix Central.
If the IBM Spectrum Connect server is permanently unavailable (and no backup exists), a
new IBM Spectrum Connect server can be commissioned to recover the metadata database
on the metadata VDisk, and the environment can be resurrected. For more information about
this recovery process, contact IBM Support.
To query the active state of the metadata VDisk, connect to the Storage System by using the
CLI management interface and run the following command (Example 8-2).
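A minimal sketch of this query follows; the volume ID and name in the output are examples, and the exact fields can vary by code level.
# Query the metadata (utility) volume that backs the vVol configuration
IBM_2145:vvolsftw-sv1:superuser> lsmetadatavdisk
vdisk_id 16
vdisk_name vdisk0
status online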
If the status of the metadatavdisk reports corrupt, verify that the underlying VDisk is in an
operational state by checking the detailed view of the specific vdisk_id (vdisk_id 16 in
Example 8-3 on page 231).
Example 8-3 Verify that the underlying VDisk is in an operational state
IBM_2145:vvolsftw-sv1:superuser> lsvdisk 16 | grep ^status
status online
status online
In rare circumstances, the output from the lsmetadatavdisk command shows corrupt, and
there are messages in the IBM Spectrum Connect hsgsvr.log file reporting the following
error:
CMMVC8580E The action failed because the metadata utility volume is corrupt.
This error might occur if the configuration node experienced issues when attempting to
access the internal mounted volume. To resolve this issue, put the configuration node into
service state by using the service assistant. After a few minutes, bring the node back into the
cluster and retry the lsmetadatavdisk command. The metadata VDisk should then report an
online status.
Note: If you are unsure of the status of the metadata VDisk, contact IBM Support.
8.4.9 Certificates
Certificates are used by vSphere, IBM Spectrum Connect, and IBM Storage Virtualize to
secure communication between the separate components. Therefore, ensure that the
IBM Spectrum Connect certificate is configured correctly.
If a certificate issue occurs, regenerate the certificate in IBM Spectrum Connect with the
correct common name and fully qualified domain name, as follows:
1. Go to the IBM Spectrum Connect management interface window and click Server
Certificate (Figure 8-45).
3. After the certificate regenerates, you must remove and reregister the Storage Provider.
Prerequisites
The following items are the prerequisites:
NTP. An NTP client must be configured on the base Linux OS under the IBM Spectrum
Connect application, the IBM Storage Virtualize storage system, and vSphere, which
includes vCenter and ESXi hosts. Given the multiple components in the end-to-end
infrastructure, any time-skew between IBM Spectrum Connect, IBM Storage Virtualize,
and the VMware platforms can complicate debugging issues when logs are reviewed.
Supported versions. Check IBM Documentation for the interoperability matrix for
supported versions of IBM Spectrum Connect, IBM Storage Virtualize, and vSphere. For
IBM Spectrum Connect V3.11, which is the latest version at the time of writing, see
Compatibility and requirements.
Restrictions
The following are the restrictions:
Array-based replication for vVol is not currently supported by IBM Spectrum Connect or
IBM Storage Virtualize. This item is planned for a future release.
vVols are not supported in a HyperSwap configuration.
Limitations
The following are the limitations:
At the time of writing, the number of vVols in an IBM Storage Virtualize storage system is
limited to 10,000 per IBM Storage Virtualize cluster. Depending on the scale of the
vSphere environment, the number of VMs might not be suitable for a vVol implementation,
especially given the number of volumes that can be consumed by a single VM.
With traditional VMFS datastores, a single LUN can host thousands of VMs. The
best-practice guidance on the number of VMs per datastore is more focused on the
workload being generated, rather than the number of VMs.
In a vVol environment, a single VM requires a minimum of three volumes:
– Configuration vVol
– Swap vVol
– Data vVol
Note: For more information about the types of vVols, see Virtual Volume Objects.
If more VMDKs are configured on the VM, then more associated vVols are created. If a
VM-level snapshot is taken of the VM, then an additional vVol is created for each VMDK
configured on the VM.
The storage administrator can use the Storage Spaces and Storage Services objects in
IBM Spectrum Connect to complete the following actions:
Create a preset of specific storage capabilities.
Allocate pools of storage capacity, either parent pools or child pools, in which volumes that
are created by using the IBM Storage Enhancements vSphere plug-in are located.
Delegate those presets to vCenter so they can be used by a vSphere administrator as
either VMFS datastores or RDMs.
The IBM Storage Enhancements for VMware vSphere Web Client plug-in is automatically
deployed and enabled for each vCenter server that is registered in the Interfaces window of
IBM Spectrum Connect.
The storage services that you attach on the IBM Spectrum Connect side are accessible by
vSphere Client, and can be used for volume creation by using the IBM Storage
Enhancements for vSphere Client.
Before you begin, log out of vSphere Client browser windows on the vCenter server to which
you want to add IBM Spectrum Connect. Otherwise, after IBM Spectrum Connect is added,
you must log out and log in again before you can use the extension.
Add the vCenter servers to which you can later attach storage services that are visible and
accessible on the vSphere Client side. You can add a single vCenter server at a time.
When you enter the vCenter credentials on the IBM Spectrum Connect side, verify that the
vCenter user has sufficient access level in vCenter to complete this procedure.
Figure 8-47 Add New vCenter Server for vWC
2. Enter the IP address or FQDN of the vCenter server, and the username and password for
that vCenter server (Figure 8-48).
If the provided IP address and credentials are accepted by the vCenter server, it is added
to the list of servers in the Interfaces window. The yellow frame and the exclamation mark
in Figure 8-48 indicate that storage services are not yet delegated to the interface.
Notes:
If you want to use the vSphere Web Client extension on all vCenter servers that
operate in linked mode, each server instance must be added to IBM Spectrum
Connect, which ensures that the extension is registered on all linked servers
properly.
The same vCenter server cannot be added to more than one IBM Spectrum
Connect instance. Any attempt to add an already registered vCenter server to
another IBM Spectrum Connect overrides the primary connection.
To provision volumes by using the IBM Storage Enhancements plug-in, complete the following
steps:
1. In the vSphere Client, select the Hosts & Clusters tab. Right-click the host or cluster to
which you want to provision volumes and select IBM Storage → Create new IBM
Storage volume (Figure 8-50 on page 237).
Figure 8-50 Creating an IBM Storage volume
2. In the Hosts Mapping field, click the arrow and select the hosts to which you want to map
the volumes. If you select a vSphere Cluster from the list, the volumes are mapped by
using the Host Cluster feature in IBM Storage Virtualize (see 2.1.1, “IBM Storage
Virtualize host clusters” on page 8). This mapping ensures that if more hosts are added to
the Host Cluster on the IBM Storage Virtualize system, they automatically inherit existing
Host Cluster mappings.
4. Enter the required size, quantity, and name for the volumes to be created. When you
create multiple volumes simultaneously, the text box next to the Volume Name entry
displays vol_{1} by default. The number between the brackets ({ }) is incremented for
each volume being created (Figure 8-53).
5. Select the Storage Service in which you want to create the volumes. If multiple Storage
Services exist, they are included in the list.
– Storage capabilities that were defined on the Storage Service are listed in the green
area of the window.
– A summary of the task is shown in the blue area of the window.
6. The Storage LUN value defines the SCSI ID to be used when mapping the volume to the
host or host cluster. Unless you have a specific requirement, select Auto for the Storage
LUN value so that the SCSI ID can be automatically determined by the system.
7. Click OK to begin the storage provisioning task.
The status of the operation is displayed in the Recent Tasks tab of the vSphere Client
window (Figure 8-54).
Hosts that were involved in the storage-provisioning operation are automatically scanned
for storage changes. The display names for the volumes also reflect the names that were
defined in the previous step (Figure 8-55).
8. Verify that the commands were issued correctly by checking the Audit Log (Figure 8-56 on
page 240) in the IBM Storage Virtualize storage system. To access the Audit Log, log in to
the web interface of the Storage System and select Access → Audit Log. You can also
use the CLI to run the catauditlog command when you are logged in over SSH; a brief
CLI sketch follows this procedure.
The names of the created volumes were carried through from the previous New Volume
step (Figure 8-57).
The volumes are created and mapped to the selected Hosts or Clusters, and the vSphere
administrator is now able to create VMFS datastores or RDMs from these volumes by
using the normal vSphere workflow (Figure 8-58 on page 241).
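The following is a minimal sketch of the CLI check that is referenced in step 8; the entry count and the user name filter are examples.
# Show the most recent audit log entries (entry count is an example)
catauditlog -first 20
# Filter for commands that were issued by the dedicated IBM Spectrum Connect user (user name is an example)
catauditlog -first 200 | grep spectrumconnect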
Figure 8-58 New Datastore
8.5.4 Viewing more storage information from within the vSphere Client
When logged in to the vSphere Client, the vSphere administrator can use the IBM Storage
Enhancements plug-in to view additional information about the storage objects that were
delegated to the vCenter Interface.
For each vCenter server, the following IBM Storage categories are available to view for
IBM Storage Virtualize platforms:
Storage services
Storage spaces
Storage volumes
Storage vVols
Important: You might notice references to IBM consistency groups. However, this
integration applies only to IBM FlashSystem A9000/R storage systems.
To view additional information about the storage objects, complete the following steps:
1. Go to the vCenter Server under the Resources list from the Global Inventory Lists view in
the vSphere Client. Open an IBM Storage category to view additional information about
the objects that are currently delegated to the selected vCenter server (Figure 8-59 on
page 242).
2. To view the capabilities names that are defined on a Storage Service, select IBM Storage
Services in the menu on the left. Beneath the menu, select the required Storage Service
(Figure 8-60).
Figure 8-60 Viewing the capabilities that are defined on a Storage Service
3. To find specific information about a particular volume, select IBM Storage Volumes from
the menu in the left panel. Beneath the menu, select the specific storage volume
(Figure 8-61 on page 243).
Figure 8-61 Finding specific information about a particular volume
8.6.1 Considerations
Volume protection is an IBM Storage Virtualize feature that prevents volumes from being
inadvertently deleted or unmapped from a host or host cluster. When attempting to delete a
volume or remove existing host mappings, this task might fail if volume protection is enabled
on the storage system, and the volume recently processed I/O operations. When this setting
is enabled, volumes must be idle before they can be deleted from the system or unmapped
from a host or host cluster. Volumes can be deleted only if they have been idle for the
specified interval. By default this interval is set to 15 minutes.
When volume protection is disabled, volumes can be deleted even if they recently processed
I/O operations. In the management GUI, select Settings → System → Volume Protection to
view or change these settings.
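The same settings can be checked and adjusted from the CLI, as in the following sketch; the 15-minute value is the default and is used here as an example, and the parameter names should be verified on your code level.
# Display the current volume protection settings
lssystem | grep vdisk_protection
# Enable volume protection with a 15-minute idle interval (values are examples)
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15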
The Extend volume size task might fail if a thick-provisioned (fully allocated) volume is created
on the storage system, and a fast-format task is still in progress. If this situation occurs, wait
for the fast-formatting process to complete, and then run the command again.
When the Extend volume size task successfully finishes, the LUN that is backing the
datastore increases in size but the VMFS file system does not change. Rescan the HBA for
each host that accesses the datastore so that the host can detect the change in volume size.
Then, you must expand the VMFS file system by right-clicking a datastore and selecting
Increase Datastore Capacity to take advantage of the additional capacity.
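The rescan can also be performed from the ESXi command line, as in the following sketch; the device identifier is an example.
# On each ESXi host that accesses the datastore, rescan all adapters
esxcli storage core adapter rescan --all
# Confirm that the device reports its new size (the naa identifier is an example)
esxcli storage core device list -d naa.600507680cd00000dc000000c0000000 | grep -i size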
To access the vRO management options, go to the Interfaces window of the IBM Spectrum
Connect server and add the vRO server interface, as shown in Figure 8-62.
The yellow frame and the exclamation mark (Figure 8-62) indicate that Storage Services are
not yet delegated to the interface. You can then manage the integration with vRO as
described in the following sections.
To download and install the IBM Storage plug-in package, complete the following steps:
1. On the Interfaces window, right-click the vRO server, and then select Modify.
2. In the vRO Settings dialog, click Download plug-in package to save the package to your
local computer (Figure 8-63).
Alternatively, you can download the package from Downloading and installing the plug-in
package for vRO.
3. Copy the current vRO token key from the Current vRO token input box. The current vRO
token key is used in step 11 on page 247.
4. In the vSphere Client, select the Configure tab.
5. Click Manage Plug-Ins in the Plug-Ins category. Select Browse → Install.
6. Locate and choose the downloaded plug-in file.
7. Accept the license agreement and click INSTALL (Figure 8-64 on page 246).
Installation is completed, and the IBM Storage plug-in is displayed in the list of vRO
plug-ins.
8. Start the vRO Client, and go to the Workflows tab.
9. On the Workflows tab, add a search filter for IBM to list the new workflows available that
are using IBM Spectrum Connect (Figure 8-66 on page 247).
Figure 8-66 Listing the new workflows
10.Locate the Set Server and Token workflow and click Run.
11.Enter the following information in the correct fields in Figure 8-67:
– In the server field, enter the FQDN.
– In the port field, enter the port of the IBM Spectrum Connect server.
– In the token field paste the token from step 3 on page 245, and click Run.
Tip: If you experience issues running the Set Server and Token workflow, retry the
procedure with a web browser in Incognito or Private Browsing modes.
The workflow starts and a completion status is returned (Figure 8-68 on page 248).
After a Storage Service is created and allocated, complete the following steps:
1. Run the Create and Map a Volume workflow (Figure 8-69).
2. Click in the “Service on which the volume should be created” window and search for a
Storage Service (Figure 8-70).
The VMware plug-in is available through the VMware Marketplace and is published as
“vRealize Operations Management Pack for IBM SAN Volume Controller and IBM Storwize
4.0.0”. For more information, see IBM Storage Management Pack for VMware vRealize
Operations Manager.
When using IBM Spectrum Connect for vROps integration, the management pack can be
downloaded from the IBM Spectrum Connect GUI and then deployed on the vROps Manager
server. After a VMware vROps Manager server is registered on an instance of IBM Spectrum
Connect that is configured with storage systems, storage spaces, services, and vRealize
servers, the storage-related data is pushed to the vROps Manager server in 5-minute
intervals by default.
The dedicated IBM storage system adapter that is deployed on the vROps Manager server
enables monitoring of the supported IBM storage system by using the vROps Manager. This
adapter reports the storage-related information, such as monitoring data of all logical and
physical elements, covering storage systems, storage domains, storage pools, volumes,
hosts, modules, target ports, disks, health status, events, thresholds, and performance. It also
provides the dashboards that display detailed status, statistics, metrics, and analytics data
alongside hierarchical flowcharts with graphic representation of IBM storage system
elements.
Relationships between the IBM storage system elements (storage systems, ports, storage
pools, volumes, host, host initiator, modules, domain) and datastores, VMs, and hosts are
displayed graphically in a drill-down style. This display provides VMware administrators with a
complete and up-to-date picture of their used storage resources.
To download the PAK file from IBM Spectrum Connect, complete the following steps:
1. Go to the Monitoring window of the IBM Spectrum Connect GUI. The Set vROps Server
dialog is displayed (Figure 8-74).
To deploy the management package on the vROps, complete the following steps:
1. Access the vROps Manager administrative web console by using https://<hostname or
IP address of the vROps UI>.
2. Select Administration → Solutions → Repository.
3. In the Repository window, click ADD/UPGRADE to add a management package. The Add
Solution dialog is displayed.
4. In the Add Solution dialog, click Browse and select the management package that is
downloaded from IBM Spectrum Connect (Figure 8-75 on page 253). Click UPLOAD to
start deployment.
Figure 8-75 Starting a deployment
After the package is uploaded, the package information is displayed (Figure 8-76 on
page 254).
5. Click NEXT. The IBM license agreement is displayed.
6. Accept the IBM license agreement and click NEXT to continue. The Installation Solution
progress is displayed.
7. Click FINISH to complete the installation. No additional configuration of the management
package is required on the vROps Manager. Under certain conditions, the package’s status
might appear as Not configured on the vROps. You can disregard this information.
To add the vROps Manager server to IBM Spectrum Connect, complete the following steps:
1. Go to the Monitoring window of the IBM Spectrum Connect GUI (Figure 8-77 on
page 255).
2. Enter the following information:
– IP/Hostname. IP address or FQDN of the vROps Manager server
– Username
– Password
3. Select the checkbox to confirm you installed the PAK file on the vROps Manager server.
Figure 8-77 Monitoring window
Chapter 9. Troubleshooting
This chapter provides information to troubleshoot common problems that can occur in an
IBM FlashSystem and VMware vSphere environment. It also explains how to collect the
necessary problem determination data.
For more information about the level of logs to collect for various issues, see What Data
Should You Collect for a Problem on IBM Storage Virtualize systems?
For the topics covered in the scope of this document, you typically need to gather a snap
(option 4), which contains standard logs plus new statesaves. Because this data often takes a
long time to collect, it might be advantageous to manually create the statesaves, and then
collect the standard logs afterward. This task can be done by using the svc_livedump
command-line interface (CLI) utility, which is available in the product CLI
(Example 9-1 on page 259).
Example 9-1 Using svc_livedump to manually generate statesaves
IBM_FlashSystem:Cluster_9.42.162.160:superuser>svc_livedump -nodes all -y
Livedump - Fetching Node Configuration
Livedump - Checking for dependent VDisks
Livedump - Check Node status
Livedump - Preparing specified nodes - this may take some time...
Livedump - Prepare node 1
Livedump - Prepare node 2
Livedump - Trigger specified nodes
Livedump - Triggering livedump on node 1
Livedump - Triggering livedump on node 2
Livedump - Waiting for livedumps to complete dumping on nodes 1,2
Livedump - Waiting for livedumps to complete dumping on nodes 2
Livedump - Successfully captured livedumps on nodes 1,2
After you generate the necessary statesaves, collect standard logs and the latest statesaves
(option 3), and use the GUI to create a support package including the manually generated
livedumps. Alternatively, you can create the support package by using the CLI (Example 9-2).
When the support package is generated by using the command line, you can download it by
using the GUI or using a Secure Copy Protocol (SCP) client.
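The following is a minimal sketch of collecting and downloading the package from the CLI; it assumes that the gui3 option corresponds to snap type 3 on your code level, and the snap file name is an example.
# Collect standard logs plus the most recent statesaves from each node
svc_snap gui3
# List the generated packages in /dumps, then copy the snap off the system (file name is an example)
lsdumps
scp superuser@<cluster_ip>:/dumps/snap.single.78XXXXX.YYMMDD.HHMMSS.tgz /tmp/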
When downloading a package for an ESXi host, the default settings provide the information
that is needed to analyze most problems.
Important: When collecting data for problems that are related to SRM, make sure to
collect data from all sites associated with the problem.
9.1.4 Data collection guidelines for IBM Spectrum Connect (VASA or vVols)
For troubleshooting issues associated with VASA or VMware vSphere Virtual Volume (vVol),
the following sets of data are required:
1. A support package from the storage system as shown in 9.1.1, “Data collection guidelines
for SAN Volume Controller and IBM FlashSystem” on page 258
2. A support package from IBM Spectrum Connect
3. A support package from the management application interfacing with IBM Spectrum
Connect
4. If the problem includes access to the data, ESXi logs as shown in 9.1.2, “Data collection
guidelines for VMware ESXi” on page 259
Collecting data for VMware vCenter
vCenter logs can be collected by using the same process as ESXi hosts, as described in
9.1.2, “Data collection guidelines for VMware ESXi” on page 259. The difference is when
selecting resources for which to collect logs, select the vCenter server instead of (or in
addition to) an ESXi host.
The three general categories of storage loss of access events in VMware products are:
All paths down (APD)
Permanent device loss (PDL)
Virtual machine (VM) crash
These issues are typically the result of errors in path recovery. Corrective actions include:
Validate that the best-practice multipathing configuration is in use, as shown in 2.3,
“Multi-path considerations” on page 18.
Validate that all server driver and firmware levels are at the latest supported level.
Validate that the network infrastructure connecting the host and storage is operating correctly.
For a list of I/O errors that trigger PDL, see Permanent Device Loss (PDL) and
All-Paths-Down (APD) in vSphere 6.x and 7.x (2004684).
These types of events are often the result of a hardware failure or low-level protocol error in
the server host bus adapter (HBA), storage area network (SAN), or the storage array. If
hardware errors are found that match the time in which the PDL happens, PDL is likely the
cause.
9.2.2 VMware migration task failures
This section discusses the failures for two types of migrations:
1. vMotion is a migrate task that is used to move the running state of the VM (for example,
memory and compute resource) between ESXi hosts.
2. Storage vMotion is a migrate task that is used to move the VM storage resources (for
example, VMDK files) between datastores.
vMotion tasks
vMotion tasks are largely dependent on the Ethernet infrastructure between the ESXi hosts.
The only real storage interaction is at the end, when file locks must move from one host to
another. In this phase, it is possible for SCSI reservation conflicts or file-lock contention to
cause the migration task to fail. The following articles describe the most frequent issues:
Investigating virtual machine file locks on ESXi hosts (10051)
Resolving SCSI reservation conflicts (1002293)
IBM Spectrum Virtualize APAR HU01894
Migrating virtual machines between datastores within the same storage controller uses the
storage array's capabilities for faster performance. This is achieved through a technology
such as Extended Copy (XCOPY) or VAAI Hardware Accelerated Move, which offloads the
copy operation to the storage array itself.
The default timeout for the task to complete is 100 seconds. If the migration takes longer than
100 seconds to complete, then the task fails with a timeout, as shown in Example 9-6.
The task is generic by nature and the root cause behind the timeout typically requires
performance analysis of the storage arrays that are involved and a detailed review of the ESXi
logs for the host performing the task. In some circumstances, it might be appropriate to
increase the default timeout, as described at Using Storage Motion to migrate a virtual
machine with many disks fails without timeout (1010045).
Assuming the VASA Provider URL is correctly defined and is accessible from vSphere,
registration failure cases can generate an event log entry detailing the reason for the failure.
In situations with connectivity issues, no further debug information is generated on the
storage system. See Figure 9-4.
Review the event log by selecting Monitoring → Events from the storage system GUI.
Review any event log entries titled VASA provider registration failed. Double-click the
failure message, or right-click and select properties to view additional context.
The following example and Figure 9-5 provide some context about the error:
Event ID: 989050
Event ID Text: VASA provider registration failed
Explanation: An attempt to register the VASA provider within a vCenter console failed
Review the error log entry and associated sense data, and make note of the value for Sense
bytes 0 to 15. See Figure 9-6 on page 266.
This information is also available within the detailed view of the specific eventlog entry. See
Example 9-7.
The value associated with sense1 indicates the reason for the registration failure. Provide
this information when you work with IBM Support. The logged failure cases are summarized
in Table 9-1.
Sense code value    Reason for failure
06                  Failed to add the vCenter IP to the certuid field on the user account.
The following sections include some common reasons for registration failures and some
suggested recovery actions.
Additionally, if a previous attempt to register the storage provider has failed for another
reason, the password authentication capability might have been removed from the user
account.
To check the user account status on the storage system, locate the user account and ensure
that Password authentication is Configured before attempting registration. If the Password
value is displayed as No, password authentication has been disabled for the user account in
favor of certificate authentication.
To verify, use the storage system CLI to identify the user account. Run the lsuser command
and specify the username or id. See Example 9-8 on page 268.
In the CLI example that is shown in Example 9-8, the values of password=no and
certuid=9.71.20.191 suggest that a partial registration was triggered in vSphere, which
caused the removal of password authentication for the user account. Additionally, the IP
address of the vCenter appliance that was used for registration is defined in the certuid field.
To restore password authentication and clear the certuid field, use the chuser command. For
the user ID vmware, the command might look like the following example:
chuser -password C0mpl3xPasswd -nocertuid vmware
After completing the preceding task, verify that the output of the lsuser command correctly
reflects the changes and then reattempt the Storage Provider registration with the new
credentials.
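A minimal verification sketch follows; the user name is an example.
# Confirm that password authentication is restored and that the certuid field is cleared (user name is an example)
lsuser vmware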
To check for duplicated truststore entries, connect to the storage system management CLI
and run the lstruststore command, as shown in Example 9-9.
Review the VASA column in the lstruststore output. Any truststores that show vasa=on were
created by an attempted registration of the Storage Provider in vCenter. If the Storage
Provider has not been successfully registered in vSphere and multiple or duplicate entries
with vasa=on are displayed in the output of lstruststore, remove each truststore by using the
rmtruststore <id> command and retry the Storage Provider registration.
The certificates being too large for the truststore (Sense 05 01)
In certain environments with complex certificate configurations, vSphere attempts to submit
multiple certificates within a single file, which cannot be added as a single truststore because
of its size. The truststore within IBM Storage Virtualize has a limit of 12 KB per truststore, so
any certificate file must not exceed 12 KB.
To work around this limitation, the certificate file can be split into multiple smaller files that are
each less than the 12 KB limit.
After an attempted registration, the certificates are temporarily stored on the configuration
node in the file /dumps/vmware-vasa-certs. The following command can be run from an
external machine to securely copy the certificate file to a local /tmp directory:
scp superuser@<Cluster_IP>:/dumps/vmware-vasa-certs /tmp
This certificate file must be split manually into two or more smaller files with different names,
such as vmware-vasa-certs-1 and vmware-vasa-certs-2. Each file must be less than 12 KB
and must contain only complete certificates, each including the line with the text
BEGIN CERTIFICATE and the line with the text END CERTIFICATE.
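One possible way to split the bundle on the external machine is with csplit, which starts a new output file at each BEGIN CERTIFICATE line. The following is a sketch; csplit availability and the generated file names depend on your operating system, and you might need to rename the output files to match the names that are used in the commands that follow.
# Split the downloaded bundle into one file per certificate
# (output files are named vmware-vasa-certs-00, vmware-vasa-certs-01, and so on)
cd /tmp
csplit -z -f vmware-vasa-certs- vmware-vasa-certs '/-----BEGIN CERTIFICATE-----/' '{*}'
# Confirm that each file is smaller than 12 KB and contains a complete certificate
ls -l vmware-vasa-certs-*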
These multiple certificate files must then be securely copied back to the Storage Virtualize
config node:
scp /tmp/vmware-vasa-certs-* superuser@<Cluster_IP>:/dumps/
Create multiple truststores by using the split certificate files copied to the storage system by
running the mktruststore command:
svctask mktruststore -file /dumps/vmware-vasa-certs-1 -vasa on
svctask mktruststore -file /dumps/vmware-vasa-certs-2 -vasa on
Also create a file named /dumps/bypass-truststore to instruct the VASA provider to bypass
truststore creation, because the truststores have already been created manually. On a local
system, create an empty file with the name bypass-truststore and copy it to the /dumps
directory on the configuration node.
/tmp # touch /tmp/bypass-truststore
/tmp # scp /tmp/bypass-truststore superuser@<Cluster_IP>:/dumps
Register the VASA Provider again within vCenter, and verify that it registered successfully.
The incorrect user role being assigned to the user's usergroup (Sense 07)
By default, when the vVol function is enabled on the system, a VASA usergroup is created and
assigned the VASAProvider role, which allows manipulation of vVol objects. No other user
role is permitted for the VASA provider user. If you configure vVol by using the CLI, ensure
that the VASAProvider role is assigned to the user group that contains the user account.
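As a quick CLI check (a sketch only; the group name VASA is an assumption based on the default
group that is created when vVol is enabled, so substitute the group that contains your VASA
provider user):
lsusergrp
chusergrp -role VASAProvider VASA
The lsusergrp output shows the role that is assigned to each user group. If the group that
contains the VASA provider user does not show the VASAProvider role, the chusergrp command
corrects it.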
For additional validation, verify that the VASA provider URL is accessible in a web browser
before continuing to register the Storage Provider. You should see information similar to that
shown in Figure 9-8.
Furthermore, consider testing IP connectivity from the vCenter Appliance console by running
a curl request to the VASA provider URL from an SSH or console session, similar to the
following example:
curl -i -k https://vvolsftw-af7.ssd.hursley.ibm.com:8440/services/vasa
Pragma: no-cache
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'none'; img-src 'self'; script-src 'self'; style-src
'self';
<vasa-provider><supported-versions><version id="4" serviceLocation="/services/vasa3/vasa3"
/></supported-versions></vasa-provider>
Note: To determine the accessibility of the VASA URL, consider testing the preceding
command from the vCenter Server Appliance (vCSA) CLI or other machines in the same
subnet as the storage system.
If the information shown in Example 9-11 on page 270 is not returned, either in the browser
or by the curl query, verify that network connectivity is available to the storage system
on TCP port 8440. If the issue persists, it might be necessary to restart the VASA Provider
services on the storage system. To do this, access the cluster management CLI through an
SSH connection and run the following command:
satask restartservice -service nginx
Be aware that it can take approximately 5 minutes for all VASA provider services to become
functional. After issuing the restartservice command, wait for approximately 5 minutes and
retry the previous diagnostic steps.
Note: The VASA Provider processes and services are supported by the nginx service, so
forcing an nginx restart subsequently triggers a restart of all VASA provider services
on the node. All VASA provider services run on only the configuration node of the system
at any one time. In the event of a configuration node failover, for example, during a firmware
upgrade or maintenance procedure, the VASA provider services are started on the
newly defined configuration node.
If after 5–10 minutes there is still no response from the VASA Provider URL, contact IBM
support.
Any systems that enabled vVol by using the storage system management GUI on firmware
versions 8.5.4 or earlier are configured with the legacy version1 metadata model.
Any system that enables vVols by using the storage system management GUI on firmware
versions 8.6.0.0 or later is automatically configured with the version2 metadata model.
Firmware levels 8.6.1 and later of IBM Storage Virtualize do not support the legacy metadata
model, so 8.6.0 is the final firmware stream that supports configurations with the legacy
metadata model.
Example 9-12 shows the lsmetadatavdisk output as seen on a system running 8.5.4.x or earlier;
although a version number is not displayed, this presents as a version1 metadata vdisk.
Upgrading a system that has a version1 metadatavdisk to 8.6.0.x firmware updates the CLI
output.
Example 9-13   lsmetadatavdisk output after upgrading a system with a version1 metadatavdisk to 8.6.0.x firmware
IBM_FlashSystem:vvolsftw-af7:superuser>lsmetadatavdisk
vdisk_id vdisk_name status version
0 vdisk0 online 1
During the Software Upgrade Test process, the following message will appear:
******************** Error found ********************
The system identified that a version 1 metadata volume exists for VMware Virtual
Volumes (vVols). Please see the following web page for the actions that must be
completed before this upgrade can be performed:
https://www.ibm.com/support/pages/node/6983196.
Refer to the listed support page for the most recent information on the migration process.
Migration procedure
The version2 metadatavdisk is created as a 4 TB space-efficient (thin-provisioned)
volume. However, the metadata typically consumes only a small fraction of this capacity.
The actual consumption depends on the size of the vSphere configuration, such as the
number of Virtual Machines and the associated number of Virtual Machine Disks being stored
on vVol storage.
For planning and capacity management within the metadatavdisk, account for approximately
1 MB per vVol.
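As a rough illustration (the figures are assumptions for planning purposes only), an
environment of 500 virtual machines with four vVols each, for example a configuration vVol, a
swap vVol, and two data vVols per VM, amounts to about 2,000 vVols, which corresponds to
approximately 2,000 MB, or roughly 2 GB, of consumed metadata capacity on the 4 TB
metadatavdisk.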
Note: During the metadata migration process, the embedded VASA provider services are
temporarily inaccessible, and the Storage Provider is listed as offline in vSphere.
Consequently, any vVol datastores presented from this storage system are inaccessible.
Although existing I/O requests from VMs are serviced normally, any management tasks
performed on virtual machines within these datastores fail.
Procedure
Identify the parent pool in which to store the new version2 metadatavdisk. Then run the
mkmetadatavdisk command, specifying the parent pool ID with the -mdiskgrp parameter and the
-migrate flag, as shown in the following example:
mkmetadatavdisk -mdiskgrp 0 -migrate
Virtual Disk, id [51], successfully created
The creation of the second metadatavdisk prompts the migration process to begin. After
successful completion, the original version1 metadatavdisk is removed and an event log entry
is posted. This process might take approximately 1 hour, depending on the size of the vVol
configuration and the number of vVols.
Event ID: 989055
Event ID Text: Metadata migration complete
If the migration fails, a message is reported in the system event log:
Event ID: 009220
Event ID Text: Metadata migration failed
After the process finishes, the output of lsmetadatavdisk is updated to reflect the new
metadata volume. See Example 9-15.
Revalidate the system configuration by using the Software Upgrade Test Utility and proceed
with the upgrade as instructed.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
Implementation Guide for IBM Storage FlashSystem and IBM SAN Volume Controller:
Updated for IBM Storage Virtualize Version 8.6, SG24-8542
Performance and Best Practices Guide for IBM Storage FlashSystem and IBM SAN
Volume Controller: Updated for IBM Storage Virtualize Version 8.6, SG24-8543
IBM FlashSystem Safeguarded Copy Implementation Guide, REDP-5654
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM Storage Virtualize Family Storage Replication Adapter documentation
https://www.ibm.com/docs/en/spectrumvirtual-sra
VMware Reference Architectures
https://core.vmware.com/reference-architectures
V8.6.0.x Configuration Limits and Restrictions for IBM FlashSystem 9100 and 9200
https://www.ibm.com/support/pages/v860x-configuration-limits-and-restrictions-ibm-flashsystem-9100-and-9200