Dell EMC VPLEX CLI Reference Guide R6.2
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
Tables 11
Preface 13
Chapter 2 Commands 37
advadm dismantle............................................................................................. 44
alias.................................................................................................................. 45
amp register......................................................................................................47
amp unregister..................................................................................................49
array claim........................................................................................................ 50
array forget....................................................................................................... 51
array re-discover.............................................................................................. 52
array used-by....................................................................................................54
authentication directory-service configure....................................................... 55
authentication directory-service map............................................................... 59
authentication directory-service show..............................................................60
authentication directory-service unconfigure................................................... 62
authentication directory-service unmap............................................................62
back-end degraded list......................................................................................64
back-end degraded recover.............................................................................. 65
batch-migrate cancel........................................................................................ 66
batch-migrate check-plan................................................................................. 67
batch-migrate clean..........................................................................................69
batch-migrate commit...................................................................................... 70
batch-migrate create-plan................................................................................. 71
batch-migrate pause......................................................................................... 73
batch-migrate remove.......................................................................................74
batch-migrate resume.......................................................................................75
batch-migrate start...........................................................................................76
batch-migrate summary.................................................................................... 78
battery-conditioning disable.............................................................................. 81
battery-conditioning enable.............................................................................. 83
battery-conditioning manual-cycle cancel-request........................................... 85
battery-conditioning manual-cycle request.......................................................86
battery-conditioning set-schedule.................................................................... 87
battery-conditioning summary.......................................................................... 90
cache-invalidate................................................................................................92
cache-invalidate-status.................................................................................... 95
capture begin....................................................................................................98
capture end.......................................................................................................99
capture pause..................................................................................................100
capture replay................................................................................................. 100
capture resume................................................................................................ 101
cd.................................................................................................................... 102
chart create.....................................................................................................103
cluster add...................................................................................................... 105
cluster cacheflush........................................................................................... 106
cluster configdump.......................................................................................... 107
cluster expel.................................................................................................... 109
cluster forget................................................................................................... 110
cluster restart-local-cluster.............................................................................. 111
cluster show-remote-devices...........................................................................112
cluster shutdown..............................................................................................114
cluster status................................................................................................... 116
cluster stop-local-cluster................................................................................. 119
cluster summary...............................................................................................119
cluster unexpel................................................................................................ 123
cluster-witness configure................................................................................ 124
cluster-witness disable.................................................................................... 126
cluster-witness enable.....................................................................................128
collect-diagnostics.......................................................................................... 133
configuration complete-system-setup.............................................................135
configuration configure-auth-service.............................................................. 136
configuration connect-local-directors..............................................................137
configuration connect-remote-directors......................................................... 138
configuration continue-system-setup.............................................................. 139
configuration cw-vpn-configure...................................................................... 139
configuration cw-change-password................................................................. 141
configuration cw-vpn-reset..............................................................................141
configuration enable-front-end-ports.............................................................. 143
configuration event-notices-reports config..................................................... 144
configuration event-notices-reports reset....................................................... 144
configuration event-notices-reports-show...................................................... 145
configuration flashdir-backup disable.............................................................. 146
configuration flashdir-backup enable............................................................... 147
configuration get-product-type....................................................................... 147
configuration join-clusters............................................................................... 148
configuration metadata-backup.......................................................................149
configuration register-product......................................................................... 151
configuration remote-clusters add-addresses................................................. 152
configuration remote-clusters clear-addresses............................................... 154
configuration show-meta-volume-candidates................................................. 155
configuration subnet clear............................................................................... 156
configuration subnet remote-subnet add.........................................................158
configuration subnet remote-subnet remove...................................................159
configuration sync-time................................................................................... 161
configuration sync-time-clear..........................................................................162
configuration sync-time-show......................................................................... 163
configuration system-reset..............................................................................164
configuration system-setup.............................................................................165
configuration upgrade-meta-slot-count.......................................................... 166
connect........................................................................................................... 167
connectivity director....................................................................................... 169
connectivity list all........................................................................................... 170
connectivity list directors................................................................................. 171
connectivity list initiators.................................................................................172
connectivity list storage-volumes.................................................................... 173
connectivity show............................................................................................174
connectivity validate-be...................................................................................174
connectivity validate-local-com....................................................................... 177
connectivity validate-wan-com........................................................................178
consistency-group add-virtual-volumes...........................................................179
consistency-group choose-winner...................................................................180
consistency-group convert-to-local................................................................ 183
consistency-group create................................................................................ 184
consistency-group destroy.............................................................................. 186
consistency-group list-eligible-virtual-volumes................................................187
consistency-group remove-virtual-volumes.....................................................188
consistency-group resolve-conflicting-detach................................................ 190
consistency-group resume-at-loser.................................................................192
consistency-group set-detach-rule no-automatic-winner................................194
consistency-group set-detach-rule winner...................................................... 195
consistency-group summary............................................................................197
date................................................................................................................. 197
describe...........................................................................................................198
device attach-mirror........................................................................................199
device collapse................................................................................................ 201
device detach-mirror...................................................................................... 202
device mirror-isolation auto-unisolation disable.............................................. 206
device mirror-isolation auto-unisolation enable............................................... 207
device mirror-isolation disable........................................................................ 209
device mirror-isolation enable.......................................................................... 211
device mirror-isolation show............................................................................ 213
device resume-link-down.................................................................................215
device resume-link-up......................................................................................217
device resurrect-dead-storage-volumes..........................................................218
director appcon............................................................................................... 219
director appdump............................................................................................220
director appstatus...........................................................................................222
director commission........................................................................................223
director decommission.................................................................................... 224
director fc-port-stats......................................................................................224
director firmware show-banks........................................................................ 226
user remove.....................................................................................................519
user reset........................................................................................................520
validate-system-configuration......................................................................... 521
vault go...........................................................................................................522
vault overrideUnvaultQuorum......................................................................... 523
vault status..................................................................................................... 525
verify fibre-channel-switches......................................................................... 529
version............................................................................................................ 529
virtual-volume create...................................................................................... 533
virtual-volume destroy.................................................................................... 540
virtual-volume expand..................................................................................... 541
virtual-volume list-thin....................................................................................545
virtual-volume provision..................................................................................546
virtual-volume re-initialize...............................................................................548
virtual-volume set-thin-enabled...................................................................... 548
virtual-volume summary..................................................................................549
vpn restart...................................................................................................... 553
vpn start......................................................................................................... 553
vpn status....................................................................................................... 554
vpn stop..........................................................................................................555
wait................................................................................................................ 555
webserver.......................................................................................................556
Index 557
1 Typographical conventions................................................................................................ 14
2 Deprecated options to commands..................................................................................... 19
3 Default password policies..................................................................................................23
4 authentication directory-service show field descriptions................................................... 61
5 batch migration summary field descriptions...................................................................... 79
6 battery conditioning field descriptions.............................................................................. 88
7 battery conditioning summary field descriptions .............................................................. 90
8 Important cache invalidate status fields............................................................................ 96
9 cluster status field descriptions........................................................................................116
10 cluster summary field descriptions.................................................................................. 120
11 cluster witness display fields............................................................................................129
12 Connection Types............................................................................................................145
13 Supported configuration parameters............................................................................... 214
14 director firmware show-banks field descriptions ............................................................ 227
15 ds summary field descriptions ........................................................................................ 258
16 extent summary field descriptions ..................................................................................298
17 getsysinfo field descriptions............................................................................................302
18 local device summary field descriptions ......................................................................... 345
19 logging volume display fields........................................................................................... 353
20 recoverpoint display fields.............................................................................................. 425
21 rp summary display fields ............................................................................................... 432
22 Certificate parameters.................................................................................................... 463
23 Create hints files for storage-volume naming .................................................................494
24 storage-volume summary field descriptions.................................................................... 503
25 Vault state field descriptions...........................................................................................526
26 Software components.....................................................................................................530
27 virtual-volume field descriptions..................................................................................... 534
28 virtual-volume summary field descriptions...................................................................... 550
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC technical support professional if a product does not function properly or
does not function as described in this document.
Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://www.dell.com/support) to ensure that you are using the latest version of this
document.
Purpose
This document is part of the VPLEX documentation set. It describes the VPLEX features and use
cases, configuration options, the VPLEX software and its upgrade, and provides an overview of the hardware.
Audience
This guide is intended for customers who want to understand the software and hardware
features of VPLEX, its use cases, product offerings, and configuration options.
Related documents (available on Dell EMC Online Support) include:
l VPLEX Release Notes for GeoSynchrony Releases
l VPLEX Product Guide
l VPLEX Hardware Environment Setup Guide
l VPLEX Configuration Worksheet
l VPLEX Configuration Guide
l VPLEX Security Configuration Guide
l VPLEX CLI Reference Guide
l VPLEX Administration Guide
l Unisphere for VPLEX Help
l VPLEX Element Manager API Guide
l VPLEX Open-Source Licenses
l VPLEX GPL3 Open-Source Licenses
l Procedures provided through the SolVe Desktop
l Dell EMC Host Connectivity Guides
l Dell EMC VPLEX Hardware Installation Guide
l Various best practices technical notes available on Dell EMC Online Support
Special notice conventions used in this document
Dell EMC uses the following conventions for special notices:
DANGER Indicates a hazardous situation which, if not avoided, will result in death or serious
injury.
WARNING Indicates a hazardous situation which, if not avoided, could result in death or
serious injury.
CAUTION Indicates a hazardous situation which, if not avoided, could result in minor or
moderate injury.
NOTICE Addresses practices not related to personal injury.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Interactively engage online with customers, partners, and certified professionals for all
Dell EMC products.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of
the user publications. Send your opinions of this document to techpubcomments@emc.com.
The following commands and contexts have been added, removed, or changed in this release.
New commands in this release
The following commands have been added to this release:
l back-end degraded list
l back-end degraded recover
l esrs import-certificate
l esrs register
l esrs status
l esrs un-register
Changed commands
There are no changed commands in this release.
Removed commands
Starting with this release, connectivity window set and connectivity window show
commands are removed.
New or changed contexts
Starting with this release, the cluster configuration limits are updated for the health-check limits on a
metro system.
To increase the overall scale and performance of VPLEX and to reduce product complexity,
commands, options, or contexts might be deprecated or changed. These options and
arguments will be removed from the CLI in future releases of VPLEX.
The deprecated options and arguments, and the workarounds that mitigate each deprecation, are as
follows:
extent create
  Deprecated: the optional argument [-n|--num-extents]. Specifies the number of extents to be created on a specific storage volume. When this argument is not used, VPLEX creates a single extent for the storage volume.
  Workaround: Use array-native slicing capabilities.
local-device create
  Deprecated: the raid-0 value of the optional argument [-g|--geometry]. Specifies the creation of a local RAID 0 device.
  Workaround: Use array-native striping capabilities.
local-device create
  Deprecated: the raid-c value of the optional argument [-g|--geometry]. Specifies the creation of a local RAID C device.
  Workaround: Use array-native striping capabilities.
logging-volume create
  Deprecated: the raid-0 value of the optional argument [-g|--geometry]. Specifies the creation of logging volumes using RAID 0 (striping) internally.
  Workaround: Use array-native striping capabilities.
storage-tool compose
  Deprecated: the raid-0 value of the optional argument [-g|--geometry]. Specifies the creation of a local RAID 0 device.
  Workaround: Use array-native striping capabilities.
storage-tool compose
  Deprecated: the raid-c value of the optional argument [-g|--geometry]. Specifies the creation of a local RAID C device.
  Workaround: Use array-native striping capabilities.
virtual-volume expand
  Deprecated: the option [-e|--extent]. The target local device or extent to add to the virtual volume using expansion.
  Workaround: Use the storage-volume method of expansion.
In addition to these options, the prod script that enables the slicing-at-the-top functionality will be
deprecated in future VPLEX releases. Use the array-native slicing capabilities to mitigate this
deprecation.
login as: service
service@ManagementServer:~> vplexcli
VPlexcli:/>
Note: Starting the VPLEX CLI no longer requires a username and password. Please
verify that no automated scripts supply usernames or passwords.
Results
You are now logged in to the VPLEX CLI.
Password Policies
The management server uses a Pluggable Authentication Module (PAM) infrastructure to enforce
minimum password quality.
For more information about technology used for password protection, refer to the Security
Configuration Guide.
Note the following:
l Password policies do not apply to users configured using the LDAP server.
l The Password inactive days policy does not apply to the admin account to protect the admin
user from account lockouts.
l During the management server software upgrade, an existing user’s password is not changed.
Only the user’s password age information changes.
l You must be an admin user to configure a password policy.
The following table lists and describes the password policies and the default values.
Minimum password age
  The minimum number of days that must pass after a password change before the password can be changed again.
  Default: 1 (0 for the service account)
Maximum password age
  The maximum number of days that a password can be used since the last password change. After the maximum number of days, the account is locked and the user must contact the admin user to reset the password.
  Default: 90 (3650 days for the service account)
Password expiry warning
  The number of days before the password expires during which a warning message indicating that the password must be changed is displayed.
  Default: 15 (30 days for the service account)
The password policy for existing admin, service, and customer-created user accounts is updated
automatically as part of the upgrade to this release. See the VPLEX Security Configuration Guide for
information about account passwords.
VPlexcli:/clusters> exit
Connection closed by foreign host.
For example, specifying the literal pattern -c together with the * wildcard returns all
contexts containing data that matches that literal. In this case, the
command retrieves the contexts containing the -c option:
VPlexcli:/> find -c *
[/alerts, /clusters, /data-migrations, /distributed-storage, /
engines,
/management-server, /monitoring, /notifications, /recoverpoint, /
security,
/system-defaults]
Note: find searches the contents of the current directory. In the example above,
since the current directory is the root, find searches the entire context tree.
Using a wildcard returns only results matching a particular string pattern. For example,
specifying the pattern find /clusters/cluster-1/devices/rC* returns the
following contexts matching this pattern.
[/clusters/cluster-1/devices/rC_C1_0002, /clusters/cluster-1/devices/
rC_C1_0003]
VPlexcli:/> cd /clusters/cluster-1/devices/
VPlexcli:/clusters/cluster-1/devices>
For example, to navigate from the root (/) context to the connectivity context to view
member ports for a specified FC port group:
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-
group-1-0/
member-ports> ll
Alternatively, type all the context identifiers in a single command. For example, the above
navigation can be typed as:
VPlexcli:/> cd clusters/cluster-1/connectivity/back-end/port-groups/fc-
port-group-1-0/member-ports
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/fc-port-
group-1-0/member-ports> ll
Use the cd command with no arguments or followed by a space and three periods (cd ...) to
return to the root context:
VPlexcli:/engines/engine-1-1/fans> cd
VPlexcli:/>
Use the cd command followed by a space and two periods (cd ..) to return to the context
immediately above the current context:
VPlexcli:/monitoring/directors/director-1-1-B> cd ..
VPlexcli:/monitoring/directors>
To navigate directly to a context from any other context use the cd command and specify the
absolute context path. In the following example, the cd command changes the context from the
data migrations/extent-migrations context to the engines/engine-1/fans context:
VPlexcli:/data-migrations/extent-migrations> cd /engines/engine-1-1/fans/
VPlexcli:/engines/engine-1-1/fans>
VPlexcli:/clusters/cluster-1> dirs
[/clusters/cluster-1, /, /, /engines/engine-1-1/directors/director-1-1-A/
hardware/ports/A5-GE01, /]
l Use the popd command to remove the last directory saved by the pushd command and jump
to the new top directory.
In the following example, the dirs command displays the context stack saved by the pushd
command, and the popd command removes the top directory, and jumps to the new top
directory:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> dirs
[/engines/engine-1-1/directors/director-1-1-A, /monitoring/directors/
director-1-1-A]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> popd
[/engines/engine-1-1/directors/director-1-1-A]
VPlexcli:/monitoring/directors/director-1-1-A>
VPlexcli:/> cd /monitoring/directors/director-1-1-B/monitors/
VPlexcli:/monitoring/directors/director-1-1-B/monitors>
l The ls command displays the sub-contexts immediately accessible from the current context:
VPlexcli:/> ls
clusters data-migrations distributed-storage
engines management-server monitoring
notifications system-defaults
VPlexcli:/data-migrations> ls -l
Name Description
----------------- -------------------------------------
device-migrations Contains all the device migrations in the system.
extent-migrations Contains all the extent migrations in the system.
l For contexts where the next lowest level is a list of individual objects, the ls command
displays a list of the objects:
VPlexcli:/clusters/cluster-1/exports/ports> ls
P000000003B2017DF-A0-FC00 P000000003B2017DF-A0-FC01
P000000003B2017DF-A0-FC02 P000000003B2017DF-A0-FC03
P000000003B3017DF-B0-FC00 P000000003B3017DF-B0-FC01
P000000003B3017DF-B0-FC02 P000000003B3017DF-B0-FC03
l The cd command followed by a <Tab> displays the same information as ls at the context
level.
For example, type cd and press <Tab> in the data-migrations context to display available
options:
VPlexcli:/data-migrations> cd <Tab>
device-migrations/ extent-migrations/
l The tree command displays the immediate sub-contexts in the tree using the current context
as the root:
l The tree -e command displays immediate sub-contexts in the tree and any sub-contexts
under them:
Note: For contexts where the next level down the tree is a list of objects, the tree command
displays the list. This output can be very long.
Use the help -G command to display a list of available commands in the current context excluding
the global commands:
VPlexcli:/notifications> help -G
Commands specific to this context and below:
call-home snmp-trap
Some contexts “inherit” commands from their parent context. These commands can be used in
both the current context and the context immediately above in the tree:
VPlexcli:/distributed-storage/bindings> help -G
Commands inherited from parent contexts:
dd rule rule-set summary
Some commands are loosely grouped by function. For example, the commands to create and
manage performance monitors start with the word “monitor”.
Use the <Tab> key to display the commands within a command group. For example, to display the
commands that start with the word “monitor”, type “monitor” followed by the <Tab> key:
Page output
For large configurations, output from some commands can reach hundreds of lines.
Paging displays long output generated by the ll and ls commands one page at a time:
To enable paging, add -p at the end of any command:
VPlexcli:/clusters/cluster-1/storage-elements> ls storage-volumes -p
One page of output is displayed. The following message is at the bottom of the first page:
Tab completion
Use the Tab key to:
l Complete a command
l Display valid contexts and commands
l Display command arguments
Complete a command
Use the Tab key to automatically complete a path or command until the path or command is no
longer unique.
For example, to navigate to the UPS context on a single cluster (named cluster-1), type:
cd /clusters/cluster-1/uninterruptible-power-supplies/
Alternatively, type:
cd /clusters/
There is only one cluster (it is unique). Press <Tab> to automatically specify the cluster:
cd /clusters/cluster-1/
Continue typing and pressing <Tab> to complete the remaining path:
cd /clusters/cluster-1/uninterruptible-power-supplies/
Wildcards
The command line interface includes the following wildcards:
l * - matches any number of characters.
l ** - matches all contexts and entities between two specified objects.
l ? - matches any single character.
l [a|b|c] - matches any one of the single characters a, b, or c.
Note: Use the find command with wildcards to find context names and data matching
specific patterns in the CLI context tree. See Context Tree Searching for more information.
* wildcard
Use the * wildcard to apply a single command to multiple objects of the same type (for example,
directors or ports).
For example, to display the status of ports on each director in a cluster, without using wildcards:
ll engines/engine-1-1/directors/director-1-1-A/hardware/ports
ll engines/engine-1-1/directors/director-1-1-B/hardware/ports
ll engines/engine-1-2/directors/director-1-2-A/hardware/ports
ll engines/engine-1-2/directors/director-1-2-B/hardware/ports
.
.
.
Alternatively:
l Use one * wildcard to specify all engines, and
l Use a second * wildcard to specify all directors:
ll engines/engine-1-*/directors/*/hardware/ports
** wildcard
Use the ** wildcard to match all contexts and entities between two specified objects.
For example, to display all director ports associated with all engines without using wildcards:
ll /engines/engine-1-1/directors/director-1-1-A/hardware/ports
.
.
.
ll /engines/engine-1-1/directors/director-1-1-B/hardware/ports
.
.
.
Alternatively, use a ** wildcard to specify all contexts and entities between /engines and ports:
ll /engines/**/ports
? wildcard
Use the ? wildcard to match any single character (number or letter). For example:
ls /storage-elements/extents/0x1?[8|9]
[a|b|c] wildcard
Use the [a|b|c] wildcard to match any one of the characters listed in the brackets. For example:
ll engines/engine-1-1/directors/director-1-1-A/hardware/ports/A[0-1]
displays only ports with names starting with an A, followed by a 0 or a 1.
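The wildcard semantics above can be approximated in Python for illustration. This is a sketch of the matching rules only, not VPLEX code; the assumption is that * and ? do not cross the / context separator, while ** does, and that | inside brackets is an optional separator:

```python
import re

def vplex_glob_to_regex(pattern: str) -> str:
    """Translate a VPlexcli-style glob into an anchored regular expression.

    Semantics (as described above):
      **      matches all contexts between two objects (crosses '/')
      *       matches any number of characters within one context name
      ?       matches any single character
      [a|b|c] matches any one of the listed characters ('|' is optional)
    """
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        elif pattern[i] == "[":
            j = pattern.index("]", i)
            # drop the optional '|' separators inside the brackets
            out.append("[" + pattern[i + 1:j].replace("|", "") + "]")
            i = j + 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "^" + "".join(out) + "$"

def matches(pattern: str, path: str) -> bool:
    return re.match(vplex_glob_to_regex(pattern), path) is not None
```

For example, `matches("/engines/**/ports", "/engines/engine-1-1/directors/director-1-1-A/hardware/ports")` returns True, while the single `*` form `"/engines/*/ports"` does not match that path because `*` stops at the context separator.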
Names
Major components are named as follows:
Clusters
VPLEX Local configurations have a single cluster, with a cluster ID of 1. VPLEX Metro
configurations have two clusters, with cluster IDs of 1 and 2.
VPlexcli:/clusters/cluster-1/
Engines
Engines are named engine-n-n, where the first value is the cluster ID (1 or 2) and the second
value is the engine ID (1-4).
VPlexcli:/engines/engine-1-2/
Directors
Directors are named director-n-n-n, where the first value is the cluster ID (1 or 2), the second
value is the engine ID (1-4), and the third value is the director (A or B).
VPlexcli:/engines/engine-1-1/directors/director-1-1-A
For objects that can have user-defined names, those names must comply with the following rules:
l Can contain uppercase and lowercase letters, numbers, and underscores
l No spaces
l Cannot start with a number
l No more than 63 characters
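The four rules above can be captured in a single regular expression. This is an illustration of the rules as listed, not VPLEX code; whether a leading underscore is accepted is an interpretation (the rules forbid only a leading number):

```python
import re

# Letters, numbers, and underscores only; no spaces; cannot start with a
# number; no more than 63 characters total.
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,62}$")

def is_valid_name(name: str) -> bool:
    """Return True if 'name' satisfies the user-defined name rules above."""
    return NAME_RE.match(name) is not None
```

For example, `is_valid_name("my_device_01")` is True, while `is_valid_name("1device")` and `is_valid_name("bad name")` are False.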
Specifying addresses
VPLEX uses both IPv4 and IPv6 addressing. Many commands accept addresses in either IPv4 or
IPv6 format.
See the Dell EMC VPLEX Administration Guide for usage rules and address formats.
Command globbing
Command globbing combines wildcards and context identifiers in a single command. Globbing can
address multiple entities using a single command.
Example 1
To display the status of all the director ports on a large configuration without wildcards, type the
following command for every director:
ll /engines/engine-1-Enclosure_ID/directors/director_name/hardware/ports
Using * wildcards, a single command addresses all engines and all directors:
ll /engines/engine-1-*/directors/*/hardware/ports
Using the ** wildcard, the command is shortened further:
ll /**/ports
Example 2
In the following example, a single command enables ports in all engines and all directors (A and B)
whose names include 0-FC and 1-FC:
Positional command arguments
Some command arguments are positional: they can be specified either with their identifiers (in any
order) or without their identifiers (in the order specified by the command syntax). For example, the
alias command has two positional arguments:
alias
[-n|--name] alias_name
[-t|--to] “string of commands in quotes”
Type the command with the argument identifiers in any order (not as specified by the
syntax):
or,
Type the command without the argument identifiers in the order specified by the command
syntax:
--verbose argument
The --verbose argument displays additional information for some commands.
Search command history
Press Ctrl-r to search the commands executed in the current session. The prompt changes to:
(reverse-i-search)'':
Type the first letter of the command to search for. After you type the first letter, the search
tool displays a list of possible matches.
Use the history command to display a complete list of commands executed in the current
session:
VPlexcli:/engines/engine-0-0/directors> history
0 cd engines/engine-0-0/directors
1 extent unclaim *
2 ls
3 ls -l
4 extent claim *
5 ls
6 ls -l
7 ls -la
Use the history nn command to display the last nn entries in the list:
VPlexcli:/clusters/cluster-1> history 22
478 ls storage-volumes -p
479 cd clusters/cluster-1/
480 ls storage-volumes
481 cd storage-elements/
482 ls storage-volumes -p
Get help
l Use the help or ? command with no arguments to display all the commands available in the
current context, including global commands.
l Use the help or ? command with the -G argument to display all the commands available in the
current context, excluding global commands:
VPlexcli:/clusters> help -G
Commands specific to this context and below:
add cacheflush configdump expel forget shutdown summary unexpel
l Use the help command or command --help to display help for the specified command.
l advadm dismantle..................................................................................................................44
l alias....................................................................................................................................... 45
l amp register.......................................................................................................................... 47
l amp unregister...................................................................................................................... 49
l array claim.............................................................................................................................50
l array forget............................................................................................................................51
l array re-discover................................................................................................................... 52
l array used-by........................................................................................................................ 54
l authentication directory-service configure............................................................................55
l authentication directory-service map....................................................................................59
l authentication directory-service show.................................................................................. 60
l authentication directory-service unconfigure........................................................................62
l authentication directory-service unmap................................................................................ 62
l back-end degraded list.......................................................................................................... 64
l back-end degraded recover...................................................................................................65
l batch-migrate cancel............................................................................................................ 66
l batch-migrate check-plan..................................................................................................... 67
l batch-migrate clean.............................................................................................................. 69
l batch-migrate commit........................................................................................................... 70
l batch-migrate create-plan..................................................................................................... 71
l batch-migrate pause..............................................................................................................73
l batch-migrate remove........................................................................................................... 74
l batch-migrate resume........................................................................................................... 75
l batch-migrate start............................................................................................................... 76
l batch-migrate summary........................................................................................................ 78
l battery-conditioning disable...................................................................................................81
l battery-conditioning enable...................................................................................................83
l battery-conditioning manual-cycle cancel-request................................................................85
l battery-conditioning manual-cycle request........................................................................... 86
l battery-conditioning set-schedule.........................................................................................87
l battery-conditioning summary.............................................................................................. 90
l cache-invalidate.................................................................................................................... 92
l cache-invalidate-status......................................................................................................... 95
l capture begin........................................................................................................................ 98
l capture end........................................................................................................................... 99
l capture pause...................................................................................................................... 100
l capture replay......................................................................................................................100
l capture resume.....................................................................................................................101
l cd.........................................................................................................................................102
l chart create......................................................................................................................... 103
l cluster add........................................................................................................................... 105
l cluster cacheflush................................................................................................................106
l cluster configdump.............................................................................................................. 107
l cluster expel........................................................................................................................ 109
l cluster forget........................................................................................................................110
l cluster restart-local-cluster...................................................................................................111
l cluster show-remote-devices............................................................................................... 112
l cluster shutdown.................................................................................................................. 114
l cluster status........................................................................................................................116
l cluster stop-local-cluster......................................................................................................119
l cluster summary................................................................................................................... 119
l cluster unexpel.....................................................................................................................123
l cluster-witness configure.....................................................................................................124
l cluster-witness disable.........................................................................................................126
l cluster-witness enable......................................................................................................... 128
l collect-diagnostics............................................................................................................... 133
l configuration complete-system-setup................................................................................. 135
l configuration configure-auth-service...................................................................................136
l configuration connect-local-directors.................................................................................. 137
l configuration connect-remote-directors..............................................................................138
l configuration continue-system-setup.................................................................................. 139
l configuration cw-vpn-configure...........................................................................................139
l configuration cw-change-password......................................................................................141
l configuration cw-vpn-reset.................................................................................................. 141
l configuration enable-front-end-ports.................................................................................. 143
l configuration event-notices-reports config..........................................................................144
l configuration event-notices-reports reset........................................................................... 144
l configuration event-notices-reports-show.......................................................................... 145
l configuration flashdir-backup disable...................................................................................146
l configuration flashdir-backup enable....................................................................................147
l configuration get-product-type............................................................................................147
l configuration join-clusters................................................................................................... 148
l configuration metadata-backup........................................................................................... 149
l configuration register-product............................................................................................. 151
l configuration remote-clusters add-addresses......................................................................152
l configuration remote-clusters clear-addresses.................................................................... 154
l configuration show-meta-volume-candidates...................................................................... 155
l configuration subnet clear................................................................................................... 156
l configuration subnet remote-subnet add............................................................................. 158
l configuration subnet remote-subnet remove....................................................................... 159
l configuration sync-time........................................................................................................161
l configuration sync-time-clear.............................................................................................. 162
l configuration sync-time-show............................................................................................. 163
l configuration system-reset.................................................................................................. 164
l configuration system-setup................................................................................................. 165
l configuration upgrade-meta-slot-count............................................................................... 166
l connect................................................................................................................................167
l connectivity director............................................................................................................169
l connectivity list all............................................................................................................... 170
l connectivity list directors..................................................................................................... 171
l connectivity list initiators..................................................................................................... 172
l connectivity list storage-volumes.........................................................................................173
l connectivity show................................................................................................................ 174
l connectivity validate-be....................................................................................................... 174
l connectivity validate-local-com............................................................................................177
l connectivity validate-wan-com............................................................................................ 178
l consistency-group add-virtual-volumes............................................................................... 179
l consistency-group choose-winner....................................................................................... 180
l consistency-group convert-to-local..................................................................................... 183
l consistency-group create.................................................................................................... 184
l consistency-group destroy.................................................................................................. 186
l snmp-agent start.................................................................................................................482
l snmp-agent status.............................................................................................................. 483
l snmp-agent stop................................................................................................................. 483
l snmp-agent unconfigure..................................................................................................... 484
l source................................................................................................................................. 484
l storage-tool dismantle........................................................................................................ 485
l storage-tool compose......................................................................................................... 486
l storage-volume auto-unbanish-interval............................................................................... 489
l storage-volume claim.......................................................................................................... 490
l storage-volume claimingwizard........................................................................................... 493
l storage-volume find-array...................................................................................................496
l storage-volume forget.........................................................................................................498
l storage-volume list-banished.............................................................................................. 499
l storage-volume list-thin-capable.........................................................................................500
l storage-volume resurrect.................................................................................................... 501
l storage-volume summary.................................................................................................... 503
l storage-volume unbanish.................................................................................................... 507
l storage-volume unclaim...................................................................................................... 508
l storage-volume used-by...................................................................................................... 510
l syrcollect..............................................................................................................................511
l tree...................................................................................................................................... 512
l unalias..................................................................................................................................513
l user add............................................................................................................................... 514
l user event-server add-user..................................................................................................515
l user event-server change-password.................................................................................... 516
l user list................................................................................................................................ 517
l user passwd......................................................................................................................... 518
l user remove......................................................................................................................... 519
l user reset............................................................................................................................ 520
l validate-system-configuration............................................................................................. 521
l vault go............................................................................................................................... 522
l vault overrideUnvaultQuorum..............................................................................................523
l vault status......................................................................................................................... 525
l verify fibre-channel-switches..............................................................................................529
l version................................................................................................................................ 529
l virtual-volume create.......................................................................................................... 533
l virtual-volume destroy........................................................................................................ 540
l virtual-volume expand..........................................................................................................541
l virtual-volume list-thin........................................................................................................ 545
l virtual-volume provision...................................................................................................... 546
l virtual-volume re-initialize................................................................................................... 548
l virtual-volume set-thin-enabled.......................................................................................... 548
l virtual-volume summary...................................................................................................... 549
l vpn restart.......................................................................................................................... 553
l vpn start..............................................................................................................................553
l vpn status........................................................................................................................... 554
l vpn stop.............................................................................................................................. 555
l wait..................................................................................................................................... 555
l webserver........................................................................................................................... 556
advadm dismantle
Dismantles storage objects down to the storage-volume level, and optionally unclaims the storage
volumes.
Contexts
All contexts.
Syntax
advadm dismantle
[-r|--devices] context path,context path
[-v|--virtual-volumes] context path,context path
[--unclaim-storage-volumes] [-f|--force]
Arguments
Required arguments
[-r|--devices] context path,context path... One or more devices to dismantle. Entries must be
separated by commas. You can use glob patterns.
[-v|--virtual-volumes] context path,context path... One or more virtual volumes to dismantle.
Entries must be separated by commas. You can use glob patterns.
Optional Arguments
--unclaim-storage-volumes Unclaim the storage volumes after the dismantle is completed.
[-f|--force] Force the dismantle without asking for confirmation. Allows the command to be run
from a non-interactive script.
Description
To dismantle a virtual volume, the specified volume must:
l Not be exported to a storage view.
l Not be a member of a consistency group.
Virtual volumes exported through a storage view or belonging to a consistency group are not
eligible to be dismantled. The command skips any volumes that are not eligible, prints a message
listing the skipped volumes, and dismantles those volumes that are eligible.
If the --force argument is used, no confirmation is displayed before the dismantle.
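The eligibility filter described above can be sketched as follows. The volume records and field names here are hypothetical illustrations, not VPLEX data structures:

```python
def partition_volumes(volumes):
    """Split volumes into (eligible, skipped) per the dismantle rules:
    a volume is skipped if it is exported to a storage view or is a
    member of a consistency group; all other volumes are dismantled."""
    eligible, skipped = [], []
    for v in volumes:
        if v["exported"] or v["consistency_group"] is not None:
            skipped.append(v["name"])
        else:
            eligible.append(v["name"])
    return eligible, skipped
```

Given one unexported volume and two ineligible ones, only the first is returned as eligible; the other two appear in the skipped list that the command reports.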
Examples
In the following example, the specified volume is dismantled:
In the following example, the specified volumes are NOT dismantled because they are exported or
are members of a consistency group:
No virtual-volumes to dismantle.
See also
l ds dd create
l iscsi sendtargets add
l virtual-volume create
alias
Creates a command alias.
Contexts
All contexts.
Syntax
alias
[-n|--name] name
[-t|--to] "commands and arguments"
Arguments
Required arguments
[-n|--name] name * The name of the new alias.
l Up to 63 characters.
l May contain letters, numbers, and underscores ('_').
l Cannot start with a number.
[-t|--to] "commands and arguments" * The string of commands (and their arguments) that the
alias invokes. Must be enclosed in quotes.
* - argument is positional.
Description
Aliases are shortcuts for frequently used commands or commands that require long strings of
context identifiers.
Use the alias command with no arguments to display a list of all aliases configured on the system.
Use the alias name command to display the underlying string of the specified alias.
Use the alias name “string of CLI commands" command to create an alias with the specified
name that invokes the specified string of commands.
Use the unalias command to delete an alias.
The following aliases are pre-configured:
l ? Substitutes for the help command.
l ll Substitutes for the ls -a command.
l quit Substitutes for the exit command.
An alias that executes correctly in one context may conflict with an existing command when
executed from another context. If the syntax is identical, the pre-existing command is executed
before the alias, using the following match order:
1. Local command in the current context.
2. Global command in the current context.
3. Root context is searched for a match.
An alias set at the command line does not persist when the user interface is restarted. To create
an alias that persists, add it to the /var/log/VPlex/cli/VPlexcli-init file.
Make sure that the alias name is unique, that is, not identical to an existing command or alias.
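The match order above can be sketched as a simple lookup chain. The command tables here are hypothetical illustrations, not VPLEX internals:

```python
def resolve(name, local_cmds, global_cmds, root_cmds, aliases):
    """Resolve a typed name: local command, then global command, then the
    root context, and only then an alias with the same name."""
    for table in (local_cmds, global_cmds, root_cmds):
        if name in table:
            return table[name]   # a pre-existing command wins
    return aliases.get(name)     # otherwise fall back to the alias
```

For example, an alias named ls never runs where a local ls command exists, but an alias with a unique name such as mon resolves normally.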
Examples
Create an alias:
VPlexcli:/> alias mon-Dir-1-1-B "cd /monitoring/directors/director-1-1-B"
Display the configured aliases:
VPlexcli:/> alias
Name Description
--------------- -------------------------------------------------------
? Substitutes the 'help' command.
mon-Dir-1-1-B Substitutes the 'cd /monitoring/directors/director-1-1-B'
ll Substitutes the 'ls -al' command.
quit Substitutes the 'exit' command.
Use an alias:
VPlexcli:/> mon-Dir-1-1-B
VPlexcli:/monitoring/directors/director-1-1-B>
See also
l ls
l unalias
amp register
Associates an Array Management Provider with a single cluster.
Contexts
/clusters/ClusterName/storage-elements/array-providers
Syntax
amp register
[-n | --name] name
[-i | --ip-address] ip-address of Array Management Provider
[-a | --array]=storage-array
[-c | --cluster]cluster context path
[-s | --use-ssl]
[-h | --help]
[--verbose]
[-u | --username]user name
[-t | --provider-type]provider type
[-p | --port-number]port number
Arguments
Required arguments
[-n | --name] name * The name of the Array Management Provider.
[-i | --ip-address] ip-address * The IP address of the Array Management Provider.
[-u | --username] user name * The user name required for connecting to the Array Management
Provider.
[-t | --provider-type] provider type * The type of Array Management Provider. SMI-S and REST
are currently the only supported provider types.
[-p | --port-number] port number * The port number used with the IP address to construct the
URL that identifies the Array Management Provider.
Optional arguments
[-a | --array] storage-array The storage array managed by the provider.
[-c | --cluster] cluster context path The cluster associated with the Array Management Provider.
[-s | --use-ssl] Use SSL to communicate with the Array Management Provider.
[-h | --help] Displays the usage for this command.
[--verbose] Provides more output during command execution.
* - positional argument
Description
An Array Management Provider (AMP) is an external array management endpoint that VPLEX
communicates with to execute operations on individual arrays. Examples of AMPs include external
SMI-S and REST providers.
An AMP exposes a management protocol/API through which array operations, such as provisioning
and snap-and-clone, can be executed.
An AMP manages one or more arrays. For example, an SMI-S provider can manage multiple arrays.
Examples
Registering (adding) an array provider:
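A hypothetical session (the provider name, IP address, port, and user name are illustrative only, not defaults):

```
VPlexcli:/clusters/cluster-1/storage-elements/array-providers> amp register --name SMIS-1 --ip-address 192.168.10.10 --port-number 5989 --username admin --provider-type SMI-S --use-ssl
```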
See also
l amp unregister
amp unregister
Unregisters an Array Management Provider. The Array Management Provider is no longer available for any operations after it is unregistered.
Contexts
/clusters/ClusterName/storage-elements/array-providers
Syntax
amp unregister
[-n|--name] name
[-c|--cluster] cluster-context
[-f|--force]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-n | --name] name * The name of the array management provider to unregister.
Optional arguments
[-c|--cluster] cluster-context The cluster associated with the Array Management Provider. Can be omitted when the command is executed from or below a cluster context, in which case the cluster is implied.
[-f | --force] Force the operation without confirmation. Allows the
command to be run from a non-interactive script.
[-h |--help] Displays the usage for this command.
[--verbose] Provides more output during command execution.
* - positional argument
Description
An Array Management Provider (AMP) is an external array management endpoint that VPLEX
communicates with to execute operations on individual arrays. Examples of AMPs include external
SMI-S providers.
The amp unregister command unregisters an array provider. After it is unregistered, the AMP is no
longer available for any operations. Re-registering the provider results in the definition of a new
provider.
Examples
Unregistering an array provider:
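A hypothetical session (the provider name is illustrative only):

```
VPlexcli:/clusters/cluster-1/storage-elements/array-providers> amp unregister --name SMIS-1 --force
```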
See also
l amp register
array claim
Claims and names unclaimed storage volumes for a given array.
Contexts
All contexts.
Syntax
array claim
[-s|--storage-array] context-path
[-m|--mapping-file] mapping file
[-t|--tier]
[-l|--claim]
[--force]
Arguments
Required arguments
[-s|--storage-array] context-path * Context path of the storage-array on which to claim storage volumes.
Optional arguments
[-m|--mapping-file] mapping-file Location of the name mapping file.
[-t|--tier] Add a tier identifier to the storage volumes to be claimed.
[-l|--claim] Try to claim unclaimed storage-volumes.
[--force] Force the operation without confirmation. Allows the command to be run from a non-interactive script.
* - argument is positional.
Description
Claims and names unclaimed storage volumes for a given array.
Some storage arrays support auto-naming (Dell EMC Symmetrix/VMAX, CLARiiON/VNX,
XtremIO, Hitachi AMS 1000, HDS 9970/9980, and USP VM) and do not require a mapping file.
Other storage arrays require a hints file generated by the storage administrator using the array’s
command line. The hints file contains the device names and their World Wide Names.
Use the --mapping-file argument to specify a hints file to use for naming claimed storage
volumes. File names will be used to determine the array name.
Use the --tier argument to add a storage tier identifier in the storage-volume names.
This command can fail if there are not enough meta-volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to this
problem.
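For example, a sketch of claiming with a mapping file (the array name and hints-file name are illustrative only):

```
VPlexcli:/> array claim --storage-array /clusters/cluster-1/storage-elements/storage-arrays/Array-001 --mapping-file /var/log/VPlex/cli/array-001-hints.txt --claim
```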
See also
l storage-volume find-array
array forget
Removes a storage-array that is being retired from VPLEX.
Contexts
All contexts.
Syntax
array forget
[-h|--help]
[--verbose]
[-r|--retire-logical-units]
[-a|--array] array
Arguments
Required arguments
[-a|--array] array * Specifies the context path of the storage-array to forget.
Optional arguments
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This might not have any effect for some commands.
[-r|--retire-logical-units] Retires all logical units before retiring the array. If not specified, the command fails if there are still logical units from the array in the logical-units context on VPLEX.
* - argument is positional.
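A sketch of forgetting an array, reusing the CLARiiON array name from the array used-by example later in this chapter:

```
VPlexcli:/> array forget --array /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00050404263 --retire-logical-units
```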
array re-discover
Re-discovers an array and makes the array's storage volumes visible to VPLEX.
Contexts
Cluster-specific context and lower.
Syntax
array re-discover
[-a|--array] context-path
[-c|--cluster] cluster-id
[-d|--hard]
[-f|--force]
Arguments
Required arguments
[-a|--array] context-path * Context path that specifies the storage-array to re-discover.
[-c|--cluster] cluster-id Cluster ID of the target cluster.
Optional arguments
[-d|--hard] l Perform a hard rediscover. This is a disruptive operation because ITLs are destroyed and full discoveries are executed. I/O temporarily stops until the array responds with data for each LUN. Discovery time correlates to array response time, the number of provisioned volumes, and the number of paths per volume. Large numbers of volumes result in longer discovery times.
l VPLEX automatically verifies the volume ID (VPD ID) on existing provisioned volumes to detect whether the array's device/LUN mapping has changed.
l LUN swapping: Logical-unit swapping occurs when the array's back-end device/LUN mapping has changed. This can be detected by comparing the system's saved copy of the volume's ID (VPD_ID) with the value returned by INQ VPD83 to its LUN.
l For example: A LUN is removed from a storage group on an array and then re-added. The LUN may now be mapped to a different device which reports a different VPD_ID value. Data corruption could occur if writes are sent to the old VPD_ID value.
l If logical-unit swapping has occurred, use the --hard option to force a fresh discovery of all ITLs on the array.
Note: Using the --hard option is disruptive and can result in data unavailability and/or data loss on live exported paths.
[-f|--force] Force the operation without confirmation. Allows the command to be run
from a non-interactive script.
* - argument is positional.
Description
Manually synchronizes the export state of the target device. Used in two scenarios:
l When the exported LUNs from the target array to VPLEX are modified.
Newer protocol-compliant SCSI devices return a notification code when the exported set
changes, and may not require manual synchronization. Older devices that do not return a
notification must be manually synchronized.
l When the array is not experiencing I/O (the transport interface is idle), there is no mechanism
by which to collect the notification code. In this scenario, do one of the following:
n Wait until I/O is attempted on any of the LUNs,
n Disruptively disconnect and reconnect the array, or
n Use the array re-discover command.
CAUTION This command cannot detect LUN-swapping conditions on the arrays being
re-discovered. On older configurations, this might disrupt I/O on more than the given
array.
Use the ll /clusters/*/storage-elements/storage-arrays/ command to display the
names of storage arrays.
Examples
In the following example:
l The ll /clusters/*/storage-elements/storage-arrays/ command displays the
names of storage arrays.
l The array re-discover command re-discovers a specified array:
VPlexcli:/> cd /clusters/cluster-1
VPlexcli:/clusters/cluster-1> array re-discover storage-elements/storage-
arrays/EMC-0x00000000192601378 --force
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/
EMC-0x00000000192601378> array re-discover --force
See also
l storage-volume find-array
array used-by
Displays the components that use a specified storage-array.
Contexts
All contexts.
Syntax
array used-by
[-a|--array]context-path
Arguments
[-a|--array] context-path * Specifies the storage-array for which to find users. This argument is not required if the context is the target array.
* - argument is positional.
Description
Displays the components (storage-volumes) that use the specified storage array.
Examples
Display the usage of components in an array from the target storage array context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-
APM00050404263> array used-by
Used-by details for storage-array EMC-CLARiiON-APM00050404263:
/clusters/cluster-1/storage-elements/extents/
extent_6006016061211100363da903017ae011_1:
SV1
/clusters/cluster-1/devices/dev_clus1:
extent_SV1_1
SV1
/clusters/cluster-1/system-volumes/log1_vol:
extent_SV1_2
SV1
/clusters/cluster-1/devices/clus1_device1:
extent_SV1_3
SV1
/clusters/cluster-1/devices/clus1_dev2:
extent_SV1_4
SV1
/clusters/cluster-1/devices/device_6006016061211100d42febba1bade011_1:
extent_6006016061211100d42febba1bade011_1
VPD83T3:6006016061211100d42febba1bade011
/distributed-storage/distributed-devices/dev1_source:
dev1_source2012Feb16_191413
extent_sv1_1
sv1
/clusters/cluster-1/system-volumes/MetaVol:
VPD83T3:6006016022131300de76a5cec256df11
/clusters/cluster-1/system-volumes/MetaVol:
VPD83T3:600601606121110014da56b3b277e011
/clusters/cluster-1/system-volumes/MetaVol_backup_2012Feb13_071901:
VPD83T3:6006016061211100c4a223611bade011
Summary:
Count of storage-volumes that are not in use: 0
See also
l storage-volume find-array
l storage-volume summary
Arguments
Note: The -m option is no longer supported. Use the -g and -u options for managing access to
the management server by groups and users.
Required arguments
[-d|--directory-server] [1|2] Specifies the directory server to configure on the cluster to authenticate users.
l 1 - Configures the directory service to map attributes for an OpenLDAP directory with POSIX attributes.
l 2 - Configures the directory service to map attributes for Active Directory.
Note: If option 2 (Active Directory) is selected, use the --custom-attributes argument to map attributes if the directory server UNIX attributes are different from the default attributes mapped by VPLEX.
dc=org,dc=company,dc=com
cn=Manager,dc=my-domain,dc=com
Description
This command configures an authentication service on the VPLEX cluster.
VPLEX supports two types of authentication service providers to authenticate users: OpenLDAP
and Active Directory servers.
When VPLEX is configured with OpenLDAP, it uses POSIX account attribute mapping by default.
When VPLEX is configured with Active Directory server, it uses SFU 3.0 attribute mapping by
default.
If the directory server UNIX attributes are different, use the --custom-attributes and
--directory-server arguments.
Best practice is to add groups rather than users. Adding groups allows multiple users to be added
using one map-principal. VPLEX is abstracted from any changes (modify/delete) to the user.
In order to authenticate directory service users, the directory service must be configured on
VPLEX. Configuration includes:
l The type of directory server (OpenLDAP or Active Directory).
n OpenLDAP by default maps POSIX attributes.
n Active Directory by default maps SFU 3.0 attributes.
l The directory server’s IP address
l Whether the LDAP or LDAPs protocol is used
l Base Distinguished Name, for example:
dc=security,dc=orgName,dc=companyName,dc=com
cn=Administrator,dc=security,dc=orgName,dc=companyName,dc=com
ou=people,dc=security,dc=orgName,dc=companyName,dc=com
Examples
Configure the Active Directory directory service on the VPLEX cluster:
Note: To define a different posixGroup attribute use custom attributes. Use an appropriate
objectClass attribute for posixGroup (e.g., “posixGroup” or “groupOfNames” or “Group”) as
used by the OpenLDAP server.
See also
l authentication directory-service map
l authentication directory-service show
l authentication directory-service unconfigure
Arguments
Optional arguments
[-m|--map-principal] "map-principal" Map a directory server user or user group to the cluster. A map-principal is a sequence of relative distinguished names connected by commas. For example:
OU=eng,dc=vplex,dc=security,dc=lab,dc=emc,dc=com
Note: Option -m can only be used with the older configuration. Use the -g and -u options for adding user and group access to the management server.
[-g|--map-group] "group-principal" Users in the group matching this search string are authenticated. Members of the group should be part of the user-search-path specified during the configuration. The group-principal must be enclosed in quotes.
--dry-run Run the command without making any changes.
Description
A directory server user is an account that can log in to the VPLEX Management Server. Users can
be specified explicitly or implicitly via groups.
Best practice is to add groups rather than users. Adding groups allows multiple users to be added
using one map-principal. VPLEX is abstracted from any changes (modify/delete) to the user.
Examples
Map an LDAP user to a VPLEX cluster:
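A hypothetical mapping of a user principal (the Distinguished Name is illustrative only):

```
VPlexcli:/> authentication directory-service map -u "cn=user1,ou=people,dc=security,dc=orgName,dc=companyName,dc=com"
```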
See also
l authentication directory-service configure
l authentication directory-service show
l authentication directory-service unmap
Description
The fields shown in the output of the authentication directory-service show command
are described in the following table.
Field Description
Examples
Display the authentication service configuration:
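The invocation itself (output omitted):

```
VPlexcli:/> authentication directory-service show
```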
See also
l authentication directory-service configure
l authentication directory-service map
Optional arguments
[-f|--force] Force the unconfigure without asking for confirmation.
Description
Removes the existing directory service configuration from the VPLEX management server.
Examples
Remove the existing directory service configuration:
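A sketch of the invocation, forcing the removal without confirmation:

```
VPlexcli:/> authentication directory-service unconfigure --force
```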
See also
l authentication directory-service configure
l authentication directory-service show
l authentication directory-service unmap
Optional arguments
[-m|--map-principal] "mapped-principal" Mapped directory server Distinguished Name to unmap. For example:
ou=eng,dc=vplex,dc=security,dc=lab,dc=emc,dc=com
l The map-principal must be enclosed in quotes.
l To include a backslash (\) in the map-principal, precede the backslash with a second backslash.
[-u|--map-user] "user-principal" Users matching this search string will be authenticated. The user-principal must be enclosed in quotes.
[-g|--map-group] "group-principal" Users in the group matching this search string are authenticated. Members of the group should be part of the user-search-path specified during the configuration. The group-principal must be enclosed in quotes.
Description
This command unmaps a directory server user or a user group from the VPLEX cluster. There must
be at least one principal mapped. If there is only one principal, it cannot be unmapped. If a user
search path is specified, unmapping all users and group principals will provide access to all users in
the user search path.
CAUTION VPLEX does not currently support unmapping LDAP users from LDAP servers if the
users have been removed from the servers. Ensure that the LDAP user is unmapped from
VPLEX before removing the user from the LDAP server.
Examples
Unmap a directory server user group from a VPLEX cluster:
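A hypothetical unmap of a group principal (the Distinguished Name is illustrative only):

```
VPlexcli:/> authentication directory-service unmap -g "cn=vplexUserGroup,dc=security,dc=orgName,dc=companyName,dc=com"
```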
See also
l authentication directory-service configure
l authentication directory-service map
l authentication directory-service show
Optional arguments
[-h|--help] Display the usage for this command.
[--verbose] Provides more output during command execution.
[-g|--group-by] <group_by> Group degraded I-Ts by the specified field. Supported fields: array, director.
Description
Lists I-Ts that have degraded performance, and I-Ts that have been isolated manually or isolated
due to unstable performance.
Examples
List all degraded I-Ts grouped by director.
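A sketch of the invocation (output omitted):

```
VPlexcli:/> back-end degraded list --group-by director
```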
Degraded I-Ts:
Degraded I-Ts:
See also
back-end degraded recover
Arguments
Optional arguments
[-h|--help] Display the usage for this command.
[--verbose] Provides more output during command execution.
[-p|--paths= The degraded I-Ts to recover. Each I-T must be expressed as a pair
<paths>] in the form "(<initiator>,<target>)".
[--all] Recover all currently degraded I-Ts.
Description
Assert that the specified I-Ts are healthy and move them out of their degraded state.
Examples
Recover a specific degraded I-T.
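A hypothetical invocation, using the "(<initiator>,<target>)" pair format described above (the WWN values are illustrative only):

```
VPlexcli:/> back-end degraded recover --paths "(0x5000144260037300,0x50000972081aa159)"
```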
Recovered I-Ts:
Recovered I-Ts:
See also
back-end degraded list
batch-migrate cancel
Cancels an active migration and returns the source volumes to their state before the migration.
Contexts
All contexts.
Syntax
batch-migrate cancel
[-f|--file] pathname
Arguments
Required arguments
[-f|--file] pathname Directory and filename of the migration plan file. Relative paths can be used. If no directory is specified, the default directory is /var/log/VPlex/cli on the management server.
Description
Attempts to cancel every migration in the specified batch file. If the command encounters an error,
the command prints a warning to the console and continues until every migration listed in the file
has been processed.
Note: In order to re-run a canceled migration plan, first run the batch-migrate remove
command to remove the records of the migration.
Examples
The following shows an example of the batch-migrate cancel command used to cancel every
migration in the migrate.txt file.
VPlexcli:/data-migrations/device-migrations>
batch-migrate cancel --file migrate.txt
See also
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
l batch-migrate resume
l batch-migrate start
l batch-migrate summary
batch-migrate check-plan
Checks a batch migration plan.
Contexts
All contexts.
Syntax
batch-migrate check-plan
[-f|--file] pathname
Arguments
Required arguments
[-f|--file] pathname Directory and filename of the migration plan file. Relative paths can be used. If no directory is specified, the default directory is /var/log/VPlex/cli on the management server.
Description
Checks the following conditions:
l Block-size of source and target extents is equal (4 K bytes)
l Capacity of target extent is equal to, or larger than the source extent's capacity
l Device migrations:
n Target device has no volumes on it
n Source device has volumes on it
l Extent migrations:
n Target extent is claimed and ready for use
n Source extent is in use
Check all migration plans before beginning execution.
Examples
In the following example, a migration plan fails the check.
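The invocation itself can be sketched as follows (output omitted; the plan file name is illustrative):

```
VPlexcli:/> batch-migrate check-plan --file migrate.txt
```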
See also
l batch-migrate cancel
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
l batch-migrate resume
l batch-migrate start
l batch-migrate summary
batch-migrate clean
Cleans the specified batch migration and deletes the source devices.
Contexts
All contexts.
Syntax
batch-migrate clean
[-f|--file] pathname
[-e|--rename-targets]
Arguments
Required arguments
[-f|--file] pathname *Directory and filename of migration plan file. relative paths can be
used. If no directory is specified, the default directory
is /var/log/VPlex/cli on the management server.
Optional arguments
[-e|--rename-targets] rename the target devices and virtual volumes to the source device
names.
* argument is positional.
Description
Dismantles the source device down to its storage volumes and unclaims the storage volumes.
l For device migrations, cleaning dismantles the source device down to its storage volumes. The
storage volumes no longer in use are unclaimed.
For device migrations only, use the optional --rename-targets argument to rename the
target device after the source device. If the target device is renamed, the virtual volume on top
of it is also renamed if the virtual volume has a system-assigned default name.
Without renaming, the target devices retain their target names, which can make the
relationship between volumes and devices less evident.
l For extent migrations, cleaning destroys the source extent and unclaims the underlying
storage-volume if there are no extents on it.
CAUTION This command must be run before the batch-migration has been removed. The
command will not clean migrations that have no record in the CLI context tree.
Example
In the following example, source devices are torn down to their storage volumes and the target
devices and volumes are renamed after the source device names:
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
batch-migrate commit
Commits the specified batch migration.
Contexts
All contexts.
Syntax
batch-migrate commit
[-f|--file] pathname
Arguments
Required argument
[-f|--file] pathname * Directory and filename of the migration plan file. Relative paths can be used. If no directory is specified, the default directory is /var/log/VPlex/cli on the management server.
Description
Attempts to commit every migration in the batch. Migrations in the batch cannot be committed
until all the migrations are complete.
If the command encounters an error, the command displays a warning and continues until every
migration has been processed.
The batch migration process inserts a temporary RAID 1 structure above the source devices/
extents with the target devices/extents as an out-of-date leg of the RAID. Migration can be
understood as the synchronization of the out-of-date leg (the target).
After the migration is complete, the commit step detaches the source leg of the temporary RAID
and removes the RAID.
The virtual volume, device, or extent is identical to the one before the migration except that the
source device/extent is replaced with the target device/extent.
In order to clean a migration job, you must first commit the job.
Use the batch-migrate summary command to verify that the migration has completed with no
errors before committing the migration.
Examples
This example commits a list of batch migrations specified in BSO_19.
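A sketch of the invocation (output omitted):

```
VPlexcli:/> batch-migrate commit --file BSO_19
```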
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate create-plan
l batch-migrate remove
batch-migrate create-plan
Creates a batch migration plan file.
Contexts
All contexts.
Syntax
batch-migrate create-plan
[-f|--sources] local-devices
[-t|--targets] local-devices
[--file] pathname
[--force]
Arguments
Required arguments
[-f|--sources] local- * List of local-devices to migrate virtual volumes from. May contain
devices wildcards.
[-t|--targets] local- * List of local-devices to migrate the source virtual volumes to.
devices May contain wildcards.
--file pathname * Directory and filename of migration plan file. Relative paths can
be used. If no directory is specified, the default directory
is /var/log/VPlex/cli on the management server.
Optional arguments
--force Forces an existing plan file with the same name to be overwritten.
* - argument is positional.
Description
The following rules apply to the batch-migrate create-plan command:
l The source and target extents must be typed as a comma-separated list, where each element
is allowed to contain wildcards.
l If this is an extent migration, the source and target cluster must be the same.
l If this is a device migration, the source and target clusters can be different.
l The source and target can be either local-devices or extents. Mixed migrations from local-
device to extent and vice versa are not allowed.
l The command attempts to create a valid migration plan from the source devices/extents to
the target devices/extents.
If there are source devices/extents that cannot be included in the plan, the command prints a
warning to the console, but still creates the plan.
l Review the plan and make any necessary changes before starting the batch migration.
Examples
Example: perform a batch migration
1. Create a migration plan.
Use the batch-migrate create-plan command to create a plan to migrate the volumes
on all the devices at cluster-1 to the storage at cluster-2:
2. Use the batch-migrate check-plan command to check the plan.
If problems are found, correct the errors and re-run the command until the plan-check passes.
3. Use the batch-migrate start command to start the migration:
4. Use the batch-migrate summary command to monitor the progress of the migration.
5. When all the migrations are complete, use the batch-migrate commit command to commit
the migration:
6. Use the batch-migrate clean command to clean the migration.
This dismantles the source devices down to their storage volumes and renames the target
devices and volumes using the source device names.
7. Use the batch-migrate remove command to remove the record of the migration:
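The full sequence can be sketched as follows (the plan file name and device patterns are illustrative only):

```
VPlexcli:/> batch-migrate create-plan --sources /clusters/cluster-1/devices/* --targets /clusters/cluster-2/devices/* --file migrate.txt
VPlexcli:/> batch-migrate check-plan --file migrate.txt
VPlexcli:/> batch-migrate start --file migrate.txt
VPlexcli:/> batch-migrate summary --file migrate.txt
VPlexcli:/> batch-migrate commit --file migrate.txt
VPlexcli:/> batch-migrate clean --file migrate.txt --rename-targets
VPlexcli:/> batch-migrate remove --file migrate.txt
```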
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate pause
l batch-migrate remove
l batch-migrate resume
l batch-migrate start
l batch-migrate summary
batch-migrate pause
Pauses the specified batch migration.
Contexts
All contexts.
Syntax
batch-migrate pause
[--file] pathname
Arguments
Required arguments
--file pathname Directory and filename of migration plan file. Relative paths can be used. If
no directory is specified, the default directory is /var/log/VPlex/cli
on the management server.
Description
Pauses every migration in the batch. If the command encounters an error, the command prints a
warning and continues until every migration has been processed.
You can pause active migrations (a migration that has been started) and resume that migration at
a later time.
l Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
Use the batch-migrate pause --file pathname command to pause the specified active
migration.
l Resume the migration during periods of low I/O.
Use the batch-migrate resume --file pathname command to resume the specified
paused migration.
Examples
The following example pauses all of the migrations listed in BSO_19.
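The invocation, sketched (output omitted):

```
VPlexcli:/> batch-migrate pause --file BSO_19
```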
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate remove
l batch-migrate resume
l batch-migrate start
l batch-migrate summary
batch-migrate remove
Removes the record of the completed batch migration.
Contexts
All contexts.
Syntax
batch-migrate remove
[--file] pathname
Arguments
Required arguments
--file pathname Directory and filename of migration plan file. Relative paths can be used. If
no directory is specified, the default directory is /var/log/VPlex/cli
on the management server.
Description
Remove the migration record only if the migration has been committed or canceled.
Migration records are in the /data-migrations/device-migrations context.
Examples
Remove a group of migration jobs.
or:
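One form, sketched (the plan file name is illustrative only):

```
VPlexcli:/> batch-migrate remove --file BSO_19
```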
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate resume
l batch-migrate start
l batch-migrate summary
batch-migrate resume
Attempts to resume every migration in the specified batch.
Contexts
All contexts.
Syntax
batch-migrate resume
[--file] pathname
Arguments
Required arguments
--file pathname Directory and filename of migration plan file. Relative paths can be used. If
no directory is specified, the default directory is /var/log/VPlex/cli
on the management server.
Description
Resumes the given batch migration.
If an error is encountered, a warning is printed to the console and the command continues until
every migration has been processed.
Examples
Resume all of the migrations specified in the file BSO_19.
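The invocation, sketched (output omitted):

```
VPlexcli:/> batch-migrate resume --file BSO_19
```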
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
l batch-migrate start
l batch-migrate summary
batch-migrate start
Starts the specified batch migration.
Contexts
All contexts.
Syntax
batch-migrate start
[--file] pathname
[-s|--transfer-size] 40K - 128M
[--force]
[--paused]
Arguments
Required arguments
--file pathname * Directory and filename of migration plan file. Relative paths can be used.
If no directory is specified, the default directory is /var/log/
VPlex/cli on the management server.
Optional arguments
[-s|--transfer-size] size Maximum number of bytes to transfer as one operation per device. Specifies the size of the read sector designated for transfer in cache. Setting the transfer size to a lower value implies more host I/O outside the transfer boundaries. Setting the transfer size to a higher value may result in faster transfers. See About transfer-size below. Valid values must be a multiple of 4 K.
l Range: 40 K - 128 M.
l Default: 128 K.
--force Do not ask for confirmation when starting individual migrations. Allows this
command to be run using a non-interactive script.
--paused Starts the migration in a paused state. The migration remains paused until
restarted using the batch-migrate resume command.
* - argument is positional.
Description
Starts a migration for every source/target pair in the given migration-plan.
CAUTION Inter-cluster migration of volumes is not supported on volumes that are in use.
Schedule this activity as a maintenance activity to avoid Data Unavailability.
Consider scheduling this activity during maintenance windows of low workload to reduce
impact on applications and possibility of a disruption.
If a migration fails to start, the command prints a warning to the console. The command continues
until every migration in the plan has been processed.
Individual migrations may ask for confirmation when they start. Use the --force argument to
suppress these requests for confirmation.
Batch migrations across clusters can result in the following error:
Refer to the troubleshooting section of the VPLEX procedures in the SolVe Desktop for
instructions on increasing the number of slots.
About transfer-size
Transfer-size is the size of the region in cache used to service the migration. The area is globally
locked, read at the source, and written at the target.
Transfer-size can be as small as 40 K, as large as 128 M, and must be a multiple of 4 K. The default
recommended value is 128 K.
A larger transfer-size results in higher performance for the migration, but may negatively impact
front-end I/O. This is especially true for VPLEX Metro migrations.
A smaller transfer-size results in lower performance for the migration, but creates less impact on
front-end I/O and response times for hosts.
Set a large transfer-size for migrations when the priority is data protection or migration
performance. Set a smaller transfer-size for migrations when the priority is front-end storage
response time.
Factors to consider when specifying the transfer-size:
l For VPLEX Metro configurations with narrow inter-cluster bandwidth, set the transfer size
lower so the migration does not impact inter-cluster I/O.
l The region specified by transfer-size is locked during migration. Host I/O to or from that
region is held. Set a smaller transfer-size during periods of high host I/O.
l When a region of data is transferred, a broadcast is sent to the system. Smaller transfer-sizes
mean more broadcasts, slowing the migration.
Examples
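A sketch of starting a batch migration with a non-default transfer-size in a paused state (the plan file name and size are illustrative only):

```
VPlexcli:/> batch-migrate start --file migrate.txt --transfer-size 2M --paused
```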
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
l batch-migrate resume
l batch-migrate summary
l dm migration start
batch-migrate summary
Displays a summary of the batch migration.
Contexts
All contexts.
Syntax
batch-migrate summary
[--file] pathname
[-v|--verbose]
Arguments
Required arguments
--file pathname Directory and filename of migration plan file. Relative paths can be used. If
no directory is specified, the default directory is /var/log/VPlex/cli
on the management server.
Optional arguments
[-v|--verbose] In addition to the specified migration, displays a summary for any in-progress and paused migrations.
Description
Displays a summary of the batch migration.
If the --verbose option is used, displays the migrations in the batch that are in an error state.
Field Description
Note: If more than 25 migrations are active at the same time, they are queued, their status is
displayed as in-progress, and percentage-complete is displayed as ?.
Examples
Display a batch migration:
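The invocation, sketched (output omitted):

```
VPlexcli:/> batch-migrate summary --file BSO_19 --verbose
```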
See also
l batch-migrate cancel
l batch-migrate check-plan
l batch-migrate clean
l batch-migrate commit
l batch-migrate create-plan
l batch-migrate pause
l batch-migrate remove
l batch-migrate resume
l batch-migrate start
battery-conditioning disable
Disables battery conditioning on the specified backup battery units.
Contexts
All contexts.
Syntax
battery-conditioning disable
[-s|--sps-unit]context-path,context path,...
[-c|--all-at-cluster]cluster
[-t|--bbu-type]bbu-type
[-f|--force]
Arguments
Optional arguments
[-s|--sps-unit] context-path,context-path,... * Standby power supply (SPS) units on which to disable battery conditioning. If this argument is used:
l Do not specify the --all-at-cluster argument.
l The command ignores the --bbu-type argument.
[-t|--bbu-type] bbu-type Type of battery unit on which to disable conditioning. For the current release, only standby-power-supply (SPS) units are supported. The command ignores this argument if you enter the --sps-unit argument.
[-f|--force] Skips the user confirmation that appears when the battery unit on which conditioning is being disabled is currently undergoing conditioning. Allows the command to execute from a non-interactive script.
* - argument is positional.
Description
Automatic battery conditioning of every SPS is enabled by default. Use this command to disable
battery conditioning for all SPS units in a cluster, or a specified SPS unit.
Disabling conditioning on a unit that has a cycle in progress causes that cycle to abort, and user
confirmation is required. Use the --force argument to skip the user confirmation.
Automatic battery conditioning must be disabled during:
l Scheduled maintenance
l System upgrades
l Unexpected and expected power outages
CAUTION For all procedures that require fully operational SPS, ensure that SPS
conditioning is disabled at least 6 hours in advance of the procedure. This prevents the
SPS from undergoing battery conditioning during the procedure.
Examples
Disable battery conditioning for a specified SPS unit and display the change:
Disable battery conditioning on a specified SPS that is currently undergoing battery conditioning:
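Illustrative invocations; the SPS context path shown is hypothetical, and the second form uses --force to skip the confirmation when the unit is already in a conditioning cycle:

```
VPlexcli:/> battery-conditioning disable --sps-unit /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
VPlexcli:/> battery-conditioning disable --sps-unit /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a --force
```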
See also
l battery-conditioning enable
battery-conditioning enable
Enables conditioning on the specified backup battery units.
Contexts
All contexts.
Syntax
battery-conditioning enable
[-s|--sps-unit] context-path,context path,...
[-c|--all-at-cluster] cluster
[-t|--bbu-type] bbu-type
Arguments
Optional arguments
[-s|--sps-unit] context-path,context-path,... * Standby power supply (SPS) units on which to enable battery conditioning. If this argument is used:
l Do not specify the --all-at-cluster argument.
l The command ignores the --bbu-type argument.
[-t|--bbu-type] bbu-type Type of battery unit on which to enable conditioning. For the current release, only standby-power-supply (SPS) units are supported. The command ignores this argument if you specify the --sps-unit argument.
* - argument is positional.
Description
Use this command to enable battery conditioning for all standby power supply (SPS) units in a
cluster, or for a specific SPS unit.
SPS battery conditioning assures that the battery in an engine’s standby power supply can provide
the power required to support a cache vault. A conditioning cycle consists of a 5 minute period of
on-battery operation and a 6 hour period for the battery to recharge. Automatic conditioning runs
every 4 weeks, one standby power supply at a time.
Automatic battery conditioning of every SPS is enabled by default.
Automatic battery conditioning must be disabled during:
l Scheduled maintenance
l System upgrades
l Unexpected and expected power outages
Use this command to re-enable battery conditioning after activities that require battery
conditioning to be disabled are completed.
Examples
Enable battery conditioning on a specified SPS and display the change:
Enable battery conditioning on all SPS units in cluster-1 and display the change:
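Sketches of both forms, using only the arguments documented above (the SPS context path is hypothetical):

```
VPlexcli:/> battery-conditioning enable --sps-unit /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
VPlexcli:/> battery-conditioning enable --all-at-cluster cluster-1 --bbu-type sps
```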
See also
l battery-conditioning disable
l battery-conditioning manual-cycle cancel-request
l battery-conditioning manual-cycle request
l battery-conditioning set-schedule
l battery-conditioning summary
battery-conditioning manual-cycle cancel-request
Arguments
Required arguments
[-s|--sps-unit] context-path Standby power supply (SPS) unit on which to cancel a previously requested battery conditioning cycle. The full context path is required when this command is run from the engines context or higher.
Description
Cancels a manually requested conditioning cycle on the specified backup battery unit.
Automatic battery conditioning cycles run on every SPS every 4 weeks.
Manually requested battery conditioning cycles are in addition to the automatic cycles.
Use this command to cancel a manual battery conditioning cycle.
Note: This command does not abort a battery conditioning cycle that is underway. It cancels
only a request for a manual cycle.
Examples
Cancel a manually-scheduled SPS battery conditioning from the root context, and display the
change:
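An illustrative invocation from the root context (the SPS context path is hypothetical):

```
VPlexcli:/> battery-conditioning manual-cycle cancel-request --sps-unit /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
```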
See also
l battery-conditioning disable
l battery-conditioning enable
l battery-conditioning manual-cycle request
l battery-conditioning set-schedule
l battery-conditioning summary
battery-conditioning manual-cycle request
Arguments
Required arguments
[-s|--sps-unit] context-path * Standby power supply (SPS) unit on which to request the battery conditioning cycle. The full context path is required when this command is run from the engines context or higher.
Optional arguments
[-f|--force] Forces the requested battery conditioning cycle to be scheduled
without confirmation if the unit is currently in a conditioning cycle.
Allows this command to be run from non-interactive scripts.
* - argument is positional.
Description
Requests a conditioning cycle on the specified backup battery unit, and displays the time the cycle
is scheduled to start. The requested battery conditioning cycle is scheduled at the soonest
available time slot for the specified unit.
If the specified unit is currently undergoing a conditioning cycle, this command requests an
additional cycle to run at the next available time slot.
If battery conditioning is disabled, the manually requested cycle does not run.
Use this command to manually schedule a battery conditioning cycle when automatic conditioning
has been disabled in order to perform maintenance, upgrade the system, or shut down power.
The conditioning cycle invoked by this command runs in the next 6-hour window available for the
selected unit.
Note: Scheduling a manual conditioning cycle while a conditioning cycle is already in progress
contributes to shortened battery life and is not recommended.
Examples
Schedule a manual SPS battery conditioning cycle from the root context and display the change:
Schedule a manual SPS battery conditioning from the engines/engine context when a conditioning
cycle is already underway:
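Illustrative invocations (the SPS context paths are hypothetical); the second form adds --force because a conditioning cycle is already underway:

```
VPlexcli:/> battery-conditioning manual-cycle request --sps-unit /engines/engine-1-1/stand-by-power-supplies/stand-by-power-supply-a
VPlexcli:/engines/engine-1-1> battery-conditioning manual-cycle request --sps-unit stand-by-power-supplies/stand-by-power-supply-a --force
```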
See also
l battery-conditioning disable
l battery-conditioning enable
l battery-conditioning manual-cycle cancel-request
l battery-conditioning set-schedule
l battery-conditioning summary
battery-conditioning set-schedule
Set the battery conditioning schedule (day of week) for backup battery units on a cluster.
Contexts
All contexts.
Syntax
battery-conditioning set-schedule
[-t|--bbu-type] bbu-type
[-d|--day-of-week] [sunday|monday|...|saturday]
[-c|--cluster] cluster
Arguments
Required arguments
[-t|--bbu-type] bbu-type * Type of battery backup unit to be conditioned.
Note: In the current release, the only bbu-type supported is sps.
[-d|--day-of-week] [sunday|monday|...|saturday] * Day of the week on which to run the battery conditioning. Valid values are: sunday, monday, tuesday, wednesday, thursday, friday, saturday.
[-c|--cluster] cluster * Cluster on which to set the battery conditioning schedule.
* - argument is positional.
Description
Sets the day of week when the battery conditioning cycle is started on all backup battery units
(BBU) on a cluster.
The time of day the conditioning cycle runs on an individual backup battery unit is scheduled by
VPLEX.
SPS battery conditioning assures that the battery in an engine’s standby power supply can provide
the power required to support a cache vault. A conditioning cycle consists of a 5 minute period of
on-battery operation and a 6 hour period for the battery to recharge. Automatic conditioning runs
every 4 weeks, one standby power supply at a time.
Automatic battery conditioning of every SPS is enabled by default.
Use this command to set the day of the week on which the battery conditioning cycle for each
SPS unit (one at a time) begins.
The following table shows battery conditioning fields:
Field Description
Examples
Set the start day of the battery conditioning cycle for all SPS units in cluster-1 to Saturday and
display the change:
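Using only the arguments documented above, the invocation would look like the following (the cluster may need to be given as a full context path such as /clusters/cluster-1):

```
VPlexcli:/> battery-conditioning set-schedule --bbu-type sps --day-of-week saturday --cluster cluster-1
```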
See also
l battery-conditioning disable
l battery-conditioning enable
l battery-conditioning manual-cycle cancel-request
l battery-conditioning manual-cycle request
l battery-conditioning summary
battery-conditioning summary
Displays a summary of the battery conditioning schedule for all devices, grouped by type and
cluster.
Contexts
All contexts.
Syntax
battery-conditioning summary
Description
Displays a summary of the conditioning schedule for all devices, grouped by type and cluster.
Field Description
Examples
Display battery conditioning schedule for all devices:
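The command takes no arguments:

```
VPlexcli:/> battery-conditioning summary
```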
See also
l battery-conditioning disable
l battery-conditioning enable
cache-invalidate
In the virtual-volume context, invalidates cached reads on all directors based on the virtual
volume's visibility to the clusters. In the consistency-group context, invalidates the cached reads
of all exported virtual volumes in the specified consistency group, on all directors based on each
virtual volume's visibility to the clusters.
Contexts
virtual-volume
consistency-group
Syntax
In virtual-volume context:
virtual-volume cache-invalidate
[-v|--virtual-volume] virtual-volume
[--force]
[--verbose]
In consistency-group context:
consistency-group cache-invalidate
[-g|--consistency-group] consistency-group context path
[--force]
[--verbose]
Arguments
Required arguments
[-v|--virtual-volume] virtual-volume * Invalidates the cache only for the specified virtual volume. Wildcard patterns (CLI glob patterns) are NOT allowed.
[-g|--consistency-group] consistency-group context path * Invalidates the cache for all exported virtual volumes in the consistency group. Wildcard patterns (CLI glob patterns) are NOT allowed.
Optional arguments
[--force] The --force option suppresses the warning message and
executes the cache-invalidate directly.
[--verbose] The --verbose option provides more detailed messages in
the command output when specified.
* - argument is positional.
Note: This command suspends I/O on the virtual volume while invalidation is in progress.
Use of this command causes data unavailability on the virtual volume for a short time, even in
a normal operating setup.
Description
In the virtual-volume context, invalidates cache reads of a virtual volume on all directors
based on its visibility to the clusters. In the consistency-group context, invalidates cache
reads of all exported virtual volumes in the specified consistency group, on all directors based on
the virtual volume’s visibility to the clusters.
All cached reads associated with the selected volume are invalidated on all directors of the VPLEX
cluster. Subsequent reads from the host application fetch data from the storage volume due to the
cache miss.
Cache can be invalidated only for exported virtual volumes. There are no cache reads on non-
exported volumes.
When executed for a given virtual volume, based on the visibility of the virtual volume, this
command invalidates the cache reads for that virtual volume on the directors at clusters that
export them to hosts.
Note: The CLI must be connected to all directors in order for the cache reads to be flushed
and invalidated. The command performs a pre-check to verify all directors are reachable.
When executed for a given consistency group, based on the visibility of each of the virtual volumes
in the consistency group, this command invalidates the cache reads for the virtual volumes that are
part of the consistency group on the directors at the clusters that export them to hosts.
The command performs pre-checks to verify that all expected directors in each cluster are
connected and in a healthy state before proceeding.
If I/O is in progress on the virtual volume, the command issues an error requesting that the user
first stop all host applications accessing the virtual volume.
The command issues a warning stating that the cache invalidate could cause a data unavailability
on the volume that it is issued on.
Any cached data at the host or host applications will not be cleared by this command.
Observe the following restrictions:
l Do not execute the cache-invalidate command on a RecoverPoint-enabled virtual volume.
l The VPLEX clusters must not be undergoing an NDU while this command is being executed.
l There must be no mirror rebuilds or migrations in progress on the virtual volume on which the
command is executed.
l There must be no volume expansion in progress on the virtual volume on which the command is
executed.
l The applications accessing data on the virtual volume must be stopped. The application or the
host operating system must not have any access to the VPLEX virtual volume.
There are four categories of cache-invalidate results tracked and reported back by the command:
l If the command completes successfully, the result output will state that the operation was
successful. The cache-invalidate-status command does not track or report any result
for this operation, as the command completed successfully.
l If the command cannot run because the volume is not exported from any director, an
informational message is displayed and the command output ends.
l If the command failed on a director. The result displays an error indicating that the command
failed on a specific director.
l If the command execution exceeds five minutes and there are no directors with a failed
response, a check is made on directors that have the operation running in background. The
result displays an error that shows the directors with the operation still running, and an
instruction to execute the cache-invalidate-status command to view progress.
Examples
Example output when cache invalidation for the virtual volume and consistency group is
successful.
Brief message:
Verbose message:
Example output when cache-invalidation failed for the virtual volume and for the consistency
group.
Brief message:
Verbose message:
Example output when any directors cannot be reached. The message includes an instruction to re-
issue the command.
Note: The cache-invalidate command must be completed for every director where the virtual
volume or consistency group is potentially registered in cache.
Virtual-volume:
Consistency group:
See also
l cache-invalidate-status
l virtual-volume destroy
l virtual-volume provision
cache-invalidate-status
Displays the current status of a cache-invalidate operation on a virtual volume, or for a specified
consistency group.
Contexts
virtual-volume
consistency-group
Syntax
In virtual-volume context:
cache-invalidate-status
[-v|--virtual-volume] virtual-volume
[-h|--help]
[--verbose]
In consistency-group context:
cache-invalidate-status
[-g|--consistency-group] consistency-group context path
[-h|--help]
[--verbose]
Arguments
Required arguments
[-v|--virtual-volume] virtual-volume * Specifies the virtual volume cache invalidation status to display.
[-g|--consistency-group] consistency-group context path * Specifies the consistency group cache invalidation status to display.
Optional arguments
[-h | --help] Displays the usage for this command.
[--verbose] Provides a detailed summary of results for all
directors when specified. If not specified, only the
default brief summary message is displayed.
* - argument is positional.
Description
This command is executed only when the virtual-volume cache-invalidate command or
the consistency-group cache-invalidate command has exceeded the timeout period of
five minutes. If this command is executed for a reason other than timeout, the command will return
“no result.”
The cache-invalidate status command displays the current status for operations that are
successful, have failed, or are still in progress.
If the --verbose option is not specified, this command generates a brief message summarizing
the current status of the cache invalidation for the virtual volume or consistency group, whichever
context has been specified. In the event of a cache-invalidation failure, the brief mode output
summarizes only the first director to fail; if one director fails, the entire operation is considered
to have failed.
If the --verbose option is specified, the cache-invalidate status is displayed for each of the
directors on which the cache invalidation is pending.
There are three fields of particular interest:
Field Description
This command is read-only. It does not make any changes to storage in VPLEX or to VPLEX in any
way; it only reports the current status of cache-invalidation operations.
Examples
Example output when the cache-invalidate operation finished with success.
Brief message:
Verbose message:
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
director-1-1-B status: in-progress
result: -
cause: -
See also
l cache-invalidate
capture begin
Begins a capture session.
Contexts
All contexts.
Syntax
capture begin
[-s|--session] session name
[-c|--capture-directory] capture-directory
Arguments
Required arguments
[-s|--session] session name * Name of capture session. Output files from the capture
session are named using this value.
[-c|--capture-directory] * Pathname for the capture directory. Default capture
directory directory: /var/log/VPlex/cli/capture
* - argument is positional.
Description
The capture session saves all of the stdin, stdout, stderr, and session I/O streams to four files:
l session name-session.txt - Output of commands issued during the capture session.
l session name-stdin.txt - CLI commands input during the capture session.
l session name-stdout.txt - Output of commands issued during the capture session.
l session name-stderr.txt - Status messages generated during the capture session.
Note: Raw tty escape sequences are not captured. Use the --capture shell option to
capture the entire session including the raw tty sequences.
Capture sessions can have nested capture sessions but only the capture session at the top of the
stack is active.
Use the capture end command to end the capture session.
Use the capture replay command to resubmit the captured input to the shell.
Examples
In the following example, the capture begin command starts a capture session named
TestCapture. Because no directory is specified, output files are placed in the /var/log/
VPlex/cli/capture directory on the management server.
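Based on the syntax above, the invocation would look like:

```
VPlexcli:/> capture begin --session TestCapture
```

Output files such as TestCapture-session.txt and TestCapture-stdin.txt are then written to /var/log/VPlex/cli/capture.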
See also
l capture end
l capture pause
l capture replay
l capture resume
capture end
Ends the current capture session and removes it from the session capture stack.
Contexts
All contexts.
Syntax
capture end
Description
The session at the top of the stack becomes the active capture session.
Examples
End a capture session.
See also
l capture begin
l capture pause
l capture replay
l capture resume
capture pause
Pauses the current capture session.
Contexts
All contexts.
Syntax
capture pause
Description
Pause/resume operates only on the current capture session.
Examples
Pause a capture session.
See also
l capture begin
l capture end
l capture replay
l capture resume
capture replay
Replays a previously captured session.
Contexts
All contexts.
Syntax
capture replay
[-s|--session] session name
[-c|--capture-directory] directory
Arguments
Required arguments
[-s|--session] session name * Name of existing capture session.
[-c|--capture-directory] directory * Directory where the existing captured session is located. Default directory: /var/log/VPlex/cli/capture/recapture
* - argument is positional.
Description
Replays the commands in the stdin.txt file from the specified capture session.
Output of the replayed capture session is written to the /var/log/VPlex/cli/capture/
recapture directory on the management server.
Output is the same four files created by capture begin.
Example
Replay a capture session.
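For example, to replay the TestCapture session created by capture begin (assuming its files are in the default directory):

```
VPlexcli:/> capture replay --session TestCapture
```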
See also
l capture begin
l capture end
l capture pause
l capture resume
capture resume
Resumes the current capture session.
Contexts
All contexts.
Syntax
capture resume
Description
Pause/resume operates only on the current capture session.
Examples
Resume the current capture session.
See also
l capture begin
l capture end
l capture pause
l capture replay
cd
Changes the working directory.
Contexts
All contexts.
Syntax
cd [context]
Arguments
Optional
arguments
context Change to the specified context. The context can be one of the following:
l context path - The full or relative pathname of the context.
l .. - the parent context of the context you are currently in.
l ... - the root context.
l -(dash) - The context you were in before changing to this context.
If you do not specify a context, the cd command changes to the root directory.
Description
Use the cd command with no arguments or followed by three periods (cd ... ) to return to the
root context.
Use the cd command followed by two periods (cd .. ) to return to the context immediately
above the current context.
Use the cd command followed by a dash (cd -) to return to the previous context.
To navigate directly to a context from any other context, use the cd command and specify the
context path.
Examples
Return to the root context:
VPlexcli:/engines/engine-1-1/fans> cd
VPlexcli:/>
VPlexcli:/monitoring/directors/director-1-1-B> cd ..
VPlexcli:/monitoring/directors>
VPlexcli:/engines/engine-2-1/fans> cd /engines/engine-1-1/fans/
chart create
Creates a chart based on a CSV file produced by the report command.
Contexts
All contexts.
Syntax
chart create
[--input] "input file"
[--output] "output file"
[--series] series column
[--range] series range
[--domain] domain column
[--width] chart width
[--height] chart height
[--aggregate] aggregate-series-name
Arguments
Required arguments
--input "input file" CSV file to read data from, enclosed in quotes.
--output "output file" PNG file to save chart to, enclosed in quotes.
--series series column The column in the CSV file to use as series.
--range series range The column in the CSV file to use as range.
--domain domain column The column in the CSV file to use as domain.
--width chart width The width of chart graphic.
l Range: 64-2048.
l Default: 500.
VPlexcli:/> exit
Connection closed by foreign host.
service@ManagementServer:~> cd /var/log/VPlex/cli/reports
service@ManagementServer:/var/log/VPlex/cli/reports> tail CapacityClusters.csv
Time, Cluster, Unclaimed disk capacity (GiB), Unclaimed storage_volumes, Claimed disk
capacity(GiB), Claimed storage_volumes, Used storage-volume capacity (GiB), Used
storage_volumes, Unexported volume capacity (GiB), Unexported volumes, Exported volume
capacity (GiB), Exported volumes
2010-06-21 15:59:39, cluster-1, 5705.13, 341, 7947.68, 492, 360.04, 15, 3.00, 3, 2201.47,
27
.
.
.
service@ManagementServer:~> vplexcli
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Enter User Name: service
Password:
creating logfile:/var/log/VPlex/cli/session.log_service_localhost_ T28921_20101020175912
VPlexcli:> chart create "CapacityClusters.csv" "CapacityClusters.png" 1 2 0 500 500
VPlexcli:/> exit
Connection closed by foreign host.
service@ManagementServer:~> cd /var/log/VPlex/cli/reports
service@ManagementServer:/var/log/VPlex/cli/reports>
total 48
.
.
.
-rw-r--r-- 1 service users 844 2010-07-19 15:55 CapacityClusters.csv
-rw-r--r-- 1 service users 18825 2010-07-19 15:56 CapacityClusters.png
.
.
.
See also
l report aggregate-monitors
l report capacity-arrays
l report capacity-clusters
l report capacity-hosts
l report create-monitors
cluster add
Adds a cluster to a running VPLEX.
Contexts
All contexts.
Syntax
cluster add
[-c|--cluster] context path
[-t|--to] cluster
[-f|--force]
Arguments
Required arguments
[-c|--cluster] context path * Cluster to add.
[-t|--to] cluster * Cluster to which the given cluster is added. This is only
necessary if the system cannot be automatically determined.
Optional arguments
[-f|--force] Forces the cluster addition to proceed even if conditions are
not optimal.
* - argument is positional.
Description
Before a cluster can communicate with the other cluster of a Metro, you must use the cluster add
command.
Use the --to argument:
l During system bring-up when no clusters have yet been told about other clusters. In this
scenario, any cluster can be used as the system representative.
l Multiple systems have been detected. Connection to multiple systems is not supported.
If there is only one system actually present, but it has split into islands due to connectivity
problems, it is highly advisable to repair the problems before proceeding. Add the given cluster
to each island separately.
If the intention is to merge two existing systems, break up one of the systems and add it to the
other system cluster-by-cluster.
Examples
In the following example:
l The cluster add command adds two clusters.
l The cluster summary command verifies that the two clusters have the same island ID:
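A sketch of the sequence described above (the cluster context path is illustrative):

```
VPlexcli:/> cluster add --cluster /clusters/cluster-2
VPlexcli:/> cluster summary
```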
See also
l cluster expel
l cluster status
l cluster summary
cluster cacheflush
Flushes the cache on directors at the specified clusters to the back-end storage volumes.
Contexts
All contexts.
In /clusters context, command is cacheflush.
Syntax
cluster cacheflush
[-e|--sequential]
[-c|--clusters] cluster,cluster
[-v|--volumes] volumes
--verbose
Arguments
Required arguments
[-c|--clusters] cluster,cluster... Flushes the cache for every exported virtual volume of every director at the specified clusters. May be entered as wildcard patterns.
[-v|--volumes] volumes Flushes the cache only for the specified list of virtual volumes.
Entries must be separated by commas. Wildcard patterns (CLI
glob patterns) are allowed.
Optional arguments
[-e|--sequential] Flushes the cache of multiple directors sequentially. Default is to
flush the caches in parallel.
--verbose Displays a progress report during the flush. Default is to display
no output if the run is successful.
Description
Note: The CLI must be connected to a director before the cache can be flushed. Only exported
virtual volumes can be flushed.
When executed from a specific cluster context, this command flushes the cache of the directors at
the current cluster.
Examples
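Illustrative invocations (the volume names are hypothetical):

```
VPlexcli:/> cluster cacheflush --clusters cluster-1 --verbose
VPlexcli:/> cluster cacheflush --clusters cluster-1 --volumes vol_1,vol_2
```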
See also
l cluster status
l cluster summary
cluster configdump
Dumps cluster configuration in an XML format, optionally directing it to a file.
Contexts
All contexts.
In /clusters context, command is configdump.
Syntax
cluster configdump
[-c|--cluster] cluster
[-d|--dtdOnly]
[-f|--file] filename
Arguments
Optional arguments
[-c|--cluster] cluster Dump configuration information for only the specified cluster.
[-d|--dtdOnly] Print only the Document Type Definitions (DTD) document.
[-f|--file] filename Direct the configdump output to the specified file. Default location
for the output file on the management server is: /var/log/
VPlex/cli.
Description
Dumped data includes:
l I/O port configurations
l Disk information, including paths from the directors to the storage volumes
l Device configuration and capacity
l Volume configuration
l Initiators
l View configuration
l System-volume information
The XML output includes the DTD to validate the content.
Examples
Dump cluster-1’s configuration to an .xml file:
Dump the configuration at cluster-1, navigate to the cli context on the management server, and
display the file:
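The two examples might look like the following (the output filename is illustrative):

```
VPlexcli:/> cluster configdump --cluster cluster-1 --file cluster1-config.xml
VPlexcli:/> exit
service@ManagementServer:~> cd /var/log/VPlex/cli
service@ManagementServer:/var/log/VPlex/cli> cat cluster1-config.xml
```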
See also
l collect-diagnostics
l director appcon
l getsysinfo
l sms dump
cluster expel
Expels a cluster from its current island.
Contexts
All contexts.
In /clusters context, command is expel.
Syntax
cluster expel
[-c|--cluster] cluster
[-f|--force]
Arguments
Required arguments
[-c|--cluster] cluster * The cluster to expel.
[-f|--force] Forces the cluster to be expelled.
* - argument is positional.
Description
Cluster expulsion prevents a cluster from participating in a VPLEX. Expel a cluster when:
l The cluster is experiencing undiagnosed problems.
l To prepare for a scheduled outage.
l The target cluster, or the WAN over which the rest of the system communicates, is going to be
inoperable for a while.
l An unstable inter-cluster link impacts performance.
An expelled cluster is still physically connected to the VPLEX, but not logically connected.
The --force argument is required for the command to complete.
Use the cluster unexpel command to allow the cluster to rejoin the island.
Examples
In the following example:
l The cluster expel command expels the cluster.
l The cluster summary and cluster status commands verify the change.
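A sketch of the sequence (the cluster name is illustrative); note that --force is required for the command to complete:

```
VPlexcli:/> cluster expel --cluster cluster-2 --force
VPlexcli:/> cluster summary
VPlexcli:/> cluster status
```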
See also
l cluster unexpel
cluster forget
Tells VPLEX and Unisphere for VPLEX to forget the specified cluster.
Contexts
All contexts.
In /clusters context, command is forget.
Syntax
cluster forget
[-c|--cluster] context path
[-d|--disconnect]
[-f|--force]
Arguments
Required arguments
[-c|--cluster] context path * Cluster to forget.
Optional arguments
[-d|--disconnect] Disconnect from all directors in the given cluster and remove
the cluster from the context tree after the operation is
complete.
[-f|--force] Force the operation to continue without confirmation.
* - argument is positional.
Description
Removes all references to the specified cluster from the context tree.
The prerequisites for forgetting a cluster are as follows:
l The target cluster cannot be in contact with other connected clusters.
l The Unisphere for VPLEX cannot be connected to the target cluster.
l Detach all distributed devices with legs at the target cluster (there must be no distributed
devices with legs on the target cluster).
l No rule sets that affect the target cluster.
l No globally visible devices at the target cluster.
Use the following steps to forget a cluster:
1. If connected, use the cluster forget command on the target cluster to forget the other
clusters.
2. Use the cluster forget command on all other clusters to forget the target cluster.
This command does not work if the clusters have lost communications with each other. If a cluster
is down, destroyed, or removed, use the cluster expel command to expel it.
Examples
See also
l cluster add
l cluster expel
l cluster status
l cluster unexpel
cluster restart-local-cluster
Initiates cluster restart and restores local cluster configuration on VPLEX VS2 Metro
configuration.
Context
All
Syntax
cluster restart-local-cluster
-h | --help
--verbose
-p | --pre-check
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
-p | --pre-check Performs cluster pre-restart validation.
Description
cluster restart-local-cluster supports cluster restart on VS2 Metro configuration.
Follow the manual cluster shutdown and restart procedure available in the SolVe Desktop for
cluster restart scenarios.
Run this command on the same cluster that was shut down using cluster stop-local-cluster.
This command is cluster specific. Run this command on the management server of the cluster that
was shut down. For example, if you shut down cluster-1, run the cluster restart-local-
cluster command on the cluster-1 management server.
The management server and cluster-witness IP addresses should remain the same before
restarting the cluster.
cluster show-remote-devices
Displays the list of remote devices for the specified cluster.
Contexts
All contexts.
Syntax
cluster show-remote-devices [options] cluster
Description
The command displays the list of remote devices for the specified cluster. The top-level volumes
and the list of views at which the devices are exported are also listed. Use the --verbose option to
see the complete list.
Arguments
Optional arguments
-h | --help Displays the usage for the command.
--verbose Provides more output during command execution. This
may not have any effect for some commands.
-s | --include-sub-devices Displays all remote RAIDs. If sub devices are not
specified, the command displays only the top-level
RAIDs.
[-c|--cluster] cluster context Specifies the context path of the cluster for which to show the remote devices.
Examples
c2_Dr_device0049_2 - - -
c2_Dr_device0048_2 - - -
c2_Dr_device0047_2 vol2 - -
c2_Dr_device0045_2 - - -
c2_Dr_device0044_2 - - -
c2_Dr_device0042_2 - - -
c2_Dr_device0041_2 - - -
(181 more)
To see all results please run the command with --verbose option.
c2_Dr_device0049_2 - - -
c2_Dr_device0048_2 - - -
c2_Dr_device0047_2 vol2 - -
c2_Dr_device0045_2 - - -
c2_Dr_device0044_2 - - -
c2_Dr_device0042_2 - - -
c2_Dr_device0041_2 - - -
c2_Dr_device0040_2 - - -
c2_Dr_device0039_2 - - -
c2_Dr_device0038_2 - - -
c2_Dr_device0037_2 - - -
c2_Dr_device0036_2 - - -
c2_Dr_device0035_2 - - -
c2_Dr_device0034_2 - - -
c2_Dr_device0033_2 - - -
cluster shutdown
Starts the orderly shutdown of all directors at a single cluster.
Contexts
All contexts.
In /clusters context, command is shutdown.
Syntax
cluster shutdown
[-c|--cluster] context path
--force
Arguments
Required arguments
[-c|--cluster] context path Cluster to shut down.
Optional arguments
[-f|--force] Forces the shutdown to proceed without prompting for confirmation.
Description
WARNING Shutting down a VPLEX cluster could cause data unavailability. Please refer to the
VPLEX procedures in the SolVe Desktop for the recommended procedure to shut down a
cluster.
Shuts down the cluster firmware.
Note: Does not shut down the operating system on the cluster.
Use this command as an alternative to manually shutting down the directors in a cluster. When
shutting down multiple clusters:
l Shut each cluster down one at a time.
l Verify that each cluster has completed shutdown prior to shutting down the next one.
If shutting down multiple clusters, refer to the VPLEX procedures in the SolVe Desktop for the
recommended procedure for shutting down both clusters.
When a cluster completes shutting down, a log message is generated for each director
at the cluster.
Examples
In the following example:
l The cluster shutdown command without the --force argument starts the shutdown of the
specified cluster.
Because the --force argument was not used, a prompt to continue is displayed.
l The cluster summary commands display the transition to shutdown.
l The ll command in clusters/cluster-n context displays the shutdown cluster.
Status Description
-------- -----------------
Started. Shutdown started.
VPlexcli:/> cluster summary
Clusters:
 Name       Cluster ID  TLA             Connected  Expelled  Operational Status  Health State
 ---------  ----------  --------------  ---------  --------  ------------------  ------------
 cluster-1  1           FNM00103600160  true       false     unknown             unknown
 cluster-2  2           FNM00103600161  true       false     ok                  ok
Islands:
 Island ID  Clusters
 ---------  --------------------
 1          cluster-1, cluster-2
VPlexcli:/> cluster summary
Clusters:
 Name       Cluster ID  TLA             Connected  Expelled  Operational Status  Health State
 ---------  ----------  --------------  ---------  --------  ------------------  ------------
 cluster-1  1           FNM00103600160  false      -         -                   -
 cluster-2  2           FNM00103600161  true       false     degraded            degraded
Islands:
 Island ID  Clusters
 ---------  ---------
 2          cluster-2
Connectivity problems:
 From       Problem    To
 ---------  ---------  ---------
 cluster-2  can't see  cluster-1
VPlexcli:/> ll /clusters/cluster-1
Attributes:
Name Value
---------------------- ------------
allow-auto-join -
auto-expel-count -
auto-expel-period -
auto-join-delay -
cluster-id 7
connected false
default-cache-mode -
default-caw-template true
director-names [DirA, DirB]
island-id -
operational-status not-running
transition-indications []
transition-progress []
health-state unknown
health-indications []
See also
l cluster add
l cluster expel
l cluster forget
l director shutdown
cluster status
Displays a cluster's operational status and health state.
Contexts
All contexts.
Syntax
cluster status
Description
The following table shows the fields displayed in the cluster status command:
Field Description
See also
l cluster summary
l ds summary
cluster stop-local-cluster
Perform graceful shutdown of a single local cluster in VPLEX VS2 Metro configuration.
Context
All
Syntax
cluster stop-local-cluster
[-h | --help]
[--verbose]
[-p | --pre-check]
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
-p | --pre-check Performs cluster pre-shutdown preparation and validation.
Description
cluster stop-local-cluster supports cluster shutdown on a VS2 Metro configuration.
Follow the manual cluster shutdown procedures available in the SolVe Desktop for these cluster
shutdown scenarios:
l Shutting down a cluster in a VPLEX Local configuration (all VS1, VS2, and VS6 platforms)
l Shutting down both clusters in a VPLEX Metro configuration (all VS1, VS2, and VS6 platforms)
l Shutting down a single cluster in a VS6 VPLEX Metro configuration
Note:
n This command is cluster specific. Run this command on the management server of the
cluster to be shut down. For example, to shut down cluster-1, run the command
cluster stop-local-cluster on the cluster-1 management server only.
n Remote volumes should be migrated manually. Remote volumes may be very large, so
ensure that sufficient disk space is available to migrate them.
n Cluster shutdown on a MetroPoint configuration is not supported.
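A minimal session sketch based on the arguments above (output omitted; the pre-check performs pre-shutdown preparation and validation before the actual shutdown):

```
VPlexcli:/> cluster stop-local-cluster --pre-check
VPlexcli:/> cluster stop-local-cluster
```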
cluster summary
Displays a summary of all clusters and the connectivity between them.
Contexts
All contexts.
In /clusters context, command is summary.
Syntax
cluster summary
Description
The following tables show the fields available in the Clusters: and Islands: sections of the
cluster summary output.
Examples
Display summary for healthy clusters:
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
Display the cluster summary for a VPLEX Metro configuration with an inter-cluster link outage:
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
Display the cluster summary for a VPLEX Metro configuration with a cluster expelled:
Islands:
Island ID Clusters
--------- ---------
1 cluster-1
2 cluster-2
See also
l cluster status
cluster unexpel
Allows a cluster to rejoin the VPLEX.
Contexts
All contexts.
In /clusters context, command is unexpel.
Syntax
cluster unexpel
[-c|--cluster] context path
Arguments
Required arguments
[-c|--cluster] context path Cluster to unexpel.
Description
Clears the expelled flag for the specified cluster, allowing it to rejoin the VPLEX.
Examples
To manually unexpel a cluster, do the following:
1. Use the cluster summary command to verify that the cluster is expelled.
Islands:
Island ID Clusters
--------- --------------------
1 cluster-1, cluster-2
2. Use the ll command in the target cluster’s cluster context to display the cluster’s
allow-auto-join attribute setting.
VPlexcli:/> ll /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name Value
------------------------------ --------------------------------
allow-auto-join true
auto-expel-count 0
auto-expel-period 0
auto-join-delay 0
cluster-id 1
.
.
.
If the cluster’s allow-auto-join attribute is set to true, the cluster automatically rejoins
the system. Skip to step 4.
3. Navigate to the target cluster’s cluster context and use the set command to set the
cluster’s allow-auto-join flag to true. For example:
VPlexcli:/> cd clusters/cluster-1
VPlexcli:/clusters/cluster-1> set allow-auto-join true
4. Use the cluster unexpel command to manually unexpel the cluster, allowing it to
rejoin the VPLEX.
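Based on the Syntax section above, the invocation might look like the following sketch (the cluster context path is assumed for illustration):

```
VPlexcli:/> cluster unexpel --cluster /clusters/cluster-1
```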
5. Use the cluster summary command to verify all clusters are in one island and working as
expected.
VPlexcli:/> cluster summary
Clusters:
 Name       Cluster ID  TLA             Connected  Expelled  Operational Status  Health State
 ---------  ----------  --------------  ---------  --------  ------------------  ------------
 cluster-1  1           FNM00091300128  true       false     ok                  ok
 cluster-2  2           FNM00091300218  true       false     ok                  ok
Islands:
 Island ID  Clusters
 ---------  --------------------
 1          cluster-1, cluster-2
See also
l cluster expel
cluster-witness configure
Creates the cluster-witness context for enabling VPLEX Witness functionality and configuration
commands.
Contexts
All contexts.
Syntax
cluster-witness configure
[--verbose]
Arguments
Optional arguments
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Cluster Witness is an optional component of VPLEX Metro configurations.
Cluster Witness monitors both clusters and updates the clusters with its guidance, when
necessary. Cluster Witness allows VPLEX to distinguish between inter-cluster link failures versus
cluster failures, and to apply the appropriate detach-rules and recovery policies.
Note: This command must be run on both management servers to create cluster-witness CLI
contexts on the VPLEX.
The following must be true for the command to run successfully:
l The Cluster Witness must be disabled.
l The VPN from this management server to the Cluster Witness server must be established and
functional.
l The Cluster Witness server must be operational and connected.
Note: ICMP traffic must be permitted between clusters for this command to work
properly.
To verify that ICMP is enabled, log in to the shell on the management server and use the ping IP
address command where the IP address is for a director in the VPLEX.
If ICMP is enabled on the specified director, a series of lines is displayed:
VPlexcli:/> ls
clusters/ data-migrations/
distributed-storage/ engines/ management-server/
monitoring/ notifications/ recoverpoint/
security/
VPlexcli:/> cluster-witness configure
VPlexcli:/> ls /cluster-witness
Attributes:
Name Value
------------- -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45
Contexts:
components
See also
l Dell EMC VPLEX Procedures in SolVe Desktop - "VPLEX Witness: Install and Setup"
l cluster-witness disable
l cluster-witness enable
l configuration cw-vpn-configure
cluster-witness disable
Disables Cluster Witness on both management servers and on Cluster Witness Server.
Contexts
All contexts.
In /cluster-witness context, command is disable.
Syntax
cluster-witness disable
[-f|--force]
[-w|--force-without-server]
Arguments
Note: This command is available only after Cluster Witness has been configured and
cluster-witness CLI context is visible.
Optional arguments
[-f|--force] Force the operation to continue without confirmation. Allows
this command to be run from non-interactive scripts.
[-w|--force-without-server] Force the operation to disable Cluster Witness on both
clusters when connectivity to the Cluster Witness Server is
lost but the two clusters are connected. Use this option when
Cluster Witness fails or disconnects from both clusters and
recovery is unlikely to happen soon.
CAUTION Use the --force-without-server option with extreme care. Use this option to
disable Cluster Witness in order to use configured rule-sets for I/O to distributed volumes in
consistency groups.
Note: If the Cluster Witness Server becomes reachable when the --force-without-server
option is used, the command will also disable the Cluster Witness Server.
Description
Disables Cluster Witness on both management servers and on the Cluster Witness server.
Allows consistency group rule-sets to dictate I/O behavior to distributed virtual volumes in
consistency groups.
Note: Cluster Witness has no effect on distributed virtual volumes outside of consistency
groups.
CAUTION Use this command from only one management server.
Disabling Cluster Witness does not imply that Cluster Witness components are shut down. If
Cluster Witness is disabled, the clusters stop sending health-check traffic to the Cluster Witness
Server and the Cluster Witness Server stops providing guidance back to the clusters.
Note: If the Cluster Witness Server, or connectivity to it, will not be operational for a
long period, use the --force-without-server argument. This prevents a
system-wide Data Unavailability of all distributed virtual volumes in consistency groups if an
additional inter-cluster link communication or cluster failure occurs while there is no access to
Cluster Witness Server and Cluster Witness is enabled. Once Cluster Witness Server is
accessible from both management servers, use the cluster-witness enable command to
re-enable the functionality.
Automatic pre-checks ensure that the Cluster Witness configuration is in a state where it can be
disabled. Pre-checks:
l Verify management connectivity between the management servers
l Verify connectivity between management servers and the Cluster Witness Server
l Verify all the directors are up and running
Note: If the --force-without-server option is used, the automatic pre-check to verify
connectivity between management servers and the Cluster Witness Server is not performed.
l Verify connectivity between directors and each management server
l Verify that Cluster Witness is configured on both clusters
l Verify that the meta-volume on both clusters is healthy
Examples
Disable Cluster Witness from the root context:
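A sketch of this invocation (output omitted; without --force, a confirmation prompt is displayed):

```
VPlexcli:/> cluster-witness disable
```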
Disable Cluster Witness from the cluster-witness context when the Cluster Witness Server is
not reachable. In the following example:
l The disable command fails because the Cluster Witness Server is not reachable.
l The disable --force-without-server command disables Cluster Witness.
l The ll /components command displays the state of the Cluster Witness configuration.
VPlexcli:/cluster-witness> disable
WARNING: Disabling Cluster Witness may cause data unavailability in the
event of a disaster. Please consult the VPLEX documentation to confirm
that you would like to disable Cluster Witness. Continue? (Yes/No) y
cluster-witness disable: Evaluation of <<disable>> failed.
cause: Could not disable Cluster Witness.
cause: Cluster Witness cannot be disabled due to
failure of a pre-check.
cause: Unable to communicate with Cluster Witness
Server. Please check the state of the Cluster Witness Server and its
connectivity and try again
See also
l Procedures in SolVe Desktop - "VPLEX Witness: Install and Setup"
l cluster summary
l cluster-witness enable
l vpn status
l cluster-witness configure
l configuration cw-vpn-configure
cluster-witness enable
Enables Cluster Witness on both clusters and on the Cluster Witness Server.
Contexts
All contexts.
In /cluster-witness context, command is enable.
Syntax
cluster-witness enable
Description
Note: This command is available only after Cluster Witness has been configured and
cluster-witness CLI context is visible.
CAUTION Use this command from the management server on only one cluster.
Automatic pre-checks run before the cluster-witness enable command is issued. Pre-
checks verify that the system is in a state that Cluster Witness can be enabled. Pre-checks:
l Verify management connectivity between both the management servers
l Verify connectivity between each management server and the Cluster Witness Server
l Verify connectivity between directors and each management server
l Verify that Cluster Witness CLI context is configured on both clusters
l Verify that a meta-volume is present and healthy on both clusters
l Verify all the directors are healthy
If any of the pre-checks fail, the command displays the cause of failure on the specific component
and warns about possible Data Unavailability risk, if any.
WARNING There is no rollback mechanism. If the enable command fails on some components
and succeeds on others, it may leave the system in an inconsistent state. If this occurs,
consult the troubleshooting section of the SolVe Desktop and/or contact Dell EMC Customer
Support.
The cluster-witness context does not appear in the CLI unless the context has been created
using the cluster-witness configure command. The cluster-witness CLI context
appears under the root context. The cluster-witness context includes the following
sub-contexts:
l /cluster-witness/components/cluster-1
l /cluster-witness/components/cluster-2
l /cluster-witness/components/server
The diagnostic field may report the following messages:
l Remote cluster-x hasn't yet established connectivity with the server - The
cluster has never connected to the Cluster Witness Server.
l Cluster-x has been out of touch from the server for X days, Y secs - The
Cluster Witness Server has not received messages from the given cluster for longer than
60 seconds.
l Cluster witness server has been out of touch for X days, Y secs - Either
cluster has not received messages from the Cluster Witness Server for longer than
60 seconds.
l Cluster Witness is not enabled on component-X, so no diagnostic
information is available - The Cluster Witness Server or either of the clusters is
disabled.
Examples
In the following example:
l The ll command verifies that Cluster Witness is configured (the context exists)
l The cd command changes the context to cluster-witness
l The cluster-witness enable command enables VPLEX Witness
l The ll /components/* command displays the components on cluster-1, cluster-2, and the
Cluster Witness server:
VPlexcli:/> ll /cluster-witness
Attributes:
Name Value
------------------ -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.235
Contexts:
Name Description
---------- --------------------------
components Cluster Witness Components
VPlexcli:/> cd /cluster-witness
VPlexcli:/cluster-witness> cluster-witness enable
VPlexcli:/cluster-witness> ll /components/*
/cluster-witness/components/cluster-1:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state of cluster-1 is in-contact (last
state change: 0 days, 56 secs ago; last message
from server: 0 days, 0 secs ago.)
id 1
management-connectivity ok
operational-state in-contact
/cluster-witness/components/cluster-2:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state of cluster-2 is in-contact (last
state change: 0 days, 56 secs ago; last message
from server: 0 days, 0 secs ago.)
id 2
management-connectivity ok
operational-state in-contact
/cluster-witness/components/server:
Name Value
----------------------- ------------------------------------------------------
admin-state enabled
diagnostic INFO: Current state is clusters-in-contact (last state
change: 0 days, 56 secs ago.) (last time of
See also
l Procedures in the SolVe Desktop
l cluster summary
l cluster-witness disable
l vpn status
l cluster-witness configure
l configuration cw-vpn-configure
collect-diagnostics
Collects the two latest core files from each component, as well as logs and configuration
information from the management server and directors.
Contexts
All contexts.
Syntax
collect-diagnostics
--notrace
--nocores
--noperf
--noheap
--noextended
--faster
--local-only
--minimum
--allcores
--large-config
--recoverpoint-only
--out-dir directory
Arguments
Optional arguments
--notrace Do not collect fast trace dump files from the directors.
--nocores Do not collect core files from the directors.
--noperf Do not collect performance sink files.
--noheap Do not dump the management console's heap.
--noextended Omit collection of extended diagnostics. Implies use of the --nocores,
--noheap, and --notrace arguments.
--faster Omits some of the more time-consuming operations. Use only when
collect-diagnostics is expected to take very long, for example on large
configurations.
Description
Collects logs, cores, and configuration information from the management server and directors.
Places the collected output files in the /diag/collect-diagnostics-out directory on the
management server.
Two compressed files are placed in the /diag/collect-diagnostics-out directory:
l tla-diagnostics-extended-timestamp.tar.gz - Contains java heap dump, fast trace
dump, two latest core files, and two latest RecoverPoint kdriver core files (if they exist).
l tla-diagnostics-timestamp.tar.gz - Contains everything else, including a
directory, /opt/recoverpoint, which contains the RecoverPoint splitter logs in a zip file
(vpsplitter.log.xx, vpsplitter.log.periodic_env, and vpsplitter.log.current_env).
Best practice is to collect both files. The extended file is usually large, and thus takes some time to
transfer.
Collect all RecoverPoint diagnostics, including all available RecoverPoint core files, and send the
output to the default directory:
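Based on the arguments listed above, this might be sketched as follows (the flag combination is assumed from the argument descriptions; output omitted):

```
VPlexcli:/> collect-diagnostics --recoverpoint-only --allcores
```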
See also
l cluster configdump
l director appdump
l getsysinfo
l sms dump
configuration complete-system-setup
Completes the configuration.
Contexts
All contexts.
Syntax
configuration complete-system-setup
Description
Completes the automated EZ-Setup Wizard.
This command must be run twice: once on each cluster.
Note: Before using this command on either cluster, first use the configuration
system-setup command (on both clusters).
Examples
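A minimal sketch of the invocation on one cluster (output omitted; repeat on the other cluster):

```
VPlexcli:/> configuration complete-system-setup
```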
See also
l configuration connect-remote-directors
l configuration continue-system-setup
l configuration sync-time-clear
l configuration system-setup
configuration configure-auth-service
Configures the authentication service selected by the user.
Contexts
All contexts.
Syntax
configuration configure-auth-service
Description
Configures the selected authentication service.
See the authentication directory-service configure command for a description of
the available authentication services.
Examples
Configure the selected authentication service:
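A sketch of the invocation (the command's interactive prompts and output are omitted):

```
VPlexcli:/> configuration configure-auth-service
```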
Note: After running this command, run the webserver restart command.
See also
l authentication directory-service configure
l authentication directory-service unconfigure
configuration connect-local-directors
Connects to the directors in the local cluster.
Contexts
All contexts.
Syntax
configuration connect-local-directors
[-f|--force]
Arguments
Optional arguments
[-f|--force] Connect to local directors regardless of current connections.
Description
This command executes connect commands to all local directors.
Use the --force argument if one or more local directors are already connected.
The connections use the director’s default name. For example: director-1-1-A.
Examples
Connect the local directors to the cluster:
Use the --force argument when the directors are already connected:
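Both invocations might be sketched as follows (output omitted):

```
VPlexcli:/> configuration connect-local-directors
VPlexcli:/> configuration connect-local-directors --force
```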
See also
l configuration connect-remote-directors
l configuration complete-system-setup
l configuration system-setup
configuration connect-remote-directors
Connects to the remote directors after the VPN connection has been established.
Contexts
All contexts.
Syntax
configuration connect-remote-directors
[-c|--engine-count] engine count
[-f|--force]
Arguments
Optional arguments
[-c|--engine-count] engine count Specifies the number of engines present at the remote
site.
[-f|--force] Connect to remote directors regardless of current
connections.
Description
During system setup for a VPLEX Metro or Geo configuration, use the configuration
connect-remote-directors command to connect the local cluster to the directors in the
remote cluster.
Run this command twice: once from the local cluster to connect to remote directors, and once
from the remote cluster to connect to local directors.
Examples
Connect remote directors to the directors in the local cluster:
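A sketch of the invocation (output omitted; the engine count of 1 is an assumed value for illustration):

```
VPlexcli:/> configuration connect-remote-directors --engine-count 1
```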
See also
l configuration connect-local-directors
l configuration complete-system-setup
l configuration system-setup
configuration continue-system-setup
Continues the EZ-Setup Wizard after back-end storage is configured and allocated for the cluster.
Contexts
All contexts.
Syntax
configuration continue-system-setup
Description
This command validates the back-end configuration for the local cluster. The cluster must have its
back-end allocated and configured for this command to succeed.
Use the configuration system-setup command to start the EZ-Setup Wizard to configure the
VPLEX.
Zone the back-end storage to the port WWNs of the VPLEX back-end ports.
After you complete the back-end storage configuration and allocation for the cluster, use this
command to complete the initial configuration.
Examples
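A minimal sketch of the invocation (output omitted; the back-end must already be zoned and allocated):

```
VPlexcli:/> configuration continue-system-setup
```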
See also
l configuration system-setup
configuration cw-vpn-configure
Establishes VPN connectivity between a VPLEX management server and the Cluster Witness
Server and starts the VPN tunnel between them.
Contexts
All contexts.
Syntax
configuration cw-vpn-configure
[-i|--ip-address] public-ip-address
[-c | --cwsHostCert=]host-certificate
-k | --cwsHostKey= hostkey
-f | --force
-h | --help
--verbose
Arguments
Required arguments
[-i|--ip-address] public-ip-address Valid public IP address of the Cluster Witness
server.
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution.
This may not have any effect for some commands.
[-c | --cwsHostCert=] host-certificate Specifies the absolute path of the Cluster Witness
server host certificate to import.
[-k | --cwsHostKey=] hostkey Specifies the absolute path of the Cluster Witness
server host key to import.
-f | --force Force the update of the configuration file of the
Cluster Witness server.
Description
The command is interactive and requires inputs to complete successfully.
Note: This command must be run on both management servers, first on cluster-1 and then
on cluster-2, to establish the VPN between the management servers and the Cluster
Witness server.
A management server authenticates an external client entity based on the Certificate Authority it
trusts. The trust store is used for web/REST clients over https connections, and the ssl database
for inter-site VPN connections. The CA trust can be self-signed based on local CA subject
information, or signed by a third-party vendor (such as Verisign or Globalsign). Executing the
command without the -c and -k options creates a self-signed Cluster Witness server host
certificate, signed by the same CA that is used in establishing the VPN between the two
management servers. To import the Cluster Witness server host certificate and key, run the
command with the --cwsHostCert or -c and --cwsHostKey or -k options respectively.
Note: The Cluster Witness server certificate must be signed by the same CA that signed the
certificates used in establishing the VPN between the two management servers. In the case of
a self-signed certificate, this is taken care of by the command itself. In the case of import,
provide a Cluster Witness server certificate signed by the same CA that signed the certificates
used in establishing the VPN between the two management servers.
Prerequisites
Before executing the command ensure the following conditions are met:
l VPLEX Metro setup is successfully completed using EZSetup. This creates a VPN connection
between VPLEX management servers.
l The Cluster Witness server certificate is signed by the same CA that signed the
certificates used in establishing the VPN between the two management servers.
Note: If this command is run when the VPN is already configured, the following error message
is displayed: VPN connectivity is already established.
configuration cw-change-password
This command updates the Cluster Witness Server password.
Contexts
All contexts.
Syntax
configuration cw-change-password
[-p |--prompt]
[-c |--promptCurrentPassword]
[-f |--force]
-h | --help
--verbose
Arguments
Optional arguments
[-p|--prompt] Prompts for a new Cluster Witness Server password.
[-c |--promptCurrentPassword] Prompts for current password instead of retrieving
current password from the lockbox. This option is
typically used if the Cluster Witness Server password
has been previously changed.
[-f |--force] Forces Cluster Witness Server password update. This
option is used to change the current password, if the
password is not the default password.
-h | --help Displays usage for this command.
--verbose Provides more output during command execution. This
may not have any effect for some commands.
Description
This command is used to change the Cluster Witness Server password.
The Cluster Witness Server password is identical to the management server password by default.
It is imperative that the Cluster Witness password be changed to provide system security.
The Dell EMC VPLEX Security Configuration Guide provides recommendations about choosing an
appropriately complex password.
If a customer attempts to change a password when the password is not the default password, the
following informational message is displayed: Cluster Witness default password is
already changed.
configuration cw-vpn-reset
Resets the VPN connectivity between the management server and the Cluster Witness Server.
Contexts
All contexts.
Syntax
configuration cw-vpn-reset
Description
Resets the VPN between the management server on a cluster and the Cluster Witness Server.
WARNING Use this command with EXTREME CARE. This command will erase all Cluster
Witness VPN configurations.
This command should be used only when Cluster Witness is disabled, because it is not providing
guidance to VPLEX clusters at that time.
Using the cw-vpn-reset command when Cluster Witness is enabled causes Cluster Witness to
lose connectivity with the management server and generates a call-home event.
In order to complete, this command requires VPN connectivity between the management server
and the Cluster Witness Server.
Note: Run this command twice: once from each management server.
Examples
From the first cluster:
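The output below would presumably follow an invocation such as this sketch (the command takes no arguments per the Syntax section):

```
VPlexcli:/> configuration cw-vpn-reset
```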
Successfully removed the connection name and updated the Cluster Witness Server ipsec.conf
file
Successfully transferred the ipsec configuration file to the Cluster Witness Server and
restarted the IPSec process
Successfully removed the certificate files from the Cluster Witness Server
Successfully removed the cluster witness connection name from the Management Server
ipsec.conf file
Successfully restarted the ipsec process on the Management Server
Resetting Cluster Witness Server SSH configuration.
Verifying if the Cluster Witness has been configured on this Management Server...
Verifying if the Cluster Witness has been enabled on this Management Server...
VPN Reset between the Management Server and the Cluster Witness Server is now complete.
The log summary for configuration automation has been captured in /var/log/VPlex/cli/
VPlexconfig.log
The task summary and the commands executed for each automation task has been captured
in /var/log/VPlex/cli/VPlexcommands.txt
The output for configuration automation has been captured in /var/log/VPlex/cli/capture/
VPlexconfiguration-session.txt
See also
l cluster-witness configure
l cluster-witness disable
l cluster-witness enable
configuration enable-front-end-ports
After the meta-volume is created using the EZ-Setup wizard, enable front-end ports using this
command.
Contexts
All contexts.
Syntax
configuration enable-front-end-ports
Description
Completes the initial system configuration using the EZ-Setup Wizard. After configuring the
meta-volume on the cluster, use this command to resume setup and enable the front-end ports
on the local cluster.
Prerequisite: The cluster must be configured with a meta-volume and a meta-volume backup
schedule.
Examples
Enable the front-end ports.
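A minimal sketch of the invocation (output omitted):

```
VPlexcli:/> configuration enable-front-end-ports
```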
See also
l meta-volume create
l configuration metadata-backup
l configuration complete-system-setup
l configuration system-setup
Description
This command:
l Deletes the current call-home notification and system reporting configuration data
l Disables call-home notification
See also
l configuration event-notices-reports config
l configuration event-notices-reports show
l notifications call-home test
l scheduleSYR list
configuration event-notices-reports-show
This command shows call-home notification connection records and system connections based on
the ConnectEMC configuration.
Contexts
All contexts.
Syntax
configuration event-notices-reports-show
[-h | --help]
[--verbose]
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
Description
This command shows call-home notification connection records and system connections based on
the ConnectEMC configuration. There are no limitations.
Connection types are:
See also
l configuration event-notices-reports config
l configuration event-notices-reports show
Optional arguments
[--verbose] Provides more output during command execution. This may not have
any effect for some commands.
Description
Disables periodic director flashdir backups.
Example
Disable flashdir backups.
See also
l configuration flashdir-backup enable
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Enables periodic director flashdir backups.
Example
Enable backup of the flashdir.
See also
l configuration flashdir-backup disable
configuration get-product-type
Displays the VPLEX product type (Local or Metro).
Contexts
All contexts.
Syntax
configuration get-product-type
Description
Displays whether the system is a Local or Metro configuration.
Example
Display the configuration type.
See also
l cluster status
l cluster summary
l version
configuration join-clusters
Validates WAN connectivity and joins the two clusters.
Contexts
All contexts.
Syntax
configuration join-clusters
[-i|--remote-ip] remote IP address
[-h|--help]
Arguments
Optional arguments
[-i|--remote-ip] remote IP address Specifies the IP address of the remote server.
Description
This command validates WAN connectivity and joins the two clusters.
Note: This command applies when the clusters are configured as a Metro over Fibre Channel using the EZ-Setup wizard.
Example
Join clusters at the specified remote IP address:
See also
l cluster add
l configuration continue-system-setup
l configuration system-setup
configuration metadata-backup
Configures and schedules the daily backup of VPLEX metadata.
Contexts
All contexts.
Syntax
configuration metadata-backup
Description
Selects the volumes to use as backup volumes and creates the initial backup of both volumes.
The backup meta-volume's size should be equal to or greater than the active meta-volume size.
The current requirement is 78 GB per storage volume.
See the Dell EMC VPLEX Technical Notes for best practices regarding the kind of back-end array
volumes to consider for a meta-volume.
Note: This command must be executed on the management server on which you want to create
the backups.
Runs an interview script that prompts for values to configure and schedule the daily backups of
VPLEX metadata.
l Selects the volumes on which to create the backup
l Updates the VPLEX configuration .xml file (VPlexconfig.xml)
l Creates an initial backup on both selected volumes
l Creates two backup volumes named:
n volume-1_backup_timestamp
n volume-2_backup_timestamp
l Schedules a backup at a time selected by the user
Enter two or more storage volumes, separated by commas.
CAUTION Renaming backup metadata volumes is not supported.
Specify two or more storage volumes. Storage volumes must be:
- unclaimed
- on different arrays
Example
Configure the VPLEX metadata backup schedule:
VPlexcli:/> ls /clusters/cluster-2/system-volumes/
/clusters/cluster-2/system-volumes:
Detroit_LOGGING_VOL_vol Detroit_METAVolume1 Detroit_METAVolume1_backup_2010Dec23_052818
Detroit_METAVolume1_backup_2011Jan16_211344
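The backup volume names in the listing above follow the volume-name_backup_timestamp convention described earlier. As an illustration only (Python is not part of the VPLEX CLI, and the helper below is hypothetical), a backup name in that format can be split and its timestamp parsed as follows:

```python
from datetime import datetime

def parse_backup_name(name: str):
    """Split a backup meta-volume name of the form
    volume-name_backup_timestamp (e.g.
    Detroit_METAVolume1_backup_2010Dec23_052818) into the base
    volume name and the time the backup was taken."""
    base, _, stamp = name.partition("_backup_")
    if not stamp:
        raise ValueError(f"not a backup volume name: {name}")
    # Timestamp layout: 4-digit year, abbreviated month, day,
    # then HHMMSS after an underscore.
    return base, datetime.strptime(stamp, "%Y%b%d_%H%M%S")

base, taken = parse_backup_name("Detroit_METAVolume1_backup_2010Dec23_052818")
```

The timestamp layout (year, abbreviated month, day, then HHMMSS) matches the names shown in the listing above.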
See also
l configuration remote-clusters clear-addresses
l configuration show-meta-volume-candidates
l configuration system-setup
configuration register-product
Registers the VPLEX product with Dell EMC.
Syntax
configuration register-product
Description
Use this command during installation:
l After configuring the external IP address and host name.
l Before using the configuration system-setup command.
The command runs the product registration wizard. Prompts for the following information:
l Company contact name
l E-mail and phone number
l Mailing address
The command uses the responses to create a file for product registration. A prompt is then
displayed asking how the registration should be sent to Dell EMC. Two methods are available:
n Attach the registration file to an e-mail and send it to Dell EMC.
n Send the registration file to Dell EMC through an SMTP server.
If this option is selected, a prompt for an SMTP server IP address is displayed.
Examples
Register the product.
See also
l configuration continue-system-setup
l configuration system-setup
Optional arguments
[-c|--cluster] cluster context * Specifies the cluster whose connectivity configuration is to be modified. Typically the cluster above the current context.
[-d|--default] Applies default configuration values. The default configuration pulls the cluster-address attribute of the active subnets from all remote clusters. This option does not require --remote-cluster or --addresses.
* - argument is positional.
Description
Adds one or more address:port configurations for the specified remote-cluster entry for this
cluster.
See the VPLEX procedures in the SolVe Desktop for more information on managing WAN-COM IP
addresses.
Examples
Add a cluster 2 address to cluster 1:
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [192.168.11.252:11000]
VPlexcli:/> configuration remote-clusters add-addresses -c cluster-1/ -r cluster-2/ -a
10.6.11.252:11000
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
Add cluster 2 addresses to cluster 1, assuming that the WAN COM port groups for cluster 2 have
their cluster-address properly configured:
VPlexcli:/clusters/cluster-1/connectivity/wan-com> ls
Attributes:
Name Value
------------------------ -----------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses -
VPlexcli:/clusters/cluster-1/connectivity/wan-com> remote-clusters add-addresses -d
VPlexcli:/clusters/cluster-1/connectivity/wan-com> ls
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
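The remote-cluster-addresses entries above use the ip-address:port format. As a hypothetical illustration (this snippet is not part of the product), such an entry can be validated and split like this:

```python
import ipaddress

def parse_address(entry: str):
    """Split an ip-address:port entry, as used by
    configuration remote-clusters add-addresses, into its parts.
    Raises ValueError for malformed addresses or missing ports."""
    addr, sep, port = entry.rpartition(":")
    if not sep:
        raise ValueError(f"expected ip-address:port, got {entry!r}")
    return ipaddress.ip_address(addr), int(port)

ip, port = parse_address("10.6.11.252:11000")
```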
See also
l configuration remote-clusters clear-addresses
Optional arguments
[-c|--cluster] cluster context * Specifies the cluster whose connectivity configuration is to be modified. Typically the cluster above the current context.
[-r|--remote-cluster] cluster context * Specifies the remote-cluster configuration entry to modify. Cannot be the same context specified in --cluster.
[-a|--addresses] addresses [, addresses...] Specifies one or more remote ip-address:port-number entries to remove for the specified --remote-cluster. If no entry is specified, all entries are removed for the specified --remote-cluster.
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This may not have any effect for some commands.
* - argument is positional.
Description
Clears one, several, or all address:port configurations for the specified remote-cluster entry for
this cluster.
See the VPLEX procedures in the SolVe Desktop for more information on managing WAN-COM IP
addresses.
Examples
Clear a cluster 2 address on cluster 1:
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [10.6.11.252:11000, 192.168.11.252:11000]
VPlexcli:/> configuration remote-clusters clear-addresses -c cluster-1/ -r cluster-2/ -a
10.6.11.252:11000
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com
/clusters/cluster-1/connectivity/wan-com:
Attributes:
Name Value
------------------------ ---------------------------------------------------
discovery-address 224.100.100.100
discovery-port 10000
listening-port 11000
remote-cluster-addresses cluster-2 [192.168.11.252:11000]
See also
l configuration remote-clusters add-addresses
configuration show-meta-volume-candidates
Display the volumes which meet the criteria for a VPLEX meta volume.
Contexts
All contexts.
Syntax
configuration show-meta-volume-candidates
Description
Candidate volumes are:
l Unclaimed
l At least 78 GB capacity
CAUTION If you configure the meta volume on a CLARiiON® array, do not configure the meta
volume on the vault drives of the CLARiiON.
Dell EMC recommends the following for meta volumes:
l Read caching should be enabled
l A hot spare meta volume be pre-configured in case of a catastrophic failure of the active meta
volume.
Performance is not critical for meta volumes. The minimum performance allowed is 40 MB/s and
100 4K IOPS. Isolate the physical spindles for meta volumes from application workloads.
Availability IS critical for meta volumes. Best practice is to mirror the meta volume across two or
more back-end arrays. Choose the arrays used to mirror the meta volume such that they are not
required to migrate at the same time.
Examples
Show meta volume candidates:
See also
l meta-volume create
l configuration metadata-backup
l configuration system-setup
This command destroys configuration information, thereby requiring user confirmation before
executing. The list of subnets to be cleared is presented for confirmation. You can bypass this
check by specifying the --force option.
The configuration subnet clear command works for all subnet contexts under /
clusters/*/connectivity/.
Subnet properties cannot be modified unless both of the following are true:
l Ports in the port-group are disabled
l Directors in the local cluster are reachable
Examples
Clear all subnets in ip-port-group-3:
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/
subnet> ls
Name Value
---------------------- -----
cluster-address 192.168.10.10
gateway 192.168.10.1
mtu 1500
prefix 192.168.10.0/255.255.255.0
proxy-external-address 10.10.42.100
remote-subnet-address 192.168.100.10/255.255.255.0
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/
subnet> clear ./
Clearing subnet for the following port-groups:
Cluster Role Port-Group
--------- ------- ---------------
cluster-1 wan-com ip-port-group-3
Are you sure you want to clear these subnets? (Yes/No) yes
VPlexcli:/clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-group-3/
subnet> ls
Name Value
---------------------- -----
cluster-address -
gateway -
mtu 1500
prefix -
proxy-external-address -
remote-subnet-address -
VPlexcli:/> ls /clusters/cluster-1/connectivity/wan-com/port-groups/ip-port-
group-3/subnet
Name Value
---------------------- -----
cluster-address 192.168.10.10
gateway 192.168.10.1
mtu 1500
prefix 192.168.10.0/255.255.255.0
proxy-external-address 10.10.42.100
remote-subnet-address 192.168.100.10/255.255.255.0
VPlexcli:/> configuration subnet clear /clusters/cluster-1/connectivity/wan-
com/port-groups/ip-port-group-3/subnet/
Clearing subnet for the following port-groups:
Cluster Role Port-Group
--------- ------- ---------------
cluster-1 wan-com ip-port-group-3
Are you sure you want to clear these subnets? (Yes/No) yes
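The prefix and remote-subnet-address values in the listings above use address/netmask notation (for example, 192.168.10.0/255.255.255.0). As an aside, standard tooling accepts this notation directly; the Python snippet below is illustrative only and not part of the VPLEX CLI:

```python
import ipaddress

# ip_network accepts the address/netmask form used in the subnet
# listings above and normalizes it to prefix-length notation.
net = ipaddress.ip_network("192.168.10.0/255.255.255.0")

# The cluster-address from the example above falls inside this subnet.
member = ipaddress.ip_address("192.168.10.10") in net
```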
Required arguments
[-s|--subnet] subnet context * Specifies the front-end or back-end iSCSI subnet where a
routing prefix should be added.
Optional arguments
* - argument is positional.
Description
Adds a routing prefix (remote subnet), to be accessed through the subnet’s gateway, to a
front-end or back-end iSCSI subnet.
Note: This command is valid only on systems that support iSCSI devices.
Examples
Add a remote subnet prefix to the iSCSI port-group 8 subnet:
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-
group-8/subnet>
Successfully added 1 remote subnets.
See also
l configuration subnet clear
l configuration subnet remote-subnet remove
Required arguments
[-s|--subnet] subnet context * Specifies the front-end or back-end iSCSI subnet where a
routing prefix should be removed.
Optional arguments
* - argument is positional.
Description
Removes a routing prefix (remote subnet), accessed through the subnet’s gateway, from a
front-end or back-end iSCSI subnet.
Note: This command is valid only on systems that support iSCSI devices.
Examples
Remove a remote subnet prefix from the iSCSI port-group 8 subnet:
VPlexcli:/clusters/cluster-1/connectivity/back-end/port-groups/iscsi-port-
group-8/subnet> remote-subnets remove ./ 192.168.1.0/255.255.255.0
Successfully removed 1 remote subnets.
See also
l configuration subnet clear
l configuration subnet remote-subnet add
configuration sync-time
Synchronizes the time of the local management server with a remote NTP server. Remote Metro
clusters are synchronized with the local management server.
Contexts
All contexts.
Syntax
configuration sync-time
[-i|--remote-ip] remote-server-IP-address
[-f|--force]
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-i|--remote-ip] remote- Specifies the IP address of the remote NTP server.
server-IP-address
[-f|--force] Skips the verification prompt.
[-h|--help] Displays the command line help for this command.
[--verbose] Provides more output during command execution. This
may not have any effect for some commands.
Description
In a VPLEX Metro configuration, the configuration sync-time command synchronizes the
time between the local management server and a remote management server using NTP.
CAUTION May cause the CLI or SSH session to disconnect. If this occurs, re-log in and
continue system set-up where you left off.
This command synchronizes Cluster 1 with the external NTP server, or synchronizes Cluster 2 with
the local management server on Cluster 1.
Use this command before performing any set-up on the second cluster of a VPLEX Metro
configuration.
Use this command during initial system configuration before using the configuration
system-setup command.
If you do not provide the IP address of the NTP server or the local management server, the
command prompts for the public IP address of the NTP server, or the local management server.
Examples
Running the synchronization time task on cluster-2:
server:10.108.69.121
Syncing time on the management server of a Metro/Geo system could cause the
VPN to require a restart. Please Confirm (Yes: continue, No: exit) (Yes/No)
yes
Shutting down network time protocol daemon (NTPD)..done
PING 10.108.69.121 (10.108.69.121) 56(84) bytes of data.
64 bytes from 10.108.69.121: icmp_seq=1 ttl=64 time=0.166 ms
--- 10.108.69.121 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
Starting network time protocol daemon (NTPD)..done
executing sudo /etc/init.d/ntp stop
executing /bin/ping 10.108.69.121 -c 1
executing sudo /usr/sbin/sntp -r -P no -u 10.108.69.121
executing sudo /etc/init.d/ntp start
Now running 'vpn status' in case the VPN was affected:
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.108.69.121 is reachable
Remote Internal Gateway addresses are reachable
Verifying the VPN status between the management server and the cluster
witness server...
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
Synchronize Time task completed.
See also
l configuration sync-time-clear
l configuration sync-time-show
l configuration system-setup
configuration sync-time-clear
Clears the NTP configuration on a management server.
Contexts
All contexts.
Syntax
configuration sync-time-clear
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Clears the NTP configuration and restarts the NTP protocol.
This command clears the NTP configuration on the management server of the cluster on which it
is run.
Examples
Clears the NTP configuration:
See also
l configuration sync-time
l configuration sync-time-show
l configuration system-setup
configuration sync-time-show
Displays the NTP configuration of a cluster’s management server.
Contexts
All contexts.
Syntax
configuration sync-time-show
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Used to check the management server’s NTP configuration.
Examples
Show that no NTP is configured:
See also
l configuration sync-time
l configuration sync-time-clear
l configuration system-setup
configuration system-reset
Resets the cluster to the manufacturing state.
Contexts
All contexts.
Syntax
configuration system-reset
Description
Resets the cluster to the manufacturing state.
Use this command to cancel all the configuration completed using the EZ-Setup wizard and return
the system to its factory default settings. Any values specified during the configuration session
become the defaults displayed when EZ-Setup is re-run.
Run the configuration system-reset command only on a new cluster.
Do not run this command on a configured cluster.
For this command to execute, no meta-volume may exist.
Examples
Reset the cluster configuration.
See also
l configuration system-setup
configuration system-setup
Starts the EZ-Setup Wizard automated configuration tool.
Contexts
All contexts.
Syntax
configuration system-setup
Description
Configures the VPN and establishes a secure connection between the clusters.
Use the exit command at any time during the session to exit EZ-Setup. Use the
configuration system-setup command to resume EZ-Setup. The process restarts from the
first step. Any values from the previous session appear as default values.
Note: To run this command, there must be no meta-volume configured, and no storage
exposed to hosts.
EZ-Setup Wizard automates the following steps:
l Connects to the local directors
l Sets the cluster ID
l Sets the Cluster IP Seed
l Sets the director count
l Commissions the directors
l Forces a service password change
l Configures NTP
l Configures Fibre Channel switch
l Enables COM and back-end ports
l Creates Web server and VPN certificate
l Configures VPN
l Configures/enables call-home
l Configures/enables System Reporting (SYR)
Note: Once the service password is changed, communicate the new password to Dell EMC
service and to the VPLEX System Administrator.
Examples
VPlexcli:/> configuration system-setup
See also
l Dell EMC VPLEX Configuration Guide
l Dell EMC VPLEX Security Configuration Guide
l About cluster IP seed and cluster ID section in the security ipsec-configure command
l configuration continue-system-setup
l configuration sync-time-clear
configuration upgrade-meta-slot-count
Upgrades the slot count of the active meta volume at the given cluster to 64,000 slots.
Context
/clusters/cluster/system-volumes
Syntax
configuration upgrade-meta-slot-count
[-c | --cluster=] cluster
[-d | --storage-volumes=] volume [, volume ...]
[-h | --help ]
[--verbose]
[-f | --force]
Arguments
Optional arguments
[-c | --cluster=] cluster The cluster at which to upgrade the slot count of the active meta volume. When specified from within a /clusters/cluster context, the value of that context is used as cluster. The -c or --cluster argument is positional.
[-d | --storage-volumes=] volume [, volume ...] Creates a temporary meta volume from one or more storage volumes. After the command completes successfully, the command destroys the temporary meta volume. The specified storage volumes must not be empty, and must be at the implied or specified cluster. Type the system IDs for the storage volumes separated by commas. Specify two or more storage volumes. Storage volumes should be on different arrays.
[-f | --force] Forces the upgrade to proceed without asking for confirmation.
Description
On the metadata volume, each slot stores header information for each storage volume, extent, and
logging volume. This command upgrades the slot count of the active meta volume at the given
cluster to 64,000 slots.
By default, the oldest meta volume backup at the cluster serves as a temporary meta volume.
If you specify the -d or --storage-volume option, then the command creates the temporary
meta volume from scratch from those disks. The temporary meta volume is active while the
currently-active meta volume is being upgraded. At the end of the process, VPLEX reactivates the
original meta volume and the temporary meta volume becomes a backup again. VPLEX renames
the backup to reflect the new point in time at which it became a backup.
Meta-volumes differ from standard storage volumes in the following ways:
connect
Connects to a director.
Contexts
All contexts.
Syntax
connect
[-o|--host] [host-name|IP address]
--logport port number
--secondary-host [host name|IP address]
--secondary-logport secondary port number
[-n|--name] name
[-t|--type] system type
[-p|--password] password
[-c|--connection-file] filename
[-s|--save-authentication]
--no-prompt
Arguments
Optional arguments
[-o|--host] {host-name|IP address} * Host name or IP address of the director to which to connect. Default: localhost.
--logport port-number For use by Dell EMC personnel only. A firmware log event port. Applicable only to test versions of the firmware.
--secondary-host {host-name|IP address} Host name or IP address of the redundant interface on the director to which to connect.
--secondary-logport secondary-port-number For use by Dell EMC personnel only. A firmware log event port. Applicable only to test versions of the firmware.
[-t|--type] type For use by Dell EMC personnel only. VPLEX can communicate with its firmware through two styles of interfaces. Tools such as the VPLEX simulator use the 'legacy' type interface.
[-p|--password] password Set the password for the connection.
[-c|--connection-file] filename Load a list of connections from the file named connections at service@ManagementServer:/var/log/VPlex/cli on the VPLEX management server.
* - argument is positional.
Description
Use the connect command to:
l Re-establish connectivity if connectivity is lost to one or more directors.
l Manually re-connect after a power outage if the management server is not able to connect to
the directors.
During normal system setup, connections to directors are established and stored in a
file: /var/log/VPlex/cli/connections.
Use the connect -c command if the entry for the director exists in the connections file.
Note: If the disconnect command is issued for a director, the entry in the connections file for
that director is removed from the connections file.
When a director is connected, the context tree expands with new contexts representing the
director, including:
l A new director context below /engines/engine/directors representing storage and
containing the director’s properties.
l If this is the first connection to a director at that cluster, a new cluster context below /
clusters.
l If this is the first connection to a director belonging to that engine, a new engine context
below /engines.
Use the connect --name name command to name the new context below /engines/engine/
directors. If name is omitted, the host name or IP address is used.
Note: name is required if two host addresses are specified.
If the --connection-file argument is used, the specified file must list at least one host address to
connect to on each line with the format:
host|[secondary host][,name]
Sample connections file:
128.221.252.68:5988|128.221.253.68:5988,Cluster_2_Dir_1B
128.221.252.69:5988|128.221.253.69:5988,Cluster_2_Dir_2A
128.221.252.70:5988|128.221.253.70:5988,Cluster_2_Dir_2B
128.221.252.35:5988|128.221.253.35:5988,Cluster_1_Dir1A
128.221.252.36:5988|128.221.253.36:5988,Cluster_1_Dir1B
128.221.252.36:5988,128.221.252.36
128.221.252.35:5988,128.221.252.35
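A minimal parser for the connections-file format shown above (host|[secondary host][,name]) might look like the following. This Python sketch is illustrative only and not part of the VPLEX CLI; it applies the documented default of using the host address as the name when no name is given:

```python
def parse_connection_line(line: str):
    """Parse one line of a connections file in the
    host|[secondary host][,name] format.

    Returns (host, secondary_host_or_None, name). When no name is
    present, the host address is used as the name, matching the
    behavior described for the connect command."""
    # The optional name follows the first comma.
    rest, _, name = line.strip().partition(",")
    # The optional secondary host follows a '|' separator.
    host, _, secondary = rest.partition("|")
    return host, secondary or None, name or host

host, secondary, name = parse_connection_line(
    "128.221.252.68:5988|128.221.253.68:5988,Cluster_2_Dir_1B")
```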
Examples
See also
l disconnect
connectivity director
Displays connections from the specified director through data (non-management) ports.
Contexts
All contexts.
Syntax
connectivity director director
[-d|--storage-volumes]
[-i|--initiators]
[-n|--directors]
[-f|--file] filename
[-s|--sort-by] [name|wwn|port]
Arguments
Required arguments
director Director to discover.
Optional arguments
[-d|--storage-volumes] Display connectivity from the specified director to storage
volumes.
[-i|--initiators] Display connectivity from the specified director to initiators.
[-n|--directors] Display connectivity from the specified director to other
directors.
[-f|--file] filename Save the output in the specified file. Default: /var/log/
VPlex/cli
[-s|--sort-by] {name| Sort output by one of the following:
wwn|port}
l name - Sort output by storage volume name.
l wwn - Sort output by WorldWide name.
Description
Prints a table of discovered storage volumes, initiators and directors. Lists the ports on which it
discovered each storage volume, initiator and director.
See also
l connectivity show
l connectivity validate-be
Optional arguments
[-h | --help] Displays the usage for this command
[--verbose] Provides additional output during command
execution. This may not have any effect for some
commands.
[-n | --directors] context path , Source director(s) for which connectivity should
context path ... be reported.
[-f | --file] filename Writes the connectivity report to the named file
instead of echoing it. If the file exists, any previous
contents will be lost.
[-d | --show-directors] Shows inter-director connectivity.
[-i | --show-initiators] Shows connected initiators.
Description
Use this command to list the directors, initiators, storage volumes, and the targets that are
connected to a director. Reports the results of the connectivity list storage-volumes,
connectivity list initiators, and the connectivity list directors commands
for each specified director. Unless you specify -d, -i, or -v, all three categories are reported. The
reports are ordered by director, not by report category.
See also
l connectivity list directors
l connectivity list initiators
l connectivity list storage-volumes
Optional arguments
[-h | --help] Displays the usage for this command
[--verbose] Provides additional output during command execution. This
may not have any effect for some commands.
[-n | --directors] context Source director(s) for which connectivity should be reported.
path , context path ...
[-f | --file] filename Writes the connectivity report to the named file instead of
echoing it. If the file exists, any previous contents will be lost.
You can write the output to a file using an absolute path, or
using a path relative to the CLI directory.
[-d | --uuid] Lists the connected directors by UUID instead of by name.
Description
Lists the other directors that are connected to the specified directors. The list includes the
address, protocol, and local port name by which each remote director is connected to the specified
directors.
See also
l connectivity list all
l connectivity list initiators
l connectivity list storage-volumes
Optional arguments
[-h | --help] Displays the usage for this command
[--verbose] Provides additional output during command execution. This
may not have any effect for some commands.
[-n | --directors] context Source director(s) for which connectivity should be reported.
path , context path ...
[-f | --file] filename Writes the connectivity report to the named file instead of
echoing it. If the file exists, any previous contents will be lost.
You can write the output to a file using an absolute path, or
using a path relative to the CLI directory.
[-d | --uuid] Lists the connected directors by UUID instead of by name.
Description
Lists the initiators that are connected to a director. For each director specified, the list includes a
table that reports each initiator's port WWN (FC initiators only) and node WWN (FC) or IQN
(iSCSI), and to which port on the director they are connected.
See also
l connectivity list all
l connectivity list directors
l connectivity list storage-volumes
Optional arguments
[-h | --help] Displays the usage for this command
[--verbose] Provides additional output during command execution. This
may not have any effect for some commands.
[-n | --directors] context Source director(s) for which connectivity should be
path , context path ... reported.
[-f | --file] filename Writes the connectivity report to the named file instead of
echoing it. If the file exists, any previous contents will be
lost. You can write the output to a file by using an absolute
path, or by using a path relative to the CLI directory.
[-s | --sort-by] key The field by which to sort the storage volume information
(name, wwn or port).
[-l | --long-luns] Display LUNs as 16-digit hex-strings instead of as integers.
Description
Lists the storage volumes connected to the specified directors. For each director, the list
reports each storage volume, including its name, WWN, LUN, and the director port through
which it is reached.
See also
l connectivity list all
l connectivity list directors
l connectivity list initiators
connectivity show
Displays the communication endpoints that can see each other.
Contexts
All contexts.
Syntax
connectivity show
[-p|--protocol] [ib|tcp|udp]
[-e|--endpoints] port, port,...
Arguments
Optional arguments
[-p|--protocol] Display endpoints with only the specified protocol. Arguments are
{ib|tcp|udp} case-sensitive, and include:
l ib - InfiniBand. Not supported in the current release. Use the
connectivity director command to display IB protocol connectivity.
l tcp - Transmission Control Protocol.
l udp - UDP-based Data Transfer Protocol.
[-e|--endpoints] List of one or more ports for which to display endpoints. Entries must
port,port... be separated by commas. Default: Display endpoints for all ports.
Description
Displays connectivity, but does not perform connectivity checks. Displays which ports can talk to
each other.
See also
l connectivity director
connectivity validate-be
Checks that the back-end connectivity is correctly configured.
Contexts
All contexts.
Syntax
connectivity validate-be
[-d | --detailed]
[-h | --help]
--verbose
Arguments
Optional arguments
[-h | --help] Displays the usage for this command.
[-d| --detailed] Details are displayed first, followed by the summary.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
Description
This command provides a summary analysis of the back-end connectivity information displayed by
connectivity director, provided connectivity director was executed for every director in the
system. It checks the following:
l All directors see the same set of storage volumes.
l All directors have at least two paths to each storage-volume.
l The number of active paths from each director to a storage volume does not exceed 4.
Note: If the number of paths per storage volume per director exceeds 8 a warning event,
but not a call home is generated. If the number of paths exceeds 16, an error event and a
call-home notification are generated.
On VPLEX Metro systems where RecoverPoint is deployed, run this command on both clusters.
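The path-count thresholds described above can be summarized as a simple classifier. This Python sketch is illustrative only and not part of the VPLEX CLI; the thresholds come directly from the note above (more than 8 paths per storage volume per director raises a warning event without a call-home; more than 16 raises an error event with a call-home):

```python
def classify_path_count(paths: int) -> str:
    """Classify the number of paths from one director to one storage
    volume, per the documented thresholds: >16 paths -> error event
    plus call-home; >8 -> warning event, no call-home; otherwise no
    event is generated."""
    if paths > 16:
        return "error, call-home"
    if paths > 8:
        return "warning, no call-home"
    return "ok"
```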
Examples
Entering the connectivity validate-be command without any arguments provides a
summary output as shown.
Cluster cluster-2
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for
storage volume paths*.
5019 storage-volumes which are not visible from all directors.
0 storage-volumes which have more than supported (4) active paths from same
director.
*To meet the high availability requirement for storage volume paths each
storage volume must be accessible from each of the directors through 2 or
more VPlex backend ports, and 2 or more Array target ports, and there should
be 2 or more ITLs.
See also
l connectivity director
l connectivity show
l connectivity validate-local-com
l connectivity validate-wan-com
l health-check
l validate-system-configuration
connectivity validate-local-com
Validates that the actual connectivity over local-com matches the expected connectivity.
Contexts
All contexts.
Syntax
connectivity validate-local-com
[-c|--cluster] context path
[-e|--show-expected]
[-p|--protocol] communication protocol
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] context-path   Path of the cluster where local-com should be validated.
Description
Verifies the expected local-com connectivity. This command assembles a list of expected local-
com connectivity, compares it to the actual local-com connectivity, and reports any missing or
extra connections. This command verifies only IP- or Fibre Channel-based local-com connectivity.
Expected connectivity is determined by collecting all ports whose role is local-com and verifying
that each port in a port-group has connectivity to every other port in the same port-group.
When both Fibre Channel and IP ports with role local-com are present, the smaller subset is
discarded and the protocol of the remaining ports is assumed to be the correct protocol.
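Examples
A representative invocation (the cluster name is hypothetical) validates local-com at a named cluster:

```
VPlexcli:/> connectivity validate-local-com --cluster cluster-1
```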
See also
l connectivity director
l connectivity show
l connectivity validate-be
l connectivity validate-wan-com
l health-check
l validate-system-configuration
connectivity validate-wan-com
Verifies the expected IP and FC WAN COM connectivity.
Contexts
All contexts.
Syntax
connectivity validate-wan-com
[-e|--show-expected]
[-p|--protocol] communication-protocol
Arguments
Optional arguments
[-e|--show-expected]   Displays the expected connectivity map instead of comparing it to the actual connectivity. The map is a list of every port involved in the WAN COM network and the ports to which it is expected to have connectivity.
[-p|--protocol] communication-protocol   Specifies the protocol used for WAN COM (FC or UDP). If not specified, the command automatically selects either Fibre Channel or UDP based on the system’s Fibre Channel or Ethernet WAN COM ports.
Description
This command assembles a list of expected WAN COM connectivity, compares it to the actual
WAN COM connectivity and reports any discrepancies (i.e. missing or extra connections).
This command verifies IP or Fibre Channel based WAN COM connectivity.
If no option is specified, displays a list of ports that are in error: either missing expected
connectivity or have additional unexpected connectivity to other ports.
The expected connectivity is determined by collecting all ports with role wan-com and requiring
that each port in a port group at a cluster have connectivity to every other port in the same port
group at all other clusters.
When both Fibre Channel and IP ports with role wan-com are present, the smaller subset is
discarded and the protocol of the remaining ports is assumed as the correct protocol.
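Examples
As a sketch, the expected-connectivity map can be displayed rather than validated (output omitted here, as it varies with the configuration):

```
VPlexcli:/> connectivity validate-wan-com --show-expected
```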
See also
l connectivity director
l connectivity show
l connectivity validate-be
consistency-group add-virtual-volumes
Adds one or more virtual volumes to a consistency group.
Contexts
All contexts.
In /clusters/cluster-n/consistency-groups/group-name context, command is add-virtual-volumes.
Syntax
consistency-group add-virtual-volumes
[-v|--virtual-volumes] virtual-volume,virtual-volume,...
[-g|--consistency-group] consistency-group
Arguments
Required arguments
[-v|--virtual-volumes] virtual-volume,virtual-volume,...   * List of one or more comma-separated glob patterns or context paths of the virtual volumes to add.
[-g|--consistency-group] consistency-group   * Context path of the consistency group to which to add the specified virtual volumes. If the current context is a consistency-group or below, then that consistency group is the default. Otherwise, this argument is required.
* - argument is positional.
Description
Adds the specified virtual volumes to a consistency group. The properties of the consistency group
immediately apply to the added volumes.
Note: Only volumes with visibility and storage-at-cluster properties which match
those of the consistency group can be added to the consistency group.
Additionally, you cannot add a virtual volume to a consistency group if the initialization status
of the virtual volume is failed or in-progress.
The maximum number of volumes in a consistency group is 1000.
All volumes used by the same application and/or same host should be grouped together in a
consistency group.
If any of the specified volumes are already in the consistency group, the command skips those
volumes, but prints a warning message for each one.
Note: When adding virtual volumes to a RecoverPoint-enabled consistency group, the RecoverPoint cluster may not note the change for up to 2 minutes. Wait for 2 minutes after adding virtual volumes before creating or changing a RecoverPoint consistency group.
Examples
VPlexcli:/> cd /clusters/cluster-1/consistency-groups/TestCG
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> consistency-group
list-eligible-virtual-volumes
[TestDDevice-1_vol, TestDDevice-2_vol, TestDDevice-3_vol, TestDDevice-4_vol,
TestDDevice-5_vol]
VPlexcli:/clusters/cluster-2/consistency-groups/TestCG> add-virtual-volumes --
virtual-volumes TestDDevice-2_vol
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> add-virtual-volumes
TestDDevice-1_vol,TestDDevice-2_vol
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> ll
Attributes:
Name Value
-------------------
----------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule active-cluster-wins
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters [cluster-1, cluster-2]
read-only false
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [TestDDevice-1_vol, TestDDevice-2_vol]
visibility [cluster-1, cluster-2]
Contexts:
Name Description
------------ -----------
advanced -
recoverpoint -
See also
l consistency-group create
l consistency-group list-eligible-virtual-volumes
l consistency-group remove-virtual-volumes
l Dell EMC VPLEX Administration Guide
consistency-group choose-winner
Selects a winning cluster during an inter-cluster link failure.
Contexts
All contexts.
Syntax
consistency-group choose-winner
[-c|--cluster] cluster
[-g|--consistency-group] consistency-group
[-f|--force]
Arguments
Required arguments
[-c|--cluster] cluster   * The cluster on which to roll back and resume I/O.
[-g|--consistency-group] consistency-group   * Context path of the consistency group on which to roll back and resume I/O.
Optional arguments
[-f|--force] Do not prompt for confirmation. Allows this command to
be run using a non-interactive script.
* - argument is positional.
Description
Use the choose-winner command when:
l I/O must be resumed on a cluster during a link outage
l The selected cluster has not yet detached its peer
l The detach-rules require manual intervention
The selected cluster will detach its peer cluster and resume I/O.
CAUTION When the clusters cannot communicate, it is possible to use this command to select
both clusters as the winning cluster (conflicting detach). In a conflicting detach, both clusters
resume I/O independently.
When the inter-cluster link heals in such a situation, manual intervention is required to pick a
winning cluster. The data image of the winning cluster will be used to make the clusters
consistent again. Any changes at the losing cluster during the link outage are discarded.
Do not use this command to specify more than one cluster as the winner.
Examples
Select cluster-2 as the winner for consistency group TestCG:
VPlexcli:/clusters/cluster-2/consistency-groups/TestCG> choose-winner --
cluster cluster-2
WARNING: This can cause data divergence and lead to data loss. Ensure the
other cluster is not serving I/O for this consistency group before
continuing. Continue? (Yes/No) Yes
In the following example, the operational-status summary is suspended (showing that I/O has stopped), and the status details contain cluster-departure, indicating that I/O has stopped because the clusters can no longer communicate with one another.
l The choose-winner command forces cluster-1 to detach cluster-2.
l The ls command displays the change at cluster-1.
n Cluster-1 status is suspended.
n Cluster-2 is still suspended with cluster-departure.
n Cluster-1 is the winner, so it detached cluster-2.
l I/O at cluster-1 remains suspended, waiting for the administrator.
VPlexcli:/> ll /clusters/cluster-2/consistency-groups/
my_cg1/
/clusters/cluster-2/consistency-groups/my_cg1:
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details::
[cluster-departure, rebuilding-across-clusters,
restore-link-or-choose-winner] }), (cluster-2,{ summary::
suspended, details:: [cluster-departure,
restore-link-or-choose-winner] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters []
virtual-volumes [dr1_read_write_latency_0000_12_vol]
visibility [cluster-1, cluster-2]
Contexts:
Name Description
------------ -----------
advanced -
recoverpoint -
VPlexcli:/clusters/cluster-2/consistency-groups/my_cg1> choose-winner -c
cluster-2
WARNING: This can cause data divergence and lead to data loss. Ensure the other cluster is
not serving I/O for this consistency group before continuing. Continue? (Yes/No) yes
VPlexcli:/clusters/cluster-2/consistency-groups/my_cg1>
ls
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details::
[cluster-departure, rebuilding-across-clusters] }),
(cluster-2,{ summary:: ok, details:: [] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters []
virtual-volumes [dr1_read_write_latency_0000_12_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
See also
l consistency-group resume-at-loser
l consistency-group summary
l Dell EMC VPLEX Administration Guide
consistency-group convert-to-local
Converts a distributed consistency group to a local consistency group.
Contexts
All contexts.
Syntax
convert-to-local
[-h | --help]
[--verbose]
[[-c | --cluster=]cluster-context]
[-f | --force]
[[-g | --consistency-group=]consistency-group]
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution.
This may not have any effect for some
commands.
-c | --cluster= cluster context Specifies the cluster where all devices in the
consistency group will be local.
-f | --force   Forces the command to proceed, bypassing all user warnings.
-g | --consistency-group= consistency-group   Specifies the consistency group to make local.
Description
This command converts a distributed consistency group to a local consistency group by converting all distributed devices under each virtual volume to local devices. The legs on the specified cluster become the supporting devices of the virtual volumes. The target devices must not be temporary migration devices and must not be exported to any other cluster.
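Examples
A representative invocation (the cluster and group names are hypothetical) converts a group so that all supporting devices are local to cluster-1:

```
VPlexcli:/> consistency-group convert-to-local --cluster cluster-1 --consistency-group /clusters/cluster-1/consistency-groups/TestCG
```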
consistency-group create
Creates and names an empty consistency group.
Contexts
All contexts.
In /clusters/cluster-n/consistency-groups/group-name context, command is
create.
Syntax
consistency-group create
[-n|--name] consistency-group name
[-c|--cluster] cluster
Arguments
Required arguments
[-n|--name] consistency-group name   * Name of the new consistency group.
[-c|--cluster] cluster   Context path of the cluster at which to create the consistency group. If the current context is a cluster or below, that cluster is the default. Otherwise, this argument is required.
* - argument is positional.
Description
Creates and names an empty consistency group.
A maximum of 1024 consistency groups can be configured.
Each consistency group can contain up to 1000 virtual volumes.
All consistency groups have configurable properties that determine I/O behavior, including:
l cache mode - synchronous
l visibility - determines which clusters know about a consistency group. Default is only the cluster where the consistency group was created. Modified using the set command.
l storage-at-clusters - tells VPLEX at which clusters the physical storage associated with a consistency group is located. Modified using the set command.
l local-read-override - whether the volumes in this consistency group use the local read override optimization. Default is true. Modified using the set command.
l detach-rule - determines the winning cluster when there is an inter-cluster link outage. Modified using the consistency-group set-detach-rule active-cluster-wins, consistency-group set-detach-rule no-automatic-winner, and consistency-group set-detach-rule winner commands.
l auto-resume-at-loser - whether the losing cluster automatically resumes I/O when the inter-cluster link is repaired after a failure. Default is true. Modified using the set command in the /clusters/cluster-n/consistency-groups/consistency-group-name/advanced context.
Examples
VPlexcli:/> ls /clusters/*/consistency-groups/
/clusters/cluster-1/consistency-groups:
test10 test11 test12 test13 test14
test15 test16 test5 test6 test7 test8
test9 vs_RAM_c1wins vs_RAM_c2wins vs_oban005 vs_sun190
/clusters/cluster-2/consistency-groups:
.
.
.
VPlexcli:/> cd /clusters/cluster-1/consistency-groups/
VPlexcli:/clusters/cluster-1/consistency-groups> consistency-group create --
name TestCG --cluster cluster-1
VPlexcli:/clusters/cluster-1/consistency-groups> ls
TestCG test10 test11 test12 test13
test14 test15 test16 test5 test6
test7 test8 test9 vs_RAM_c1wins vs_RAM_c2wins
vs_oban005 vs_sun190
VPlexcli:/clusters/cluster-1/consistency-groups> ls TestCG
/clusters/cluster-1/consistency-groups/TestCG:
Attributes:
Name Value
------------------- --------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule -
operational-status [(cluster-1,{ summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters []
virtual-volumes []
visibility [cluster-1]
Contexts:
advanced recoverpoint
See also
l consistency-group add-virtual-volumes
l consistency-group destroy
l consistency-group remove-virtual-volumes
l Dell EMC VPLEX Administration Guide
consistency-group destroy
Destroys the specified empty consistency groups.
Context
All contexts.
In /clusters/cluster-n/consistency-groups/group-name context, command is
destroy.
Syntax
consistency-group destroy
[-g|--consistency-group] consistency-group, consistency-group, ...
--force
Arguments
Required arguments
[-g|--consistency-group] * List of one or more comma-separated context paths of
consistency-group, consistency- the consistency groups to destroy.
group, ...
Optional arguments
[-f|--force] Force the operation to continue without confirmation.
Allows this command to be run using a non-interactive
script.
* - argument is positional.
Description
Destroys the specified consistency groups.
All clusters where the consistency group is visible must be operational in order for the consistency
group to be destroyed.
All clusters where the consistency group has storage-at-clusters must be operational in order for
the consistency group to be destroyed.
Examples
Destroy the specified consistency group:
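A session of the following form (the group path is hypothetical) destroys an empty consistency group; without --force the command prompts for confirmation:

```
VPlexcli:/> consistency-group destroy /clusters/cluster-1/consistency-groups/TestCG
```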
See also
l consistency-group create
l consistency-group remove-virtual-volumes
l Dell EMC VPLEX Administration Guide
consistency-group list-eligible-virtual-volumes
Displays the virtual volumes that are eligible to be added to a specified consistency group.
Contexts
All contexts.
Syntax
consistency-group list-eligible-virtual-volumes
[-g|--consistency-group] consistency-group
Arguments
Required arguments
[-g|--consistency-group] consistency-group   The consistency group for which the eligible virtual volumes are listed. If the current context is a consistency group or is below a consistency group, that consistency group is the default. Otherwise, this argument is required.
Description
Displays eligible virtual volumes that can be added to a consistency group. Eligible virtual volumes:
l Must not be a logging volume
l Have storage at every cluster in the storage-at-clusters property of the target
consistency group
l Are not members of any other consistency group
l Have no properties (detach rules, auto-resume) that conflict with those of the consistency group. That is, the detach and resume properties of either the virtual volume or the consistency group must not be set.
l Have an initialization status of success.
Examples
List eligible virtual volumes from the target consistency group context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG2> list-eligible-
virtual-volumes
[dr1_C12_0000_vol, dr1_C12_0001_vol, dr1_C12_0002_vol, dr1_C12_0003_vol,
dr1_C12_0004_vol, dr1_C12_0005_vol, dr1_C12_0006_vol, dr1_C12_0007_vol,
dr1_C12_0008_vol, dr1_C12_0009_vol, dr1_C12_0010_vol, dr1_C12_0011_vol,
dr1_C12_0012_vol, dr1_C12_0013_vol, dr1_C12_0014_vol, dr1_C12_0015_vol,
dgc_p2z_test_vol, vmax_DR1_C1_r1_0000_12_vol, vmax_DR1_C1_r0_0000_12_vol,
.
.
.
See also
l consistency-group add-virtual-volumes
l consistency-group remove-virtual-volumes
l consistency-group summary
l Dell EMC VPLEX Administration Guide
consistency-group remove-virtual-volumes
Removes one or more virtual volumes from the consistency group.
Contexts
All contexts.
In /clusters/cluster-n/consistency-groups/group-name context, command is
remove-virtual-volumes.
Syntax
consistency-group remove-virtual-volumes
[-v|--virtual-volumes] virtual-volume, virtual-volume, ...
[-g|--consistency-group] context path
--force
Arguments
Required arguments
[-v|--virtual-volumes] virtual-volume,virtual-volume,...   * Glob pattern or a list of one or more comma-separated context paths of the virtual volumes to remove from the consistency group.
[-g|--consistency-group] context-path   * Context path of the consistency group from which to remove the specified virtual volumes. If the current context is a consistency-group or is below one, then that consistency group is the default. Otherwise, this argument is required.
--force   Do not ask for confirmation. Allows this command to be run using a non-interactive script.
* - argument is positional.
Description
Removes one or more virtual volumes from the consistency group.
If the pattern given to the --virtual-volumes argument matches volumes that are not in the consistency group, the command skips those volumes and prints a warning message for each one.
Examples
VPlexcli:/> ls /clusters/cluster-1/consistency-groups/TestCG
/clusters/cluster-1/consistency-groups/TestCG:
------------------------------- ----------------------------------------------
.
.
.
virtual-volumes [dr1_C12_0919_vol, dr1_C12_0920_vol,
dr1_C12_0921_vol, dr1_C12_0922_vol]
visibility [cluster-1, cluster-2]
.
.
.
VPlexcli:/> consistency-group remove-virtual-volumes /clusters/cluster-1/virtual-volumes/
dr1_C12_0920_vol --consistency-group /clusters/cluster-1/consistency-groups/TestCG
VPlexcli:/> ls /clusters/cluster-1/consistency-groups/TestCG
/clusters/cluster-1/consistency-groups/TestCG:
Name Value
------------------------------- ----------------------------------------------
.
.
.
storage-at-clusters [cluster-1, cluster-2]
synchronous-on-director-failure -
virtual-volumes [dr1_C12_0919_vol, dr1_C12_0921_vol,
dr1_C12_0922_vol]
.
.
.
See also
l consistency-group create
l consistency-group destroy
l Dell EMC VPLEX Administration Guide
consistency-group resolve-conflicting-detach
Selects a winning cluster for a consistency group on which there has been a conflicting detach.
Contexts
All contexts.
In /clusters/cluster-n/consistency-groups/group-name context, command is
resolve-conflicting-detach.
Syntax
consistency-group resolve-conflicting-detach
[-c|--cluster] cluster
[-g|--consistency-group] consistency-group
[-f|--force]
Arguments
Required arguments
[-c|--cluster] cluster   * The cluster whose data image will be used as the source for resynchronizing the data images on both clusters.
[-g|--consistency-group] consistency-group   * The consistency group on which to resolve the conflicting detach.
Optional arguments
[-f|--force]   Do not prompt for confirmation. Allows this command to be run using a non-interactive script.
* - argument is positional.
Description
CAUTION This command results in data loss at the losing cluster.
During an inter-cluster link failure, an administrator may permit I/O to continue at both clusters.
When I/O continues at both clusters:
l The data images at the clusters diverge.
l Legs of distributed volumes are logically separate.
When the inter-cluster link is restored, the clusters learn that I/O has proceeded independently.
I/O continues at both clusters until the administrator picks a winning cluster whose data image will
be used as the source to synchronize the data images.
Use this command to pick the winning cluster. For the distributed volumes in the consistency
group:
l I/O at the losing cluster is suspended (there is an impending data change).
l The administrator stops applications running at the losing cluster.
l Any dirty cache data at the losing cluster is discarded.
l The legs of distributed volumes rebuild, using the legs at the winning cluster as the rebuild source.
When the applications at the losing cluster are shut down, use the consistency-group resume-at-loser command to allow the system to service I/O at that cluster again.
Example
Select cluster-1 as the winning cluster for consistency group “TestCG” from the TestCG context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> resolve-conflicting-
detach
This will cause I/O to suspend at clusters in conflict with cluster
cluster-1, allowing you to stop applications at those clusters. Continue?
(Yes/No) yes
Select cluster-1 as the winning cluster for consistency group “TestCG” from the root context:
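A representative root-context invocation (the paths are hypothetical) names both the winning cluster and the consistency group explicitly:

```
VPlexcli:/> consistency-group resolve-conflicting-detach --cluster cluster-1 --consistency-group /clusters/cluster-1/consistency-groups/TestCG
```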
In the following example, I/O has resumed at both clusters during an inter-cluster link outage.
When the inter-cluster link is restored, the two clusters will come back into contact and learn that
they have each detached the other and carried on I/O.
l The ls command shows the operational-status as ok, requires-resolve-
conflicting-detach at both clusters.
l The resolve-conflicting-detach command selects cluster-1 as the winner.
Cluster-2 will have its view of the data discarded.
I/O is suspended on cluster-2.
l The ls command displays the change in operational status.
n At cluster-1, I/O continues, and the status is ok.
n At cluster-2, the view of data has changed and so I/O is suspended pending the
consistency-group resume-at-loser command.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------- -----------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [requires-resolve-
conflicting-detach] }),
(cluster-2,{ summary:: ok, details:: [requires-resolve-
conflicting-detach] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resolve-conflicting-
detach -c cluster-1
This will cause I/O to suspend at clusters in conflict with cluster
cluster-1, allowing you to stop applications at those clusters. Continue?
(Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------
----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [requires-
resume-at-loser] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
See also
l consistency-group resume-at-loser
l Dell EMC VPLEX Administration Guide
consistency-group resume-at-loser
If I/O is suspended due to a data change, resumes I/O at the specified cluster and consistency
group.
Contexts
All contexts (at the losing cluster).
In /clusters/cluster-n/consistency-groups/group-name context, command is
resume-at-loser.
Syntax
consistency-group resume-at-loser
[-c|--cluster] cluster
[-g|--consistency-group] consistency-group
[-f|--force]
Arguments
Required arguments
[-c|--cluster] cluster   * The cluster on which to roll back and resume I/O.
[-g|--consistency-group] consistency-group   * Context path of the consistency group on which to roll back and resume I/O. If the current context is a consistency-group or below, then that consistency group is the default. Otherwise, this argument is required.
Optional arguments
[-f|--force]   Do not prompt for confirmation. Allows this command to be run using a non-interactive script.
* - argument is positional.
Description
During an inter-cluster link failure, you can permit I/O to resume at one of the two clusters: the
“winning” cluster.
I/O remains suspended on the “losing” cluster.
When the inter-cluster link heals, the winning and losing clusters re-connect, and the losing cluster
discovers that the winning cluster has resumed I/O without it.
Unless explicitly configured otherwise (using the auto-resume-at-loser property), I/O
remains suspended on the losing cluster. This prevents applications at the losing cluster from
experiencing a spontaneous data change.
The delay allows the administrator to shut down applications.
After stopping the applications, you can use this command to:
l Resynchronize the data image on the losing cluster with the data image on the winning cluster,
l Resume servicing I/O operations.
You can then safely restart the applications at the losing cluster.
Without the --force option, this command asks for confirmation before proceeding, since its
accidental use while applications are still running at the losing cluster could cause applications to
misbehave.
Examples
VPlexcli:/clusters/cluster-2/consistency-groups/TestCG> resume-at-loser
This may change the view of data presented to applications at cluster
cluster-2. You should first stop applications at that cluster. Continue? (Yes/
No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------
----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [requires-
resume-at-loser] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-at-loser -c
cluster-2
See also
l consistency-group choose-winner
l consistency-group resume-after-rollback
l Dell EMC VPLEX Administration Guide
consistency-group set-detach-rule no-automatic-winner
Applies the no-automatic-winner detach rule to one or more specified consistency groups.
Contexts
All contexts.
Syntax
consistency-group set-detach-rule no-automatic-winner
[-g|--consistency-group] consistency-group, consistency-group, ...
[-f|--force]
Arguments
Required arguments
[-g|--consistency-group] consistency-group, consistency-group, ...   The consistency groups on which to apply the no-automatic-winner detach rule.
Optional arguments
[-f|--force] Force the operation to continue without confirmation.
Allows this command to be run from non-interactive
scripts.
Description
Applies the no-automatic-winner detach rule to one or more specified consistency groups.
Note: This command requires user confirmation unless you use the --force argument.
This detach rule dictates that no automatic detaches occur in the event of an inter-cluster link failure. In the event of a cluster failure or departure, this rule results in I/O being suspended at all clusters, whether or not VPLEX Witness is deployed. To resume I/O, use either the consistency-group choose-winner or consistency-group resume-after-rollback command to designate the winning cluster.
Note: When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint
cluster to take note of changes to a VPLEX consistency group. Wait for 2 minutes after
changing the detach rule for a VPLEX consistency group before creating or changing a
RecoverPoint consistency group.
Examples
Set the detach-rule for a single consistency group from the group’s context:
Set the detach-rule for two consistency groups from the root context:
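Representative sessions for both cases might look as follows (the group names are hypothetical):

```
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set-detach-rule no-automatic-winner

VPlexcli:/> consistency-group set-detach-rule no-automatic-winner -g TestCG,TestCG2
```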
See also
l consistency-group choose-winner
l consistency-group resume-after-rollback
l consistency-group set-detach-rule active-cluster-wins
l consistency-group set-detach-rule winner
l Dell EMC VPLEX Administration Guide
consistency-group set-detach-rule winner
Applies the winner detach rule to one or more specified consistency groups.
Contexts
All contexts.
Syntax
consistency-group set-detach-rule winner
[-c|--cluster] cluster-id
[-d|--delay] seconds
[-g|--consistency-group] consistency-group, consistency-group, ...
[-f|--force]
Arguments
Required arguments
[-c|--cluster] cluster-id   The cluster that will be the winner in the event of an inter-cluster link failure.
[-d|--delay] seconds   The number of seconds after an inter-cluster link fails before the winning cluster detaches. Valid values for the delay timer are:
l 0 - Detach occurs immediately after the failure is detected.
l number - Detach occurs after the specified number of seconds have elapsed. There is no practical limit to the number of seconds, but delays longer than 30 seconds will not allow I/O to resume quickly enough to avoid problems with most host applications.
Optional arguments
[-g|--consistency-group] consistency-group, consistency-group, ...   The consistency groups on which to apply the winner detach rule.
[-f|--force] Force the operation to continue without confirmation. Allows this
command to be run from non-interactive scripts.
Description
Applies the winner detach rule to one or more specified synchronous consistency groups.
Note: This command requires user confirmation unless the --force argument is used.
In the event of a cluster failure or departure, this rule results in I/O continuing on the selected cluster only. I/O will be suspended at all other clusters. If VPLEX Witness is deployed, it overrides this selection if the selected cluster has failed.
Note: When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint
cluster to take note of changes to a VPLEX consistency group. Wait for 2 minutes after
changing the detach rule for a VPLEX consistency group before creating or changing a
RecoverPoint consistency group.
Examples
Set the detach-rule for a single consistency group from the group’s context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set-detach-rule
winner --cluster cluster-1 --delay 5s
Set the detach-rule for two consistency groups from the root context:
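A root-context session of the following form could apply the rule to two groups at once (the group names are hypothetical):

```
VPlexcli:/> consistency-group set-detach-rule winner --cluster cluster-1 --delay 5s -g TestCG,TestCG2
```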
See also
l consistency-group set-detach-rule active-cluster-wins
l consistency-group set-detach-rule no-automatic-winner
l Dell EMC VPLEX Administration Guide
consistency-group summary
Displays a summary of all the consistency groups with a state other than OK.
Contexts
All contexts.
Syntax
consistency-group summary
Description
Displays all the consistency groups with a state other than 'OK' and the consistency groups at risk of a rollback.
Example
Display a summary of unhealthy consistency groups:
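The command takes no arguments; its output is omitted here because it depends entirely on the state of the system:

```
VPlexcli:/> consistency-group summary
```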
See also
l consistency-group create
l consistency-group destroy
l Dell EMC VPLEX Administration Guide
date
Displays the current date and time in Coordinated Universal Time (UTC).
Contexts
All contexts.
Syntax
date
Examples
VPlexcli:/> date
Tue Jul 20 15:57:55 UTC 2010
describe
Describes the attributes of the given context.
Contexts
All contexts with attributes.
Syntax
describe
[-c|--context] context-path
Arguments
Optional arguments
[-c|--context] context-path   Context whose attributes are to be described. If not specified, the current context is described.
Examples
In the following example, the ll command displays information about a port, and the describe
command with no arguments displays additional information.
VPlexcli:/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01> ll
Name Value
------------------------ ------------------
director-id 0x000000003cb001cb
discovered-initiators []
.
.
.
VPlexcli:/clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01> describe
Attribute Description
------------------------ --------------------------------------------------
director-id The ID of the director where the port is exported.
discovered-initiators List of all initiator-ports visible from this port.
.
.
.
Use the describe --context command to display information about the specified context:
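For example, using the port context shown above (the port name is taken from that example), a session might look like:

```
VPlexcli:/> describe --context /clusters/cluster-2/exports/ports/P000000003CB001CB-B1-FC01
```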
device attach-mirror
Attaches a mirror as a RAID 1 child to another (parent) device, and starts a rebuild to synchronize
the mirror.
Contexts
All contexts.
Syntax
device attach-mirror
[-d|--device]
{context-path|device-name}
[-m|--mirror]{context-path|mirror-name}
[-r|--rule-set] rule-set
[-f|--force]
Arguments
Required arguments
[-d|--device] {context-path|device-name}
* Name or context path of the device to which to attach the mirror. Does not have to be a top-level device. If the device name is used, verify that the name is unique throughout the VPLEX, including local devices on other clusters.
Optional arguments
[-m|--mirror] {context-path|mirror-name}
* Name or context path of the mirror to attach. Does not need to be a top-level device. If the name of a device is used, ensure that the device name is not ambiguous; for example, ensure that the same device name is not used by local devices in different clusters.
[-r|--rule-set] rule-set
Rule-set to apply to the distributed device that is created when a mirror is added to a local device.
[-f|--force]
When --force is set, the command does not ask for confirmation when attaching a mirror, which allows it to be run from a non-interactive script. If the --force argument is not used, the command prompts for confirmation in two circumstances: when the mirror is remote, and when the parent device must be transformed into a distributed device.
* - argument is positional.
Description
If the parent device is a RAID 0 or RAID C, it is converted to a RAID 1.
If the parent device and mirror device are from different clusters, a distributed device is created.
A storage-volume extent cannot be used as a mirror if the parent device is a distributed-device, or
if the parent device is at a different cluster than the storage-volume extent.
If you do not specify the --rule-set argument, VPLEX assigns a default rule-set to the
distributed device as follows:
l If the parent device has a volume, the distributed device inherits the rule-set of the (exported)
parent.
l If the parent device does not have a volume, the cluster that is local to the management server
is assumed to be the winner.
Once determined, VPLEX displays a notice as to which rule-set the created distributed-device has
been assigned.
Attaching a remote mirror to a local device and attaching a new mirror to a distributed device
both consume slots. Both scenarios result in the same error message:
Refer to the troubleshooting section of the VPLEX procedures in the SolVe Desktop for
instructions on increasing the number of slots.
Note: If the RAID 1 device is added to a consistency group, the consistency group’s detach rule
overrides the device’s detach rule.
Use the rebuild status command to display the rebuild’s progress.
Note: The rule set that is applied to the new distributed device potentially allows conflicting
detaches.
Homogeneous array requirement for thin volumes
To preserve thinness of the new RAID-1 device where the parent device is created on a thin
volume and is thin-capable, the mirror device must be created from the same storage-array-family
as the parent device. If you try to attach a mirror leg from a dissimilar array family, the
command displays a warning that the thin capability of the RAID-1 device will be lost, which can
render the virtual volume thin-disabled. The following is an example of the warning message:
Example
Attach a mirror without specifying a rule-set (allow VPLEX to select the rule-set):
Attach a mirror:
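The example transcripts were not captured here. Sketches of the two invocations, using hypothetical device and rule-set names (base_dev, mirror_dev, cluster-1-detaches):

```
VPlexcli:/> device attach-mirror --device base_dev --mirror mirror_dev

VPlexcli:/> device attach-mirror --device base_dev --mirror mirror_dev --rule-set cluster-1-detaches
```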
See also
l consistency-group set-detach-rule winner
l device detach-mirror
l rebuild status
device collapse
Collapses a one-legged device until a device with two or more children is reached.
Contexts
All contexts.
Syntax
device collapse
[-d|--device] [context-path|device-name]
Arguments
Required arguments
* - argument is positional.
Description
If a RAID 1 device is left with only a single child (after removing other children), use the device
collapse command to collapse the remaining structure. For example:
If RAID 1 device “A” has two child RAID 1 devices “B” and “C”, and child device “C” is removed, A
is now a one-legged device, but with an extra layer of abstraction:
A
|
B
../ \..
Use device collapse to remove this extra layer, and change the structure into:
A
../ \..
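Continuing the example above, a sketch of the invocation that collapses device A (the device path is an assumption):

```
VPlexcli:/> device collapse --device /clusters/cluster-1/devices/A
```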
device detach-mirror
Removes (detaches) a mirror from a RAID-1 device.
Contexts
All contexts.
Syntax
device detach-mirror
[-d|--device] [context-path|device-name]
[-m|--mirror] [context-path|mirror-name]
[-s|--slot] slot-number
[-i|--discard]
[-f|--force]
Arguments
Required arguments
[-d|--device] {context-path|device-name}
* Name or context path of the device from which to detach the mirror. Does not have to be a top-level device. If the device name is used, verify that the name is unique throughout the system, including local devices on other clusters.
Optional arguments
[-m|--mirror] {context-path|mirror-name}
* Name or context path of the mirror to detach. Does not have to be a top-level device. If the device name is used, verify that the name is unique throughout the VPLEX, including local devices on other clusters.
[-s|--slot] slot-number
Slot number of the mirror to be discarded. Applicable only when the --discard argument is used.
[-i|--discard]
When specified, discards the detached mirror. The underlying data is not erased.
[-f|--force]
Forces the mirror to be discarded. Must be used when the --discard argument is used. When --force is set while detaching an unhealthy mirror, the mirror is discarded even if the --discard flag is not set.
* - argument is positional.
Description
Use this command to detach a mirror leg from a RAID 1 device.
Figure 1 RAID device and virtual volume: before detach mirror
If the RAID device supports a virtual volume, and you do not use the --discard argument, the
command:
l Removes the mirror (child device) from the RAID 1 parent device.
l Makes the detached child a top-level device.
l Creates a new virtual volume on top of the new device and prefixes the name of the new device
with the name of the original device.
If the RAID device supports a virtual volume, and you use the --discard argument, the
command:
l Removes the mirror (child device) from the RAID 1 parent device.
l Makes the detached child a top-level device.
l Does not create a new virtual volume.
l Detaches the mirror regardless of its current state and does not guarantee data consistency.
Figure 3 Devices and virtual volumes: after detach mirror - with discard
Examples
VPlexcli:/distributed-storage/distributed-devices> ll
Name                      Status   Operational  Health         Auto    Rule  WOF    Transfer
                                   Status       State          Resume  Set   Group  Size
                                                                       Name  Name
------------------------  -------  -----------  -------------  ------  ----  -----  --------
ESX_stretched_device running ok ok true colin - 2M
bbv_temp_device running ok ok true colin - 2M
dd_source_device running ok ok true colin - 2M
ddt running ok ok true colin - 2M
dev_test_dead_leg_2 running stressed major-failure - colin - 2M
windows_big_drive running ok ok true colin - 2M
.
.
.
VPlexcli:/distributed-storage/distributed-devices> ll /dev_test_dead_leg_2_DD/distributed-
device-components/
/distributed-storage/distributed-devices/dev_test_dead_leg_2_DD/distributed-device-components:
Name                  Cluster    Child  Fully   Operational  Health
                                 Slot   Logged  Status       State
--------------------  ---------  -----  ------  -----------  ----------------
dev_test_alive_leg_1 cluster-1 1 true ok ok
dev_test_dead_leg_2 cluster-2 0 true error critical-failure
VPlexcli:/distributed-storage/distributed-devices> device detach-mirror --slot 0 --discard
--force --device /distributed-storage/distributed-devices/dev_test_dead_leg_2
VPlexcli:/distributed-storage/distributed-devices> ll
Name Status Operational Health Auto Rule WOF Transfer
----------------------------- ------- Status State Resume Set Group Size
----------------------------- ------- ----------- ------ ------ Name Name --------
----------------------------- ------- ----------- ------ ------ ----- ----- --------
ESX_stretched_device running ok ok true colin - 2M
bbv_temp_device running ok ok true colin - 2M
dd_source_device running ok ok true colin - 2M
ddt running ok ok true colin - 2M
dev_test_dead_leg_2_DD running ok ok - colin - 2M
windows_big_drive running ok ok true colin - 2M
.
.
.
See also
l device attach-mirror
device mirror-isolation auto-unisolation disable
Disables auto-unisolation of previously isolated mirrors.
Optional arguments
[-f|--force] Forces the operation to continue without confirmation.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Auto-unisolation provides a mechanism to automatically unisolate mirrors that were previously
isolated. When the mirror isolation feature is enabled, disabling auto-unisolation prevents the
system from automatically unisolating isolated mirrors whose underlying storage-volume
performance has returned to the acceptable range.
For the option to manually unisolate the mirror, follow the troubleshooting procedure for VPLEX in
the SolVe Desktop.
Examples
Shows the result when the command is executed on all clusters while the mirror-isolation feature is disabled:
Shows the command executed with the --force option, when the mirror-isolation feature is
disabled:
will prevent the system from automatically unisolating the underlying storage-
volumes once their performance is in the acceptable range. You can manually
unisolate the mirror by following the troubleshooting procedure.
Auto-unisolation is disabled on clusters cluster-1,cluster-2.
Shows auto-unisolation was not disabled because the feature is not supported:
See also
l device mirror-isolation auto-unisolation enable
l device mirror-isolation disable
l device mirror-isolation enable
l device mirror-isolation show
l Dell EMC VPLEX Administration Guide
l Dell EMC VPLEX Procedures in SolVe Desktop
device mirror-isolation auto-unisolation enable
Enables auto-unisolation of previously isolated mirrors.
Arguments
Optional arguments
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
This command enables auto mirror unisolation.
Mirror isolation provides a mechanism to automatically unisolate mirrors that were previously
isolated. When mirror isolation is enabled, auto-unisolation allows the system to automatically
unisolate the underlying storage-volumes once their performance is in the acceptable range.
Examples
Shows auto-unisolation enabled when mirror isolation is disabled on both clusters:
Shows auto-unisolation enabled when mirror isolation is disabled on one of the clusters:
Shows auto-unisolation enable operation failed because one cluster is not available:
Shows auto-unisolation enable operation failed because the meta volume is not ready:
See also
l device mirror-isolation auto-unisolation disable
l device mirror-isolation disable
l device mirror-isolation enable
l device mirror-isolation show
l Dell EMC VPLEX Administration Guide
device mirror-isolation disable
Disables the mirror isolation feature on the specified clusters.
Optional arguments
[-c|--clusters] context-path[, context-path...]
Specifies the list of clusters on which to disable mirror isolation.
[-f|--force]
Forces the operation to continue without confirmation.
[-h|--help]
Displays command line help.
[--verbose]
Provides more output during command execution. This may not have any effect for some commands.
Description
A RAID 1 mirror leg built upon a poorly performing storage volume can bring down the performance
of the whole RAID 1 device and increase I/O latencies to the applications using this device. VPLEX
prevents I/Os to these poorly performing mirror legs to improve the RAID 1 performance. This
behavior or feature is known as mirror isolation.
When disabling the mirror isolation feature on one or more clusters, this command prints a warning
and asks for confirmation.
Note: This command disables the mirror isolation feature and prevents VPLEX from improving
the performance of a RAID 1 device containing a poorly performing mirror leg. This command
should only be used if redundancy is desired over RAID 1 performance improvement.
Examples
Disable mirror isolation on all clusters:
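The transcript was not captured here. A sketch of the invocation; that omitting --clusters applies the command to all clusters is an assumption:

```
VPlexcli:/> device mirror-isolation disable
```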
An attempt to disable mirror-isolation on both clusters succeeded on cluster-1, but failed on
cluster-2 because the feature is not supported:
See also
l device mirror-isolation auto-unisolation disable
l device mirror-isolation auto-unisolation enable
l device mirror-isolation enable
l device mirror-isolation show
l Dell EMC VPLEX Administration Guide
device mirror-isolation enable
Enables the mirror isolation feature on the specified clusters.
Optional arguments
Description
A RAID 1 mirror leg built on a poorly performing storage volume can bring down the performance of
the whole RAID 1 device and increase I/O latencies to the applications using this device. VPLEX
prevents I/Os to these poorly performing mirror legs to improve the RAID 1 performance. This
behavior or feature is known as mirror isolation.
Note: This command enables the mirror isolation feature and should only be used if RAID 1
performance improvement is desired over redundancy.
Examples
Enable mirror isolation on all clusters:
Or
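Sketches of the two invocation forms referenced above; the cluster names and the behavior of omitting --clusters are assumptions:

```
VPlexcli:/> device mirror-isolation enable

VPlexcli:/> device mirror-isolation enable --clusters cluster-1,cluster-2
```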
An attempt to enable mirror-isolation on both clusters succeeded on cluster-1, but failed on
cluster-2 because the feature is not supported:
See also
l device mirror-isolation auto-unisolation disable
l device mirror-isolation auto-unisolation enable
l device mirror-isolation disable
l device mirror-isolation show
l Dell EMC VPLEX Administration Guide
device mirror-isolation show
Displays the mirror isolation configuration parameters for the specified clusters.
Optional arguments
[-c|--clusters] context-path[, context-path...]
Specifies the list of clusters on which to show mirror isolation configuration parameters.
Description
Used to display all the configuration parameters related to mirror isolation for the specified
clusters.
The current configuration parameters supported are:
If a value for any configuration parameter cannot be retrieved for the cluster, it may be because
the feature is not supported or there was a command failure.
Examples
Shows the mirror isolation configuration parameters on all clusters:
Or
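Sketches of the two invocation forms referenced above (cluster names are assumptions; output omitted):

```
VPlexcli:/> device mirror-isolation show

VPlexcli:/> device mirror-isolation show --clusters cluster-1,cluster-2
```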
See also
l device mirror-isolation auto-unisolation disable
l device mirror-isolation auto-unisolation enable
l device mirror-isolation disable
l device mirror-isolation enable
l Dell EMC VPLEX Administration Guide
device resume-link-down
Resumes I/O for devices on the winning island during a link outage.
Contexts
All contexts.
Syntax
device resume-link-down
[-c|--cluster] context path
[-r|--devices] context path
[-a|--all-at-island]
[-f|--force]
Arguments
Optional arguments
[-r|--devices] {context-path|device-name}
Name or context path of the devices for which to resume I/O. They must be top-level devices.
[-a|--all-at-island]
Resume I/O on all devices on the chosen winning cluster and the clusters with which it is communicating.
Description
Used when the inter-cluster link fails. Allows one or more suspended mirror legs to resume I/O
immediately.
For example, used when the peer cluster is the winning cluster but is known to have failed
completely.
Resumes I/O on the specified cluster and the clusters it is in communication with during a link
outage.
Detaches distributed devices from those clusters that are not in contact with the specified cluster
or detaches local devices from those clusters that are not in contact with the local cluster.
WARNING The device resume-link-down command causes I/O to resume on the local
cluster regardless of any rule-sets applied to the device. Verify that rules and any manual
detaches do not result in conflicting detaches (cluster-1 detaching cluster-2, and cluster-2
detaching cluster-1). Conflicting detaches will result in lost data on the losing cluster, a full
rebuild, and degraded access during the time of the full rebuild.
When the inter-cluster link fails in a VPLEX Metro configuration, distributed devices are suspended
at one or more clusters. When the rule-set timer expires, the affected cluster is detached.
Alternatively, use the device resume-link-down command to detach the cluster immediately
without waiting for the rule-set timer to expire.
Only one cluster should be allowed to continue for each distributed device. Different distributed
devices can have different clusters continue.
Use the ll /distributed-storage/distributed-devices/device command to display
the rule set applied to the specified device.
Use the ll /distributed-storage/rule-sets/rule-set/rules command to display the
detach timer for the specified rule-set.
Examples
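No transcript survives here. A sketch of a typical invocation, using the syntax above (the cluster path is an assumption):

```
VPlexcli:/> device resume-link-down --cluster /clusters/cluster-1 --all-at-island --force
```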
See also
l device resume-link-up
l ds dd declare-winner
device resume-link-up
Resumes I/O on suspended top level devices, virtual volumes, or all virtual volumes in the VPLEX.
Contexts
All contexts.
Syntax
device resume-link-up
[-r|--devices] context path,context path...
[-v|--virtual-volumes] context path,context path...
[-a|--all]
[-f|--force]
Arguments
Optional arguments
[-r|--devices] context-path, context-path...
List of one or more context paths or names of the devices for which to resume I/O. They must be top-level devices. If the device name is used, verify that the name is unique throughout the VPLEX, including local devices on other clusters.
Description
Use this command after a failed link is restored, but I/O is suspended at one or more clusters.
Usually applied to the mirror leg on the losing cluster when auto-resume is set to false.
During a WAN link outage, after a cluster detach, the winning cluster detaches in order to resume
operation on the distributed device.
If the auto-resume property of a remote or distributed device is set to false and the link has
come back up, use the device resume-link-up command to manually resume the second
cluster.
Example
Resume I/O on two specified devices:
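The transcript was not captured here. A sketch, with hypothetical device names (dd_1, dd_2):

```
VPlexcli:/> device resume-link-up --devices dd_1,dd_2 --force
```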
See also
l device mirror-isolation disable
device resurrect-dead-storage-volumes
Resurrects the thin-aware storage-volumes that support the target devices and are marked dead.
Contexts
Any
Syntax
device resurrect-dead-storage-volumes
[-h|--help]
[--verbose]
[-r|--devices=]device [,device[,device]]
Arguments
Optional arguments
-h|--help
Displays the usage for this command.
--verbose
Provides more output during command execution. This may not have any effect for some commands.
Required arguments
-r|--devices=device[,device[,device]]
* Specifies the devices on which to resurrect dead supporting storage-volumes. The device name can include wildcard symbols.
* Argument is positional.
Description
This command is used for storage volumes that do not auto-resurrect after they receive an Out Of
Space error on a write command and become hardware-dead. This scenario should only happen on
an XtremIO storage volume that is used as a VPLEX mirror leg. After resolving the underlying issue
that led to the out-of-space error, use this command to resume I/O for supporting storage-
volumes that have been marked dead. The target devices may be of any geometry, local or
distributed. This command executes storage-volume resurrect for all dead storage-
volumes of a device.
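No example was captured for this command. A sketch, with a hypothetical device-name wildcard:

```
VPlexcli:/> device resurrect-dead-storage-volumes --devices dev_xio_*
```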
director appcon
Runs the application console on Linux systems.
Contexts
All
Syntax
director appcon
--xterm-opts options
[-t|--targets] target-glob,target-glob...
--timeout seconds
--show-plan-only
Arguments
Optional arguments
[-t|--targets] target-glob, target-glob...
List of one or more glob patterns. Operates on the specified targets. Globs may be a full path glob, or a name pattern. If only a name pattern is supplied, the command finds allowed targets whose names match. Entries must be separated by commas. Omit this argument if the current context is at or below the target context.
--timeout seconds
Sets the command timeout. Timeout occurs after the specified number of seconds multiplied by the number of targets found. Default: 180 seconds per target. 0: No timeout.
--show-plan-only
Shows the targets that will be affected, but the actual operation is not performed. Recommended when the --targets argument is used.
Description
Applicable only to Linux systems.
Opens the hardware application console for each director in a separate window.
Examples
Display the available targets:
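The transcript was not captured here. Per the argument descriptions above, listing the available targets without opening any console is a matter of running the command with --show-plan-only:

```
VPlexcli:/> director appcon --show-plan-only
```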
See also
l director appstatus
director appdump
Downloads an application dump from one or more boards.
Contexts
All contexts.
In /engines/engine/directors context, command is appdump.
Syntax
director appdump
[-d|--dir] directory
[-s|--no-timestamp]
[-o|--overwrite]
[-c|--include-cores]
[-z|--no-zip]
[-p|--no-progress]
[-t|--targets] target-glob,target-glob...
--timeout seconds
--show-plan-only
Arguments
Optional arguments
[-z|--no-zip]
Turns off the packaging of dump files into a compressed zip file.
[-t|--targets] target-glob, target-glob...
List of one or more glob patterns. Operates on the specified targets. Globs may be a full path glob or a name pattern. If only a name pattern is supplied, the command finds allowed targets whose names match. Entries must be separated by commas. Omit this argument if the current context is at or below the target context.
--timeout seconds
Sets the command timeout. Timeout occurs after the specified number of seconds multiplied by the number of targets found. Default: 180 seconds per target. 0: No timeout.
--show-plan-only
Shows the targets that will be affected, but the actual operation is not performed. Recommended when the --targets argument is used.
Description
Used by automated scripts and by Dell EMC Customer Support to help troubleshoot problems.
The hardware name and a timestamp are embedded in the dump filename. By default, the name of
the dump file is:
hardware name-YYYY.MM.DD-hh.mm.ss.zip.
Note: Timeout is automatically set to 0 (infinite) when dumping core.
Examples
Show the targets available for the appdump procedure:
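The transcript was not captured here. As with director appcon, the available targets can be previewed with --show-plan-only:

```
VPlexcli:/> director appdump --show-plan-only
```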
See Also
l cluster configdump
l collect-diagnostics
l getsysinfo
l smsdump
director appstatus
Displays the status of the application on one or more boards.
Contexts
All contexts.
In /engines/engine/directors context, command is appstatus.
Syntax
director appstatus
[-t|--targets] target-glob, target-glob...
--timeout seconds
--show-plan-only
Arguments
Optional arguments
[-t|--targets] target-glob, target-glob...
List of one or more glob patterns. Operates on the specified targets. Globs may be a full path glob or a name pattern. If only a name pattern is supplied, the command finds allowed targets whose names match. Entries must be separated by commas. Omit this argument if the current context is at or below the target context.
--timeout seconds
Sets the command timeout. Timeout occurs after the specified number of seconds multiplied by the number of targets found. Default: 180 seconds per target. 0: No timeout.
--show-plan-only
Shows the targets that will be affected, but the actual operation is not performed. Recommended when the --targets argument is used.
Description
Used by automated scripts and by Dell EMC Customer Support to help troubleshoot problems.
Examples
VPlexcli:/engines/engine-1-1/directors> appstatus
For /engines/engine-1-1/directors/Cluster_1_Dir1B:
Application Status Details
-------------------- ------- -------
00601610672e201522-2 running -
For /engines/engine-1-1/directors/Cluster_1_Dir1A:
Application Status Details
------------------- ------- -------
00601610428f20415-2 running -
See also
l director appcon
l director appdump
director commission
Starts the director’s participation in the cluster.
Contexts
All contexts.
In /engines/engine/directors context, command is commission.
Syntax
director commission
[-n|--director] director
[-t|--timeout] seconds
[-a|--apply-cluster-settings]
[-f|--force]
Arguments
Required arguments
Optional arguments
[-a|--apply-cluster-settings]
Add this director to a running cluster and apply any cluster-specific settings. Use this argument when adding or replacing a director in an existing VPLEX.
* - argument is positional.
Description
In order to participate in a cluster, a director must be explicitly commissioned. Uncommissioned
directors can boot but do not participate in any cluster activities.
Use the version -a command to display the firmware version for all directors in the cluster.
The director commission command fails if the director's firmware version is different than
the already commissioned directors, unless the --force argument is used.
Examples
Add a director to a running cluster using the default timeout (60 seconds):
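The transcript was not captured here. A sketch of the invocation (the director name is hypothetical):

```
VPlexcli:/> director commission --director director-1-1-A --apply-cluster-settings
```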
See also
l director decommission
l version
director decommission
Decommissions a director. The director stops participating in cluster activities.
Contexts
All contexts.
In /engines/engine/directors context, command is decommission.
Syntax
director decommission
[-n|--director] director
Arguments
Required arguments
[-n|--director] director
The director to decommission.
Description
This command removes the director from participation in the VPLEX and initializes it to a
partially operational state: the director is no longer a replication target, and its front-end ports are
disabled. The command then reboots the director.
Examples
See also
l director commission
l director forget
l director shutdown
director fc-port-stats
Displays/resets Fibre Channel port statistics for a specific director.
Contexts
All contexts.
In /engines/engine/directors context, command is fc-port-stats director.
In /engines/engine/directors/director context, command is fc-port-stats.
Syntax
director fc-port-stats
[-d|--director] director
[-o|--role] role
[-r|--reset]
Arguments
Required arguments
[-d|--director] director
Context path of the director for which to display FC statistics. Not required if the current context is /engines/engine/directors/director.
Optional arguments
[-o|--role] role
Filter the ports included in the reply by their role. If no role is specified, all ports at the director are included. This argument is ignored if --reset is specified. Roles include:
l back-end - Filter on ports used to access storage devices that the system itself does I/O to.
l front-end - Filter on ports used to make storage available to hosts.
l inter-director-communication - Filter on ports used to communicate with other directors.
l local-com - Filter on ports used to communicate with other directors at the same cluster.
l management - Filter on ports used to communicate with the management server.
l wan-com - Filter on ports used to communicate with other clusters.
[-r|--reset]
Reset the statistics counters of all ports at the specified director. If you specify this argument, the command ignores the --role argument.
Description
Displays statistics generated by the driver for FibreChannel ports at the specified director and
optionally with the specified role, or resets those statistics.
Run this command from the /engines/engine/directors/director context to display the
Fibre Channel statistics for the director in the current context.
Examples
Display a director’s Fibre Channel port statistics from the root context:
Reset the port statistics counters on a director’s Fibre Channel ports from the root context:
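The root-context transcripts were not captured here. Sketches of the two invocations referenced above (the director path is an assumption):

```
VPlexcli:/> director fc-port-stats --director /engines/engine-1-1/directors/director-1-1-A

VPlexcli:/> director fc-port-stats --director /engines/engine-1-1/directors/director-1-1-A --reset
```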
Display a director’s Fibre Channel port statistics from the director context:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> fc-port-stats
Results for director 'director-2-1-A' at Fri Feb 10 16:10:15 MST 2012:
Port: A1-FC00 A1-FC01 A1-FC02 A1-FC03 A3-FC00 A3-FC01
Frames:
- Discarded:          0       0       0       0       0       0
- Expired:            0       0       0       0       0       0
- Bad CRCs:           0       0       0       0       0       0
- Encoding Errors:    0       0       0       0       0       0
- Out Of Order:       0       0       0       0       0       0
- Lost:               0       0       0       0       0       13
Requests:
- Accepted: 0 0 0 0 7437 7437
- Rejected: 0 0 0 0 0 0
- Started: 0 0 0 0 7437 7437
- Completed: 0 0 0 0 7437 7437
- Timed-out: 0 0 0 0 0 0
Tasks:
- Received: 0 0 0 0 7437 7437
- Accepted: 0 0 0 0 7437 7437
- Rejected: 0 0 0 0 0 0
- Started: 0 0 0 0 7437 7437
- Completed: 0 0 0 0 7437 7437
- Dropped: 0 0 0 0 0 0
See also
l monitor stat-list
director firmware show-banks
Shows firmware status and version for one or more directors.
Optional arguments
Description
Show firmware status and version for one or more directors.
Field Description
Marked for next reboot
no - The software in this bank will not be used the next time the director reboots.
yes - The software in this bank will be used the next time the director reboots.
Example
Show firmware banks for two specified directors:
See also
l version
director forget
Removes a director from the VPLEX.
Contexts
All contexts.
Syntax
director forget
[-n|--director] director uuid
Arguments
Required arguments
Description
Removes the specified director from the context tree. Deletes all information associated with the
director.
Examples
In the following example:
l The ll command in engines/engine/directors context displays director IDs.
l The director forget command instructs VPLEX to delete all records pertaining to the
specified director.
VPlexcli:/engines/engine-1-1/directors> ll
Name            Director ID        Cluster  Commissioned  Operational  Communication
                                   ID                     Status       Status
--------------- ------------------ -------  ------------  -----------  -------------
Cluster_1_Dir1A 0x000000003ca00147 1        true          ok           ok
Cluster_1_Dir1B 0x000000003cb00147 1        true          ok           ok
VPlexcli:/engines/engine-1-1/directors> director forget --director
0x000000003ca00147
See also
l director commission
l director decommission
director passwd
Changes the access password for the specified director.
Contexts
All contexts.
In /engines/engine/directors/director context, command is passwd.
Syntax
director passwd
[-n|--director] director
[-c|--current-password] current-password
[-p|--new-password] new-password
Arguments
Required arguments
[-p|--new-password] new-password
The new access password to set for the specified director.
Description
Changes the password for a specified director.
director ping
Displays the round-trip latency from a given director to the target machine, excluding any VPLEX
overhead.
Contexts
All contexts.
In /engines/engine/directors context, command is ping.
Syntax
director ping
[-i|--ip-address] ip-address
[-n|--director] director
[-w|--wait] [1 - 2147483647]
Arguments
Required arguments
Optional arguments
Description
ICMP traffic must be permitted between clusters for this command to work properly.
To verify that ICMP is enabled, log in to the shell on the management server and use the ping IP-
address command where the IP address is for a director in the VPLEX.
If ICMP is enabled on the specified director, a series of lines is displayed:
See also
l director tracepath
director shutdown
Starts the orderly shutdown of a director’s firmware.
Contexts
All contexts.
In /engines/engine/directors context, command is shutdown.
Syntax
director shutdown
[-f|--force]
[-n|--director] context-path
Arguments
Required arguments
Optional arguments
* - argument is positional.
Description
Shuts down the director firmware.
Note: Does not shut down the operating system on the director.
See also
l cluster shutdown
l director commission
director tracepath
Displays the route taken by packets from a specified director to the target machine.
Contexts
All contexts.
In /engines/engine/directors context, command is tracepath.
Syntax
director tracepath
[-i|--ip-address] ip-address
[-n|--director] director
Arguments
Required arguments
[-i|--ip-address] ip-address
The target's IP address. This address is one of the Ethernet WAN ports on another director. Use the ll port-group command to display the Ethernet WAN ports on all directors.
Optional arguments
[-n|--director] director
The name of the director from which to perform the operation. Can be either the director's name (for example, director-1-1-A) or an IP address.
Description
Displays the hops, latency, and MTU along the route from the specified director to the target at
the specified IP address.
The number of hops does not always correlate to the number of switches along the route. For
example, a switch with a firewall on each side is counted as two hops.
The reported latency at each hop is the round-trip latency from the source hop.
The MTU reported at each hop is limited by the MTU of previous hops and therefore not
necessarily the configured MTU at that hop.
CAUTION If the target machine does not respond properly, the traceroute might stall. Run
this command multiple times.
See also
l director ping
director uptime
Prints the uptime information for all connected directors.
Contexts
All contexts.
In engines/engine/directors context, command is uptime.
Syntax
director uptime
Description
Uptime measures how long a director has been running without a restart.
Examples
Shows director uptime:
See also
l cluster shutdown
l director firmware show-banks
dirs
Displays the current context stack.
Contexts
All contexts.
Syntax
dirs
Description
The stack is displayed from top to bottom, left to right.
Examples
VPlexcli:/> dirs
[/]
VPlexcli:/> cd /engines/engine-1-1/
VPlexcli:/engines/engine-1-1> dirs
[/engines/engine-1-1]
VPlexcli:/engines/engine-1-1> cd /directors/
VPlexcli:/engines/engine-1-1/directors> dirs
[/engines/engine-1-1/directors]
See also
l tree
disconnect
Disconnects one or more connected directors.
Contexts
All contexts.
Syntax
disconnect
[-n|--directors] context-path, context-path...
Arguments
Required arguments
Description
Stops communication from the client to the remote directors and frees up all resources that are
associated with the connections.
CAUTION Removes the entry in the connections file for the specified directors.
This command is used in various procedures in the EMC VPLEX Troubleshooting Guide.
Examples
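For illustration (the director path is hypothetical), disconnect a single director:
VPlexcli:/> disconnect --directors /engines/engine-1-1/directors/director-1-1-A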
dm migration cancel
Cancels an existing data migration.
Contexts
All contexts.
In all data-migration (device or extent) contexts, command is migration cancel.
In data-migrations/extent-migrations context, command is cancel.
Syntax
dm migration cancel
[-m|--migrations] context-path,context-path...
[-f|--force]
Arguments
Required arguments
[-m|--migrations] * List of one or more migrations to cancel. Entries must be separated by
commas.
Optional arguments
[-f|--force] Forces the cancellation of the specified migrations.
* - argument is positional.
Description
Use the dm migration cancel --force --migrations context-path command to cancel
a migration.
Specify the migration by name if that name is unique in the global namespace. Otherwise, specify a
full context path.
Migrations can be canceled in the following circumstances:
l The migration is in progress or paused. The command stops the migration, and frees any
resources it was using.
l The migration has not been committed. The command returns source and target devices or
extents to their pre-migration state.
A migration cannot be canceled if it has been committed.
To remove the migration record from the context tree, see the dm migration move command.
Example
Cancel a migration from device-migration context:
See also
l dm migration commit
l dm migration pause
l dm migration remove
l dm migration resume
l dm migration start
dm migration clean
Cleans a committed data migration.
Contexts
All contexts.
In /data-migrations context, command is migration clean.
In /data-migrations/device-migrations context, command is clean.
In /data-migrations/extent-migrations context, command is clean.
Syntax
dm migration clean
[-m|--migrations] context-path,context-path...
[-f|--force]
[-e|--rename-target]
Arguments
Required arguments
Optional arguments
[-e|--rename-target] For device migrations only, renames the target device after the
source device. If the target device is renamed, the virtual
volume on top of it is also renamed if the virtual volume has a
system-assigned default name.
* - argument is positional.
Description
For device migrations, cleaning dismantles the source device down to its storage volumes. Storage
volumes that are no longer in use are unclaimed.
For device migrations only, use the --rename-target argument to rename the target device
after the source device. If the target device is renamed, the virtual volume on top of it is also
renamed if the virtual volume has a system-assigned default name.
Without renaming, the target devices retain their target names, which can make the relationship
between volume and device less evident.
For extent migrations, cleaning destroys the source extent and unclaims the underlying storage
volume if there are no extents on it.
Examples
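For illustration (the migration name migrate_012 is hypothetical), clean a committed device
migration and rename the target after the source:
VPlexcli:/data-migrations/device-migrations> dm migration clean --force --rename-target --migrations migrate_012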
See also
l dm migration cancel
l dm migration commit
l dm migration pause
l dm migration remove
l dm migration resume
l dm migration start
dm migration commit
Commits a completed data migration allowing for its later removal.
Contexts
All contexts.
In /data-migrations context, command is migration commit.
In /data-migrations/extent-migrations context, command is commit.
In /data-migrations/device-migrations context, command is commit.
Syntax
dm migration commit
[-m|--migrations] context-path,context-path...
[-f|--force]
Arguments
Required arguments
* - argument is positional.
Description
The migration process inserts a temporary RAID 1 structure above the source device/extent with
the target device/extent as an out-of-date leg of the RAID 1. The migration can be understood as
the synchronization of the out-of-date leg (the target).
After the migration is complete, the commit step detaches the source leg of the RAID 1 and
removes the RAID 1.
The virtual volume, device or extent is identical to the one before the migration except that the
source device/extent is replaced with the target device/extent.
A migration must be committed in order to be cleaned.
CAUTION Verify that the migration has completed successfully before committing the
migration.
Examples
Commit a device migration:
See also
l dm migration cancel
l dm migration pause
l dm migration remove
l dm migration resume
l dm migration start
dm migration pause
Pauses the specified in-progress or queued data migrations.
Contexts
All contexts.
In /data-migrations context, command is migration pause.
In /data-migrations/extent-migrations context, command is pause.
In /data-migrations/device-migrations context, command is pause.
Syntax
dm migration pause
[-m|--migrations] context-path,context-path...
Arguments
Required arguments
[-m|--migrations] context- * List of one or more migrations to pause. Entries must
path,context-path... be separated by commas.
* - argument is positional.
Description
Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
Specify the migration by name if that name is unique in the global namespace. Otherwise, specify a
full pathname.
Use the dm migration resume command to resume a paused migration.
Example
Pause a device migration:
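For illustration (the migration name migrate_012 is hypothetical):
VPlexcli:/data-migrations/device-migrations> dm migration pause --migrations migrate_012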
See also
l dm migration cancel
l dm migration commit
l dm migration remove
l dm migration resume
l dm migration start
dm migration remove
Removes the record of canceled or committed data migrations.
Contexts
All contexts.
In /data-migrations context, command is migration remove.
In /data-migrations/extent-migrations context, command is remove.
In /data-migrations/device-migrations context, command is remove.
Syntax
dm migration remove
[-m|--migrations] context-path,context-path...
[-f|--force]
Arguments
Required arguments
* - argument is positional.
Description
Before a migration record can be removed, it must be canceled or committed to release the
resources allocated to the migration.
Example
Remove a migration:
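For illustration (the migration name migrate_012 is hypothetical):
VPlexcli:/data-migrations/device-migrations> dm migration remove --force --migrations migrate_012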
See also
l dm migration cancel
l dm migration commit
l dm migration pause
l dm migration resume
l dm migration start
dm migration resume
Resumes a previously paused data migration.
Contexts
All contexts.
In /data-migrations context, command is migration resume.
In /data-migrations/extent-migrations context, command is resume.
In /data-migrations/device-migrations context, command is resume.
Syntax
dm migration resume
[-m|--migrations] context-path,context-path...
Arguments
Required arguments
Description
Resumes a migration that was previously paused using the dm migration pause command.
Example
Resume a paused device migration:
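For illustration (the migration name migrate_012 is hypothetical):
VPlexcli:/data-migrations/device-migrations> dm migration resume --migrations migrate_012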
See also
l dm migration cancel
l dm migration commit
l dm migration pause
l dm migration remove
l dm migration start
dm migration start
Starts the specified migration.
Contexts
All contexts.
In /data-migrations context, command is migration start.
In /data-migrations/extent-migrations context, command is start.
In /data-migrations/device-migrations context, command is start.
Syntax
dm migration start
[-n|--name] migration-name...
[-f|--from] {source-extent|source-device}
[-t|--to] {target-extent|target-device}
[-s|--transfer-size] value
--paused
--force
Arguments
Required arguments
[-n|--name] * Name of the new migration. Used to track the migration’s progress,
migration-name... and to manage (cancel, commit, pause, resume) the migration.
[-f|--from] * The name of source extent or device for the migration. Specify the
{source-extent|source- source device or extent by name if that name is unique in the global
device} namespace. Otherwise, specify a full pathname.
If the source is an extent, the target must also be an extent. If the
source is a device, the target must also be a device.
[-t|--to] {target- * The name of target extent or device for the migration. Specify the
extent|target-device} target device or extent by name if that name is unique in the global
namespace. Otherwise, specify a full pathname.
Optional arguments
--force Do not ask for confirmation. Allows this command to be run using a
non-interactive script.
* - argument is positional.
Description
Starts the specified migration. If the target is larger than the source, the extra space on the target
is unusable after the migration, and a prompt to confirm the migration is displayed.
Up to 25 local and 25 distributed migrations (rebuilds) can be in progress at the same time. Any
migrations beyond those limits are queued until an existing migration completes.
Extent migrations - Extents are ranges of 4K byte blocks on a single LUN presented from a single
back-end array. Extent migrations move data between extents in the same cluster. Use extent
migration to:
l Move extents from a “hot” storage volume shared by other busy extents,
l De-fragment a storage volume to create more contiguous free space,
l Support technology refreshes.
Start and manage extent migrations from the extent migration context:
VPlexcli:/> cd /data-migrations/extent-migrations/
VPlexcli:/data-migrations/extent-migrations>
Note: Extent migrations are blocked if the associated virtual volume is undergoing expansion.
See the virtual-volume expand command.
Device migrations - Devices are RAID 0, RAID 1, or RAID C built on extents or other devices.
Devices can be nested; a distributed RAID 1 can be configured on top of two local RAID 0 devices.
Device migrations move data between devices on the same cluster or between devices on different
clusters. Use device migration to:
l Migrate data between dissimilar arrays
l Relocate a hot volume to a faster array
This command can fail on a cross-cluster migration if there is not a sufficient number of meta
volume slots. See the troubleshooting section of the VPLEX procedures in the SolVe Desktop for a
resolution to this problem.
Start and manage device migrations from the device migration context:
VPlexcli:/> cd /data-migrations/device-migrations/
VPlexcli:/data-migrations/device-migrations>
When running the dm migration start command across clusters, you might receive the
following error message:
See the troubleshooting section of the VPLEX procedures in the SolVe Desktop for instructions on
increasing the number of slots.
Prerequisites for target devices/extents
The target device or extent of a migration must:
l Be the same size or larger than the source device or extent
If the target is larger in size than the source, the extra space cannot be utilized. For example, if
the source is 200 GB, and the target is 500 GB, only 200 GB of the target can be used after a
migration. The remaining 300 GB cannot be claimed.
l Not have any existing volumes on it.
See the Dell EMC VPLEX Administration Guide for detailed information on data migration.
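As a sketch (the migration name and device paths are hypothetical), a device migration can be
started from the device-migrations context:
VPlexcli:/data-migrations/device-migrations> dm migration start --name migrate_012 --from /clusters/cluster-1/devices/dev_src --to /clusters/cluster-2/devices/dev_tgt
The migration can then be tracked, paused, committed, or canceled by the name migrate_012.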
See also
l batch-migrate create-plan
l batch-migrate start
l dm migration cancel
l dm migration commit
l dm migration pause
l dm migration remove
l dm migration resume
drill-down
Displays the components of a view, virtual volume or device, down to the storage-volume context.
Contexts
All contexts.
Syntax
drill-down
[-v|--storage-view] context-path,context-path...
[-o|--virtual-volume] context-path,context-path...
[-r|--device] context-path,context-path...
Arguments
Required arguments
[-o|--virtual-volume] List of one or more virtual volumes to drill down. Entries must
context-path,context-path... be separated by commas. Glob style pattern matching is
supported.
[-r|--device] context- List of one or more devices to drill down. Entries must be
path,context-path... separated by commas. Glob style pattern matching is
supported.
Description
Displays the components of the specified object.
To display a list of available objects, use the drill-down object-type command followed by the
<TAB> key, where object type is storage-view, device, or virtual-volume.
Examples
Display the components of a virtual volume:
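For illustration (the volume name is hypothetical):
VPlexcli:/clusters/cluster-1> drill-down --virtual-volume ExchangeDD_vol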
See also
l tree
ds dd convert-to-local
To convert a distributed device to a local device, this command detaches the leg that is not on the
specified cluster.
Context
All contexts
Syntax
ds dd convert-to-local
[-h | --help]
[-v | --verbose]
[[-c | --cluster = ]cluster-context]
[-f | --force]
[[-d | --distributed-device=] distributed device]
Arguments
Optional arguments
-h | --help Displays the usage for this command.
-v | --verbose Provides more output during command execution. This may not
have any effect for some commands.
-c | --cluster= cluster context Specifies the context path of the cluster where the
distributed device will be local. If the device is exported to
any cluster it must be the chosen cluster.
The remaining leg becomes the supporting device of the virtual volume. The target device must
NOT be a migration temporary device, and must not be exported to any cluster other than the
specified cluster. For distributed devices that are part of a consistency group, refer to the
consistency-group convert-local command.
ds dd create
Creates a new distributed-device.
Contexts
All contexts.
Syntax
ds dd create
[-n|--name] name
[-d|--devices] context-path [,context-path,...]
[-l|--logging-volumes] context-path [,context-path,...]
[-r|--rule-set] rule-set
[-s|--source-leg] context-path
[-f|--force]
Arguments
Required arguments
[-n|--name] name * The name of the new distributed device. Must be unique across
the VPLEX.
[-d|--devices] * List of one or more local devices that will be legs in the new
context-path [, context- distributed device.
path,...]
[-l|--logging- List of one or more logging volumes to use with this device. If no
volumes] context-path [, logging volume is specified, a logging volume is automatically
context-path,...] selected from any available logging volume that has sufficient space
for the required entries. If no available logging volume exists, an
error message is returned.
Optional arguments
[-r|--rule-set] rule- The rule-set to apply to the new distributed device. If the --rule-
set set argument is omitted, the cluster that is local to the
management server is assumed to be the winner in the event of an
inter-cluster link failure.
[-s|--source-leg] Specifies one of the local devices to use as the source data image
context-path for the new device. The command copies data from the source-leg
to the other legs of the new device.
[-f|--force] Forces a rule-set with a potential conflict to be applied to the new
distributed device.
* - argument is positional.
Description
The new distributed device consists of two legs: one local device at each cluster.
WARNING Without --source-leg, a device created by this command does not initialize its
legs, or synchronize the contents of the legs. Because of this, consecutive reads of the same
block may return different results for blocks that have never been written. Host reads at
different clusters are almost certain to return different results for the same unwritten block,
unless the legs already contain the same data. Do not use this command without --source-
leg unless you plan to initialize the new device using host tools.
CAUTION Use this command only if the resulting device will be initialized using tools on the
host.
Do not use this command if one leg of the resulting device contains data that must be
preserved. Applications using the device may corrupt the pre-existing data.
To create a device when one leg of the device contains data that must be preserved, use the
device attach-mirror command to add a mirror to the leg. The data on the leg will be copied
automatically to the new mirror.
The individual local devices may include any underlying type of storage volume or geometry (RAID
0, RAID 1, or RAID C), but they should be the same capacity.
If a distributed device is configured with local devices of different capacities:
l The resulting distributed device is only as large as the smaller local device
l The leftover capacity on the larger device is not available
To create a distributed device without wasting capacity, choose local devices on each cluster with
the same capacity.
The geometry of the new device is automatically RAID 1.
Each cluster in the VPLEX can contribute a maximum of one component device to the new
distributed device.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to this
problem.
CAUTION If there is pre-existing data on a storage-volume, and the storage-volume is not
claimed as being application consistent, converting an existing local RAID device to a
distributed RAID using the ds dd create command will not initiate a rebuild to copy the
data to the other leg. Data will exist at only one cluster. To prevent this, do one of the
following:
1. Claim the disk with data using the application consistent flag
2. Create a single-legged RAID 1 or RAID 0 and add a leg using the device attach-mirror
command.
Use the set command to enable/disable automatic rebuilds on the distributed device. The rebuild
setting is immediately applied to the device.
l set rebuild-allowed true starts or resumes a rebuild if mirror legs are out of sync.
l set rebuild-allowed false stops a rebuild in progress.
When set to true, the rebuild continues from the point where it was halted. Only those portions
of the device that have not been rebuilt are affected. The rebuild does not start over.
Examples
In the following example, the ds dd create command creates a new distributed device with the
following attributes:
l Name: ExchangeDD
l Devices:
n /clusters/cluster-2/devices/s6_exchange
n /clusters/cluster-1/devices/s8_exchange
l Logging volumes:
n /clusters/cluster-1/system-volumes/cluster_1_loggingvol
n /clusters/cluster-2/system-volumes/cluster_2_loggingvol
l Rule-set: rule-set-7a
In the following example, the ds dd create command creates a distributed device, and with
the default rule-set:
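A sketch of such an invocation, reusing the device names above; because --rule-set is
omitted, the cluster local to the management server is assumed to be the winner after an
inter-cluster link failure:
VPlexcli:/distributed-storage/distributed-devices> ds dd create --name ExchangeDD --devices /clusters/cluster-1/devices/s8_exchange,/clusters/cluster-2/devices/s6_exchange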
See also
l device attach-mirror
l ds dd destroy
l local-device create
ds dd declare-winner
Declares a winning cluster for a distributed-device that is in conflict after a link outage.
Contexts
All contexts.
In /distributed-storage/distributed-device context, command is declare-winner.
In /distributed-storage context, command is dd declare-winner.
Syntax
ds dd declare-winner
[-c|--cluster] context-path
[-d|--distributed-device] context-path
[-f|--force]
Arguments
Required arguments
[-c|--cluster] context-path * Specifies the winning cluster.
[-d|--distributed-device] context- Specifies the distributed device for which to
path declare a winning cluster.
[-f|--force] Forces the declare-winner command to be
issued.
* - argument is positional.
Description
If the legs at two or more clusters are in conflict, use the ds dd declare-winner command to
declare a winning cluster for a specified distributed device.
Examples
VPlexcli:/distributed-storage/distributed-devices> ds dd declare-winner --
distributed-device DDtest_4 --cluster cluster-2 --force
See also
l ds dd create
ds dd destroy
Destroys the specified distributed-device(s).
Contexts
All contexts.
Syntax
ds dd destroy
[-d|--distributed-device] context-path, context-path,...
[-f|--force]
Arguments
Required arguments
* - argument is positional.
Description
In order to be destroyed, the target distributed device must not host virtual volumes.
Examples
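For illustration (the device name DDtest_4 is hypothetical), destroy a distributed device that
no longer hosts virtual volumes:
VPlexcli:/distributed-storage/distributed-devices> ds dd destroy --distributed-device DDtest_4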
See also
l ds dd create
ds dd remove-all-rules
Removes all rules from all distributed devices.
Contexts
All contexts.
Syntax
ds dd remove-all-rules
[-f|--force]
Arguments
Optional arguments
Description
From any context, removes all rules from all distributed devices.
WARNING There is NO undo for this procedure.
Examples
VPlexcli:/distributed-storage/distributed-devices/dd_23> remove-all-rules
All the rules in distributed-devices in the system will be removed. Continue?
(Yes/No) yes
See also
l ds rule destroy
l ds rule island-containing
l ds rule-set copy
l ds rule-set create
l ds rule-set destroy
l ds rule-set what-if
ds dd set-log
Allocates/unallocates segments of a logging volume to a distributed device or a component of a
distributed device.
Contexts
All contexts.
Syntax
ds dd set-log
Required arguments
[-d|--distributed- One or more distributed devices for which segments of the specified
devices] context-path, logging volume are allocated/unallocated.
context-path... All components of the distributed-device are included.
Optional arguments
Description
Logging volumes keep track of 4K byte blocks written during an inter-cluster link failure. When the
link recovers, VPLEX uses the information in logging volumes to synchronize the mirrors.
WARNING If no logging volume is allocated to a distributed device, a full rebuild of the
device occurs when the inter-cluster link is restored after an outage.
Do not change a device’s logging volume unless the existing logging-volume is corrupted or
unreachable, or to move the logging volume to a new disk.
Use the ds dd set-log command only to repair a corrupted logging volume or to transfer
logging to a new disk.
Use the --distributed-devices argument to allocate/unallocate segments on the specified
logging volume to the specified device.
Use the --distributed-devices-component argument to allocate/unallocate segments on
the specified logging volume to the specified device component.
Note: Specify either distributed devices or distributed device components. Do not mix devices
and components in the same command.
If the logging volume specified by the --logging-volume argument does not exist, it is created.
Use the --cancel argument to delete the log setting for a specified device or device component.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting procedures for VPLEX in the SolVe Desktop for a resolution to this problem.
Examples
Allocate segments of a logging volume to a distributed device:
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> ds dd set-
log --distributed-devices TestDisDevice --logging-volumes /clusters/cluster-2/
system-volumes/New-Log_Vol
Unallocate the logging-volume segments for a distributed device:
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> ds dd set-
log --distributed-devices TestDisDevice --cancel
Attempt to cancel a logging volume for a distributed device that is not fully logged:
WARNING Issuing the cancel command on a distributed device that is not fully logged results
in a warning message.
VPlexcli:/distributed-storage/distributed-devices/dr1_C12_0249> ds dd set-log --
distributed-devices dr1_C12_0249 --cancel
WARNING: This command will remove the logging segments from distributed device
'dr1_C12_0249'.
If a distributed device is not fully logged, it is vulnerable to full rebuilds
following
inter-cluster WAN link failure or cluster failure.
It is recommended that the removed logging-segments be restored as soon as possible.
See also
l logging-volume create
ds rule destroy
Destroys an existing rule.
Contexts
All contexts.
In /distributed-storage context, command is rule destroy.
Syntax
ds rule destroy
[-r|--rule] rule
Arguments
Required arguments
Description
A rule-set contains rules. Use the ll command in the rule-set context to display the rules in the
rule-set.
Examples
Use the ds rule destroy command to destroy a rule in the rule set.
VPlexcli:/distributed-storage/rule-sets/ruleset_recreate5/rules> ll
RuleName RuleType Clusters ClusterCount Delay Relevant
-------- ----------------- --------- ------------ ----- --------
rule_1 island-containing cluster-1 2 10s true
VPlexcli:/distributed-storage/rule-sets/ruleset_recreate5/rules> rule destroy
rule_1
See also
l ds rule island-containing
l ds rule-set copy
l ds rule-set create
l ds rule-set destroy
l ds rule-set what-if
ds rule island-containing
Adds an island-containing rule to an existing rule-set.
Contexts
All contexts.
In /distributed-storage context, command is rule island-containing.
Syntax
ds rule island-containing
[-c|--clusters] context-path,context-path...
[-d|--delay] delay
[-r|--rule-set] context-path
Arguments
Required arguments
* - argument is positional.
Description
Creates a rule that determines when to resume I/O on all clusters in the island containing the
specified cluster.
Example
In the following example, the rule island-containing command creates a rule that dictates:
1. VPLEX waits for 10 seconds after a link failure and then:
2. Resumes I/O to the island containing cluster-1,
3. Detaches any other islands.
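A sketch of the invocation just described (the rule-set name is hypothetical):
VPlexcli:/distributed-storage> ds rule island-containing --clusters cluster-1 --delay 10s --rule-set ruleset_recreate5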
See also
l ds dd remove-all-rules
l ds rule destroy
l ds rule-set copy
l ds rule-set create
l ds rule-set destroy
l ds rule-set what-if
ds rule-set copy
Copies an existing rule-set.
Contexts
All contexts.
In /distributed-storage/rule-sets context, command is copy.
In /distributed-storage context, command is rule-set copy.
Syntax
ds rule-set copy
[-s|--source] rule-set
[-d|--destination] new-rule-set
Arguments
Required arguments
[-s|--source] rule-set * Source rule-set.
[-d|--destination] new-rule-set The destination rule-set name.
* - argument is positional.
Description
Copies an existing rule-set and assigns the specified name to the copy.
Example
VPlexcli:/distributed-storage/rule-sets> ll
Name         PotentialConflict  UsedBy
-----------  -----------------  ----------------------------------------
TestRuleSet  false
VPlexcli:/distributed-storage/rule-sets> rule-set copy --source TestRuleSet --
destination CopyOfTest
VPlexcli:/distributed-storage/rule-sets> ll
Name         PotentialConflict  UsedBy
-----------  -----------------  ----------------------------------------
CopyOfTest   false
TestRuleSet  false
See also
l ds dd remove-all-rules
l ds rule destroy
l ds rule island-containing
l ds rule-set create
l ds rule-set destroy
l ds rule-set what-if
ds rule-set create
Creates a new rule-set with the given name and encompassing clusters.
Contexts
All contexts.
In /distributed-storage/rule-sets context, command is create.
In /distributed-storage context, command is rule-set create.
Syntax
ds rule-set create
[-n|--name] rule-set
Arguments
Required arguments
[-n|--name] rule-set Name of the new rule-set.
Examples
Create a rule-set:
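For illustration, from the rule-sets context (the rule-set name is hypothetical):
VPlexcli:/distributed-storage/rule-sets> rule-set create --name TestRuleSet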
See also
l ds dd remove-all-rules
l ds rule destroy
l ds rule island-containing
l ds rule-set copy
l ds rule-set destroy
l ds rule-set what-if
l set
ds rule-set destroy
Destroys an existing rule-set.
Contexts
All contexts.
In /distributed-storage/rule-sets context, command is destroy.
In /distributed-storage context, command is rule-set destroy.
Syntax
ds rule-set destroy
[-r|--rule-set] rule-set
Arguments
Required arguments
[-r|--rule-set] rule-set Name of the rule-set to destroy.
Description
Deletes the specified rule-set. The specified rule-set can be empty or can contain rules.
Before deleting a rule-set, use the set command to detach the rule-set from any virtual volumes
associated with the rule-set.
Examples
Delete a rule-set:
VPlexcli:/distributed-storage/rule-sets/TestRuleSet> ll
Attributes:
Name Value
------------------ ------------------------
key ruleset_5537985253109250
potential-conflict false
used-by dd_00
VPlexcli:/distributed-storage/rule-sets/TestRuleSet> cd //distributed-
storage/distributed-devices/dd_00
VPlexcli:/distributed-storage/distributed-devices/dd_00>set rule-set-name
""
Removing the rule-set from device 'dd_00' could result in data being
unavailable during a WAN link outage. Do you wish to proceed ? (Yes/No)
yes
VPlexcli:/distributed-storage/distributed-devices/dd_00>ds rule-set
destroy TestRuleSet
See also
l ds dd remove-all-rules
l ds rule destroy
l ds rule island-containing
l ds rule-set copy
l ds rule-set create
l ds rule-set what-if
l set
ds rule-set what-if
Tests if/when I/O is resumed at individual clusters, according to the current rule-set.
Contexts
All contexts.
In /distributed-storage/rule-sets context, command is what-if.
In /distributed-storage context, command is rule-set what-if.
Syntax
ds rule-set what-if
[-i|--islands] “cluster-1,cluster-2”
[-r|--rule-set] context-path
Arguments
Required arguments
Description
This command supports only two clusters and one island.
Examples
Test a rule-set:
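For illustration (the rule-set name is hypothetical); only two clusters and one island are
supported:
VPlexcli:/distributed-storage> ds rule-set what-if --islands "cluster-1,cluster-2" --rule-set TestRuleSet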
See also
l ds dd remove-all-rules
l ds rule destroy
l ds rule island-containing
l ds rule-set copy
l ds rule-set create
l ds rule-set destroy
ds summary
Displays summary information about distributed devices.
Contexts
All contexts.
In /distributed-storage context, command is summary.
Syntax
ds summary
Description
Displays summarized information for all distributed-devices.
Displays more detailed information for any device with a health-state or operational-
status other than ok, and a service-status other than running.
Displays devices per cluster, and calculates total and free capacity.
Use the --verbose argument to display additional information about unhealthy volumes in each
consistency group.
Field Description
requires-resume-at-loser - Displayed on the losing side when the inter-cluster link heals
after an outage. After the inter-cluster link is restored, the losing cluster discovers that
its peer was declared the winner and resumed I/O. Use the consistency-group
resume-at-loser command to make the view of data consistent with the winner, and to
resume I/O at the loser.
restore-link-or-choose-winner - I/O is suspended at all clusters because of a cluster
departure, and cannot automatically resume. This can happen if:
n There is no detach-rule
n If the detach-rule is 'no-automatic-winner', or
n If the detach-rule cannot fire because its conditions are not met.
For example, if more than one cluster is active at the time of an inter-cluster link outage,
the 'active-cluster-wins' rule cannot take effect. When this detail is present, I/O will
not resume until either the inter-cluster link is restored, or the user intervenes to select
a winning cluster with the consistency-group choose-winner command.
unhealthy-devices - I/O has stopped in this consistency group because one or more volumes
is unhealthy and cannot perform I/O.
will-rollback-on-link-down - If there were a link-down now, the winning cluster would have
to roll back the view of data in order to resume I/O.
Examples
Display summary information when no devices are unhealthy:
VPlexcli:/distributed-storage> ds summary
Slot usage summary:
Display summary information when some devices are unhealthy:
VPlexcli:/> ds summary
Slot usage summary:
Total 912 slots used by distributed device logging segments.
Distributed Volumes (not in Consistency Groups) Unhealthy Summary:
Device Name Health State Operational Status Service Status
----------- ------------- ------------------ -------------------
DR10 major-failure stressed cluster-unreachable
Distributed volumes (in consistency groups) unhealthy summary:
CG Name          Cache Mode   Number of  Cluster    Operational      Status Details
                              Unhealthy             Status
                              Vols
---------------- ------------ ---------- ---------- ---------------- -------------------------------------------
AA_ACW_Cluster12 synchronous 9 cluster-1 unknown []
cluster-2 suspended
[cluster-departure,
restore-link-or-choose-winner]
AP_ACW_Cluster1 synchronous 10 cluster-1 unknown []
cluster-2 suspended
[cluster-departure,
restore-link-or-choose-winner]
AP_ACW_Cluster2 synchronous 5 cluster-1 unknown []
cluster-2 suspended
[cluster-departure,
restore-link-or-choose-winner]
Distributed devices health summary:
Total 25 devices, 25 unhealthy.
Cluster summary:
Cluster cluster-2 : 25 distributed devices.
Cluster cluster-1 : 25 distributed devices.
Capacity summary:
0 devices have some free capacity.
0B free capacity of 500G total capacity.
Distributed volume summary:
Total 24 distributed devices in consistency groups,
24 unhealthy.
Total 1 distributed devices not in consistency
groups, 1 unhealthy.
Use the --verbose argument to display detailed information about unhealthy volumes in each
consistency group:
Distributed volumes (in consistency groups) unhealthy details:
CG Name          Unhealthy Vols
---------------- ------------------------------------------------------------------------------
AA_ACW_Cluster12 ['DR11_vol', 'DR12_vol', 'DR13_vol', 'DR14_vol',
'DR15_vol', 'DR16_vol', 'DR17_vol', 'DR18_vol',
'DR19_vol']
AP_ACW_Cluster1 ['DR20_vol', 'DR21_vol', 'DR22_vol', 'DR23_vol',
'DR24_vol', 'DR25_vol', 'DR6_vol', 'DR7_vol',
'DR8_vol', 'DR9_vol']
AP_ACW_Cluster2 ['DRa_12_vol', 'DRb_12_vol', 'DRc_12_vol', 'DRd_12_vol',
'DRe_12_vol']
Distributed devices health summary:
Total 25 devices, 25 unhealthy.
Cluster summary:
Cluster cluster-2 : 25 distributed devices.
Cluster cluster-1 : 25 distributed devices.
Capacity summary:
0 devices have some free capacity.
0B free capacity of 500G total capacity.
Distributed volume summary:
Total 24 distributed devices in consistency groups,
24 unhealthy.
Total 1 distributed devices not in consistency
groups, 1 unhealthy.
See also
l export port summary
esrs import-certificate
Fetches the security certificate from a remote system and imports it to the local keystore. It
resets the CLI process.
Contexts
All contexts.
Syntax
esrs import-certificate
-h | --help
--verbose
-f | --force
Arguments
Optional arguments
-h |--help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
-f |--force Bypasses prompts for user input, proceeding with defaults if none is provided.
Description
This command runs an interview script to import the SRSv3 security certificate.
Note: In Metro systems, run this command on both the management servers.
Examples
Import ESRS certificate.
-----Certificate Details-----
Certificate fingerprints:
MD5: C4:E6:AB:51:D0:0C:6C:0C:99:98:25:CC:75:3C:09:72
SHA1: 8B:5D:EE:71:0B:38:DB:57:A3:B6:F2:DE:8E:71:0E:97:BB:10:EF:27
SHA256:
B5:B6:8A:0B:FE:D8:0C:D9:CF:E3:9A:E3:3D:CD:81:6E:74:7C:CA:72:ED:0B:06:04:8E:F8:
24:53:B2:E6:6E:A5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
See also
l esrs register
l esrs un-register
l esrs status
esrs register
Registers the SRSv3 on the cluster.
Contexts
All contexts.
Syntax
esrs register
[-u|--username] username
[-p|--password] password
[-i|--ip-address] ip_address
-h | --help
--verbose
Arguments
Required arguments
[-u |--username] * Username of the user authorized to configure SRSv3 gateway on
the VPLEX cluster.
[-p |--password] * The password to authenticate the user.
* - argument is positional.
Description
Adds information about an SRSv3 gateway to VPLEX. Used by VPLEX to enable License Usage
Data Transfer.
Note: In Metro systems, run this command on both the management servers.
Examples
Register ESRS.
See also
l esrs un-register
l esrs status
l esrs import-certificate
esrs status
Displays the SRSv3 status on the cluster.
Contexts
All contexts.
Syntax
esrs status
-h | --help
--verbose
Arguments
Optional arguments
-h |--help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
Description
Displays the connectivity status between VPLEX and SRSv3.
Note: In Metro systems, run this command on both the management servers.
Examples
ESRS status.
See also
l esrs register
l esrs un-register
l esrs import-certificate
esrs un-register
Unregisters the SRSv3 on the cluster.
Contexts
All contexts.
Syntax
esrs un-register
-h | --help
--verbose
Arguments
Optional arguments
-h |--help Displays the usage for this command.
--verbose Provides more output during command execution. This may not have any
effect for some commands.
Description
Removes information about SRSv3 gateway from VPLEX.
Note: In Metro systems, run this command on both the management servers.
Examples
Unregister ESRS.
See also
l esrs register
l esrs status
l esrs import-certificate
event-test
Verifies that the management server can receive events from a director.
Contexts
All contexts.
Syntax
event-test
[-d|--directors] context-path,context-path...
[-c|--clusters] context-path,context-path...
[-l|--level] level
[-o|--component] component
[-m|--message] “message”
Arguments
Required arguments
[-l|--level] level Level of the event. Must be one of the following:
emergency - System is unusable.
alert - Immediate action is required.
critical - Critical condition detected.
error - Significant error condition detected.
warning - Warning condition is detected.
notice - Normal, but significant condition.
info - Information messages.
debug - Detailed event information used by Dell EMC for
debugging.
Optional arguments
[-d|--directors] context-path,context-path... One or more directors from which to send the test event.
[-c|--clusters] context-path,context-path... One or more clusters from which to send the test event.
Events are sent from every director in the specified cluster(s).
[-o|--component] component Text to include in the component portion of the test message.
[-m|--message] “message” Text of the message to send in the event test, enclosed in quotes. This
text is written to the firmware log prefixed by “EVENT-TEST”.
Description
Tests the logging path from one or more directors to the management server.
Every component of the software that runs on the director logs messages to signify important
events. Each logged message/event is transferred from the director to the management server
and written to the firmware log file.
Use this command to verify that this logging path is working. Specify the level of event to be
generated. Optionally, specify the text to appear in the component portion of the test message.
Check the appropriate firmware log for the event created by this command.
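As an illustration, a monitoring script could pick test events out of the firmware log by the documented “EVENT-TEST” prefix. This is a minimal sketch: the severity ranking mirrors standard syslog ordering and is an assumption, since the command documents only the level names.

```python
# Event levels from the --level argument, ordered most to least severe.
# The ordering mirrors syslog severities (an assumption; only the names
# are documented by the command).
LEVELS = ["emergency", "alert", "critical", "error",
          "warning", "notice", "info", "debug"]

def at_least(level: str, threshold: str) -> bool:
    """True if `level` is at least as severe as `threshold`."""
    return LEVELS.index(level) <= LEVELS.index(threshold)

def find_test_events(log_lines):
    """Return firmware-log lines written by event-test; such lines
    carry the documented 'EVENT-TEST' prefix."""
    return [ln for ln in log_lines if "EVENT-TEST" in ln]
```

For example, at_least("alert", "error") is true, and find_test_events() keeps only lines containing the test prefix.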
Example
In the following example:
l The event test command creates an alert event for the specified director.
l The exit command exits the CLI.
l The tail command displays the firmware log for the director.
exec
Executes an external program.
Contexts
All contexts.
Syntax
exec command
Description
The program can be executed with zero or more arguments.
Note: The correct syntax for program names and arguments depends on the host system.
Example
To display the date and time on Director-1-1-A:
exit
Exits the shell.
Contexts
All contexts.
Syntax
exit
[-e|--exit-code] exit-code
[-s|--shutdown]
Arguments
Optional arguments
[-e|--exit-code] exit-code Returns the specified value when the shell exits. If no exit code is
specified, then 0 is returned.
[-s|--shutdown] When running in server mode, shuts down the shell instead of closing the
socket. No effect if not running in server mode.
Description
If the shell is not embedded in another application, the shell process will stop.
Example
VPlexcli:/> exit
Connection closed by foreign host.
service@ManagementServer:~>
Optional arguments
[-t|--timeout] seconds The maximum number of seconds to wait for the front-end fabric
discovery operation to complete. Default: 300. Range: 1-3600.
[-w|--wait] seconds The maximum number of seconds to wait for a response from the fabric
discovery. Default: 10. Range: 1-3600.
[-c|--cluster] context-path Discover initiator ports on the specified cluster.
Description
Initiator discovery finds unregistered initiator-ports on the front-end fabric and determines the
associations between the initiator ports and the target ports.
Use the ll command in initiator-ports context to display the same information for small
configurations (where a timeout does not occur).
Use the export initiator-port discovery command for large configurations in which the
ls command might encounter timeout limits.
Example
Discover initiator ports on another cluster:
P000000003CA001CB-A1-FC00,
P000000003CB000E6-B1-FC00,
P000000003CB001CB-B1-FC00
.
.
.
See also
l export initiator-port register
Arguments
Required arguments
[-i|--initiator-port] initiator-port * Name to assign to the registered port. The name must be
unique in the system. The command fails if the specified name is already in use.
[-p|--port] port * Port identifier. For Fibre Channel initiators, a WWN pair as follows:
portWWN|nodeWWN. nodeWWN is optional. Each WWN is either '0x'
followed by one or more hex digits, or an abbreviation using the format:
string:number[,number]. Following are four examples:
0xd1342a|0xd1342b
hyy1:194e,4|hyy1:194e
0xd1342a
hyy1:194e,4
Optional arguments
[-c|--cluster] Cluster on which the initiator port is registered.
context-path
[-t|--type] {type} Type of initiator port. If no type is specified, the default value is used.
l hpux - Hewlett Packard UX
l sun-vcs - Sun Solaris
l aix - IBM AIX
l recoverpoint - Dell EMC RecoverPoint
l ibm-d910 - IBM Series D910
l default - If no type is specified.
* - argument is positional.
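The two WWN spellings accepted by --port (a '0x' hex form and a string:number[,number] abbreviation) can be sketched as a small validator. The exact grammar is an assumption inferred from the four documented examples; treating the numbers as hex digits is also an assumption.

```python
import re

# Hex form: '0x' plus hex digits. Abbreviation form: string:number[,number].
# Hex digits for the abbreviation numbers are assumed from the examples.
_WWN = re.compile(r"^(0x[0-9a-fA-F]+|[A-Za-z0-9]+:[0-9a-fA-F]+(,[0-9a-fA-F]+)?)$")

def parse_wwn_pair(spec: str):
    """Split a portWWN|nodeWWN spec (nodeWWN optional) and validate
    both halves. Returns (portWWN, nodeWWN or None)."""
    parts = spec.split("|")
    if len(parts) > 2 or not all(_WWN.match(p) for p in parts):
        raise ValueError(f"malformed WWN spec: {spec!r}")
    return parts[0], (parts[1] if len(parts) == 2 else None)
```

All four documented examples parse, e.g. parse_wwn_pair("hyy1:194e,4|hyy1:194e") splits into the port and node abbreviations.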
Description
Use the ll command in /engines/engine/directors/director/hardware /ports/
port context to display portWWNs and nodeWWNs.
Registers an initiator-port and associates it with a SCSI address. For Fibre Channel, the SCSI
address is represented by a WWN pair.
See also
l export initiator-port discovery
l export initiator-port unregister
l export target-port renamewwns
l set
Required arguments
[-f|--file] file * The host declaration file path name.
Optional arguments
[-c|--cluster] cluster-context * The cluster at which to create the view.
[-p|--ports] port,port... List of port names. If omitted, all ports at the cluster will be
used. Entries must be separated by commas.
* - argument is positional.
Description
Reads host port WWNs (with optional node WWNs) and names from a host declaration file.
Creates a view, registering each port WWN /name pair as an initiator port in that view.
The host description file contains one line for each port on the host in the following format:
port WWN [|node WWN] port-name
Hosts must be registered in order to be exported (added to a storage view). Registering consists
of naming the initiator and listing its port's WWN/GUID.
Each port of a server’s HBA/HCA must be registered as a separate initiator.
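As a sketch, the host declaration file format above (one `port WWN [|node WWN] port-name` line per port) could be read like this. Skipping blank lines is an assumption; comment handling is not documented.

```python
def parse_host_file(text: str):
    """Parse host declaration lines of the documented form
        port WWN [|node WWN] port-name
    into (portWWN, nodeWWN or None, port-name) tuples.
    Blank lines are skipped (an assumption)."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # First whitespace-separated token is the WWN spec, rest is the name.
        wwns, name = line.split(None, 1)
        port, _, node = wwns.partition("|")
        entries.append((port, node or None, name.strip()))
    return entries
```

Each resulting tuple corresponds to one initiator-port registration in the created view.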
See also
l export initiator-port discovery
l export initiator-port unregister
Arguments
Optional arguments
* - argument is positional.
Description
Displays a list of target port logins for the specified initiator ports.
Example
Shows target port logins for all the initiator ports in VPLEX:
Cluster      Initiator Port   Target Port Logins
------------ ---------------- -------------------------
cluster-2    initiator_22     None
See also
l export initiator-port discovery
l export initiator-port register
l export initiator-port register-host
l export initiator-port unregister
Required arguments
[-i|--initiator-port] initiator-port [, * One or more initiator ports to remove. Entries
initiator-port...] must be separated by commas.
Optional arguments
[-f|--force] Destroys the initiator-ports even if they are in use.
* - argument is positional.
Example
See also
l export initiator-port register
Optional arguments
[-c|--clusters] cluster [, cluster,...] Display unhealthy ports for only the specified
cluster(s).
[-h|--help] Displays command line help.
[--verbose] Displays the names of the unhealthy volumes exported
on each port.
Description
Prints a summary of the views and volumes exported on each port, and a detailed summary of the
unhealthy ports.
In the root context, displays information for all clusters.
In /cluster context or below, displays information for only the current cluster.
Example
Display port health for a specified cluster:
See also
l ds summary
l export storage-view summary
l extent summary
l local-device summary
l storage-volume summary
l virtual-volume provision
Required arguments
Optional arguments
* - argument is positional.
Description
Select ports from two different directors so as to maximize redundancy.
Example
Add the initiator iE_209_hba0 to the view named Dell_209_view:
See also
l export storage-view create
l export storage-view removeinitiatorport
Required arguments
Optional arguments
* - argument is positional.
Description
Use the ll /clusters/cluster/exports/ports command to display ports on the cluster.
Example
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> export
storage-view addport --ports P000000003CB00147-B0-FC03
See also
l export storage-view create
l export storage-view removeport
[-v|--view] context-path
[-o|--virtual-volumes] virtual-volume, virtual-volume...
[-f|--force]
Arguments
Required arguments
[-o|--virtual-volumes] virtual-volume,virtual-volume... * List of one or more virtual volumes or
LUN-virtual-volume pairs. Entries must be separated by commas. LUN-virtual-volume pairs must be
enclosed in parentheses (). Virtual volumes and LUN-virtual-volume pairs can be typed on the
same command line. When only virtual volumes are specified, the next available LUN is
automatically assigned by VPLEX.
Optional arguments
[-v|--view] context-path View to add the specified virtual volumes to.
[-f|--force] Force the virtual volumes to be added to the view even if they are
already in use, if they are already assigned to another view, or if
there are problems determining the view's state. Virtual volumes that
already have a LUN in the view will be re-mapped to the newly-
specified LUN.
* - argument is positional.
Description
Add the specified virtual volume to the specified storage view. Optionally, specify the LUN to
assign to the virtual volume. Virtual volumes must be in a storage view in order to be accessible to
hosts.
When virtual volumes are added using only volume names, the next available LUN number is
automatically assigned.
Virtual-volumes and LUN-virtual-volume pairs can be specified in the same command line. For
example:
r0_1_101_vol,(2,r0_1_102_vol),r0_1_103_vol
To modify the LUN assigned to a virtual volume, specify a virtual volume that is already added to
the storage view and provide a new LUN.
Note: You cannot add a virtual volume to a storage view if the initialization status of the virtual
volume is failed or in-progress.
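The volume-list syntax above (bare names mixed with parenthesized LUN pairs) can be illustrated with a small parser. Tokenizing rules beyond the single documented example are assumptions.

```python
import re

def parse_volume_specs(spec: str):
    """Parse a volume list such as
        r0_1_101_vol,(2,r0_1_102_vol),r0_1_103_vol
    into (LUN or None, volume-name) tuples. A LUN of None means VPLEX
    assigns the next available LUN automatically."""
    out = []
    # Either a parenthesized (LUN,volume) pair or a bare volume name.
    for lun, vol, bare in re.findall(r"\((\d+),([^)]+)\)|([^,()]+)", spec):
        out.append((None, bare) if bare else (int(lun), vol))
    return out
```

The documented example yields one explicit LUN assignment (LUN 2) and two automatic ones.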
Example
Add a virtual volume Symm1254_7BF_1_vol to the storage view E_209_view:
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> ll
Name Value
------------------------ --------------------------------------------------
controller-tag -
initiators []
operational-status stopped
port-name-enabled-status [P000000003CA00147-A1-FC01,true,suspended,
P000000003CB00147-B0-FC01,true,suspended]
ports [P000000003CA00147-A1-FC01, P000000003CB00147-B0-
FC01]
virtual-volumes
[(0,TestDisDevice_vol,VPD83T3:6000144000000010a0014760d64cb325,16G)]
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> export
storage-view addvirtualvolume (5,TestDisDevice_vol) --force
WARNING: Volume 'TestDisDevice_vol' already has LUN 0 in this view; remapping
to LUN 5.
VPlexcli:/clusters/cluster-1/exports/storage-views/TestStorageView> ll
Name Value
------------------------ --------------------------------------------------
controller-tag -
initiators []
operational-status stopped
port-name-enabled-status [P000000003CA00147-A1-FC01,true,suspended,
P000000003CB00147-B0-FC01,true,suspended]
ports [P000000003CA00147-A1-FC01, P000000003CB00147-B0-
FC01]
virtual-volumes
[(5,TestDisDevice_vol,VPD83T3:6000144000000010a0014760d64cb325,16G)]
Add a virtual volume to a view using the --force option from the root context:
See also
l export storage-view checkconfig
l export storage-view create
l export storage-view removevirtualvolume
l virtual-volume create
l virtual-volume re-initialize
See also
l export storage-view create
l export storage-view find
l export storage-view map
l export storage-view show-powerpath-interfaces
Required arguments
[-n|--name] name * Name of the new view. Must be unique
throughout VPLEX.
[-p|--ports] context-path,context- * List of one or more ports to add to the view.
path...
Optional arguments
[-c|--cluster] context-path The cluster to create the view on.
* - argument is positional.
Description
A storage view is a logical grouping of front-end ports, registered initiators (hosts), and virtual
volumes used to map and mask LUNs. Storage views are used to control host access to storage.
For hosts to access virtual volumes, the volumes must be in a storage view. A storage view
consists of:
l One or more initiators. Initiators are added to a storage view using the export storage-
view addinitiatorport command.
l One or more virtual volumes. Virtual volumes are added to a storage view using the export
storage-view addvirtualvolume command.
l One or more front-end ports. Ports are added to a storage view using the export storage-
view addport command.
CAUTION The name assigned to the storage view must be unique throughout the VPLEX.
In VPLEX Metro configurations, the same name must not be assigned to a storage view on
the peer cluster.
Example
Create a view named E_209_view for front-end ports A0 and B0:
See also
l export storage-view addport
l export storage-view addinitiatorport
l export storage-view addvirtualvolume
l export storage-view destroy
Required arguments
[-v|--view] context-path ... * Storage view to destroy.
Optional arguments
[-f|--force] Force the storage view to be destroyed even if it is in use.
* - argument is positional.
Description
Destroys the specified storage view.
Example
See also
l export storage-view create
l export storage-view removeinitiatorport
l export storage-view removeport
Optional arguments
[-c|--cluster] cluster Cluster to search for views.
[-v|--volume] volume Find the views exporting the specified volume. Identify the
volume by name, VPD83 identifier, or a name pattern with
wildcards.
[-l|--lun] LUN Find the views exporting the specified LUN number.
[-i|--initiator-port] initiator Find the views including the specified initiator. May
contain wildcards.
[-f|--free-lun] Find the next free LUN number for all views.
Description
This command is most useful for configurations with thousands of LUNs, and a large number of
views and exported virtual volumes.
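The wildcard matching described for --volume and --initiator-port can be illustrated as follows. Glob-style ('*', '?') matching is an assumption about the CLI's wildcard syntax, shown for illustration only.

```python
from fnmatch import fnmatchcase

def match_views(view_names, pattern):
    """Case-sensitive wildcard match over storage-view names,
    e.g. pattern 'Lico*'. Glob semantics are assumed."""
    return [v for v in view_names if fnmatchcase(v, pattern)]
```

For instance, the pattern “Lico*” from the example below would select views whose names start with “Lico”.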
Example
Find the next available LUN numbers on cluster 1:
Find the views exported by initiators whose name starts with “Lico”:
See also
l export initiator-port discovery
l export storage-view find-unmapped-volumes
l export storage-view map
l export storage-view summary
Required arguments
[-c|--cluster] cluster Cluster for which to display unexported storage volumes.
Description
Displays unexported virtual volumes in the specified cluster.
Also displays the remote virtual volumes (on the other cluster) that are unexported.
See also
l export storage-view addvirtualvolume
l export-storage-view removevirtualvolume
Required arguments
[-v|--views] view,view... * List of one or more storage views to map. Entries must be
separated by commas. May contain wildcards.
Optional arguments
[-f|--file] file Name of the file to send the output to. If no file is specified, output
is to the console screen.
* argument is positional.
Example
Display unhealthy storage volumes for a specified storage view:
.
.
See also
l export storage-view find-unmapped-volumes
l export storage-view find
l export storage-view summary
Required arguments
Optional arguments
* - argument is positional.
Description
Use the ll /clusters/cluster/exports/storage-views/storage-view command to
display the initiator ports in the specified storage view.
Example
Remove an initiator port from /clusters/cluster/exports/storage-views/storage-
view context:
VPlexcli:/clusters/cluster-1/exports/storage-views /LicoJ009>
removeinitiatorport -i LicoJ009_hba1
See also
l export storage-view addinitiatorport
l export storage-view removeport
Required arguments
Optional arguments
* - argument is positional.
Description
Use the ll /clusters/cluster/exports/storage-views/storage-view command to
display the ports in the specified storage view.
Example
Remove a port from /clusters/cluster/exports/storage-views/storage-view
context:
VPlexcli:/clusters/cluster-1/exports/storage-views/LicoJ009> removeport -p
P000000003CA00147-A0-FC02
See also
l export storage-view addport
l export storage-view destroy
Required arguments
[-o|--virtual-volumes] * List of one or more virtual volumes to be removed from the
volume,volume ... view. Entries must be separated by commas.
Optional arguments
[-f|--force] Force the virtual volumes to be removed from the view even
if the specified LUNs are in use, the view is live, or some of
the virtual volumes do not exist in the view.
[-v|--view] context-path View from which to remove the specified virtual volumes.
* - argument is positional.
Description
Use the ll /clusters/cluster/exports/storage-views/storage-view command to
display the virtual volumes in the specified storage view.
Example
Delete a virtual volume from the specified storage view, even though the storage view is active:
VPlexcli:/clusters/cluster-1/exports/storage-views> removevirtualvolume --
view E209_View --virtual-volume (1,test3211_r0_vol) --force
WARNING: The storage-view 'E209_View' is a live storage-view and is exporting
storage through the following initiator ports:
'iE209_hba1_b', 'iE209_hba0'. Performing this operation may affect hosts'
storage-view of storage. Proceeding anyway.
See also
l export storage-view addvirtualvolume
Arguments
Optional arguments
[-c|--cluster] context-path The cluster at which to show the PowerPath interface
mapping.
See also
l export storage-view checkconfig
l export storage-view find
l export storage-view map
l export storage-view summary
Optional arguments
[-c|--cluster] cluster, List of clusters. Entries must be separated by commas.
cluster... Display information only for storage views on the specified
clusters.
Description
At the root level, displays information for all clusters.
At the /clusters/cluster context and below, displays information only for views in the
cluster in that context.
Example
Display storage view summary for a specified cluster (no unhealthy views):
See also
l export port summary
l export storage-view checkconfig
l export storage-view map
l export storage-view show-powerpath-interfaces
l storage-volume summary
Syntax
export target-port renamewwns
[-p|--port] context-path
[-w|--wwns] wwns
Arguments
Required arguments
[-w|--wwns] wwns A WWN pair separated by “|”:
portWWN|nodeWWN
Each WWN is either '0x' followed by one or more hexadecimal digits
or an abbreviation, in the following format:
string:number[,number]
For example,
0xd1342a|0xd1342b
hyy1:194e,4|hyy1:194e
0xd1342a
hyy1:194e,4
Optional arguments
[-p|--port] context-path Target port for which to rename the WWN pair.
Description
Use the ll command in /clusters/cluster/export/port context to display portWWNs and
nodeWWNs.
CAUTION Disable the corresponding Fibre Channel port before executing this command.
Example
See also
l export initiator-port discovery
extent create
Creates one or more storage-volume extents.
Contexts
All contexts.
Syntax
extent create
[-s|--size] size
[-o|--block-offset] integer
[-n|--num-extents] integer
[-d|--storage-volumes] storage-volume,storage-volume...
Arguments
Required arguments
[-s|--size] size The size of each extent, in bytes. If not specified, the largest
available contiguous range of 4K byte blocks on the storage
volume is used to create the specified number of extents.
* - argument is positional.
Description
An extent is a slice (range of 4K byte blocks) of a storage volume. An extent can use the entire
capacity of the storage volume, or the storage volume can be carved into a maximum of 128
extents.
Extents are the building blocks for devices.
If the storage volume is larger than the virtual volume, create an extent the size of the desired
virtual volume. Do not create smaller extents, and then use different RAIDs to concatenate or
stripe the extents.
If the storage volume is smaller than the virtual volume, create a single extent per storage volume,
and then use devices to concatenate or stripe these extents into a larger device.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to this
problem.
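The slicing rules above reduce to simple arithmetic: an extent is a range of 4 KB blocks, and a storage volume can be carved into at most 128 extents (both from the description). The rounding behavior shown is an assumption for illustration.

```python
BLOCK = 4096        # an extent is a range of 4K-byte blocks
MAX_EXTENTS = 128   # documented per-storage-volume maximum

def extent_blocks(size_bytes: int) -> int:
    """Number of 4K blocks needed to cover size_bytes, rounded up
    (the rounding direction is an assumption)."""
    return -(-size_bytes // BLOCK)  # ceiling division

def can_carve(num_extents: int) -> bool:
    """Check the documented 128-extent-per-storage-volume limit."""
    return 1 <= num_extents <= MAX_EXTENTS
```

A 16 GB extent, for example, spans 4,194,304 4K blocks.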
Examples
In the following example:
l The ll -p **/storage-volumes command displays a list of all storage volumes.
l The cd command changes the context to the storage-volume context on cluster-1.
l The extent create command creates an extent from two claimed 16 GB storage volumes.
VPlexcli:/> ll -p **/storage-volumes
VPlexcli:/>cd /clusters/cluster-1/storage-elements/storage-volumes
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> extent create
Symm1723_1DC,Symm1723_1E0
The following example shows creating two extents on top of a thin-capable storage volume (with
the restriction that a thick extent will be created):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
The following example shows creating an extent that is smaller than the supporting storage volume
(with the restriction that a thick extent will be created):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
See also
l extent create
l extent destroy
extent destroy
Destroys one or more storage-volume extents.
Contexts
All contexts.
Syntax
extent destroy
[-f|--force]
[-s|--extents] context-path,context-path...
Arguments
Required arguments
Optional arguments
* - argument is positional.
Description
Destroys the specified extents.
Example
Destroy an extent:
See also
l extent create
extent summary
Displays a list of a cluster's unhealthy extents.
Contexts
All contexts.
In /clusters/cluster/storage-elements/extents context, command is summary.
Syntax
extent summary
[-c|--clusters] cluster,cluster...
Arguments
Optional arguments
Description
Displays a cluster's unhealthy extents (if any exist), the total number of extents by use, and
calculates the total extent capacity for this cluster.
An unhealthy extent has a non-nominal health state, operational status, or I/O status.
If the --clusters argument is not specified and the command is executed at or below a specific
cluster's context, information is summarized for only that cluster. Otherwise, the extents of all
clusters are summarized.
Extent Summary
Field Description
See also
l ds summary
l export port summary
l export storage-view summary
l local-device summary
l storage-volume summary
l virtual-volume provision
find
Finds all contexts matching a pattern and returns the set of matching contexts.
Contexts
All contexts.
Syntax
find
[-c | --contexts] = pattern [, pattern ...]
[-h | --help]
[--verbose]
Arguments
Required arguments
[-c | --contexts] = pattern [, Pattern for matching contexts you want to find.
pattern ...]
Optional arguments
[-h | --help] Displays the usage for this command.
[--verbose] Provides additional output during command
execution. This may not have any effect for some
commands.
Description
Use this command to find all contexts matching a pattern. When invoked interactively, the
command prints the contexts to the screen.
See Searching the context tree for more information about the find command and related
examples.
front-end-performance-stats start
Starts the collection of the read and write statistics with the I/O size and the logical block
addressing (LBA) information on the VPLEX virtual volumes through periodic polling.
Contexts
All contexts.
Syntax
front-end-performance-stats start
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This might not have any
effect for some commands.
Description
Starts the collection of the read and write statistics with the I/O size and the logical block
addressing (LBA) information on the VPLEX virtual volumes through periodic polling. This
command starts generating the performance data, which helps resolve I/O performance issues
with VPLEX. The statistics are available in the fe_perf_stats_<timestamp>.log file
at /var/log/VPlex/cli/.
Note: Run this command on each cluster to collect the front-end performance statistics. After
you run this command, the system continues to collect the front-end performance statistics
until you run the front-end-performance-stats stop command.
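Since each collection run writes a fe_perf_stats_<timestamp>.log under /var/log/VPlex/cli/ (path and file-name template from the description above), a script pulling the statistics off the management server might locate the newest one like this. The glob pattern is an assumption about the timestamp format.

```python
import glob
import os

def latest_stats_log(logdir="/var/log/VPlex/cli"):
    """Return the newest fe_perf_stats_<timestamp>.log in logdir, or
    None if collection has never run. The timestamp glob is assumed."""
    logs = glob.glob(os.path.join(logdir, "fe_perf_stats_*.log"))
    return max(logs, key=os.path.getmtime) if logs else None
```

Picking by modification time rather than parsing the timestamp keeps the sketch independent of the exact timestamp format.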
See also
l front-end-performance-stats stop
l front-end-performance-stats status
front-end-performance-stats status
Displays the status of front-end performance statistics collection.
Contexts
All contexts.
Syntax
front-end-performance-stats status
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This might not have any
effect for some commands.
Description
Provides the status of the front-end performance statistics collection. The details include the
name of the log file that contains the statistics, the period for which the statistics collection has
been running, the time when the directors were polled for information for the last time, and the
number of errors that occurred per director in the last two hours.
Note: Run this command on each cluster to view the status of the front-end performance
statistics collection.
See also
l front-end-performance-stats start
l front-end-performance-stats stop
front-end-performance-stats stop
Stops the front-end performance statistics collection.
Contexts
All contexts.
Syntax
front-end-performance-stats stop
Arguments
Optional arguments
-h | --help Displays the usage for this command.
--verbose Provides more output during command execution. This might not have any
effect for some commands.
Description
Stops the front-end performance statistics collection. The statistics are available in the
fe_perf_stats_<timestamp>.log file at /var/log/VPlex/cli/.
Note: Run this command on each cluster to stop the front-end performance statistics
collection.
See also
l front-end-performance-stats start
l front-end-performance-stats status
getsysinfo
Returns information about the current system.
Contexts
All contexts.
Syntax
getsysinfo
--output path-name
--linux
Arguments
Optional arguments
--output path-name Location and name of the output file. Default: /var/log/VPlex/cli/YY-sysinfo.txt
--linux Use this if the management server is running on a Linux system. Disables
the scsi tests since Linux systems lack a scsi command.
Description
Display information and send the output to a file.
The information is written in TCL format.
The output may include the following fields:
l Treating this tower like version D4: Denotes the system is Release 4.0 or later. Ignore this line.
l nn ports - unknown system type: The getsysinfo script looked for hardware prior to Release 4.0 and did not find it.
l System does NOT have comtcp enabled: Communication protocol used on Ethernet ports for connections to other clusters prior to Release 4.0. Ignore this line.
Example
Display information and send the output to a file:
See also
l cluster summary
l director firmware show-banks
l manifest version
l version
health-check
Displays a report indicating overall hardware/software health.
Contexts
All contexts.
Syntax
health-check
[-m|--highlevel]
[-f|--full]
--configuration
--back-end
--front-end
--limits
--cache
--consistency-group
--wan
--cluster_witness
Arguments
Optional arguments
[-m|--highlevel] Checks for major subcomponents with error conditions. Warnings are
ignored. Used for instantaneous, high level view of the health of the
VPLEX.
Default behavior if no other argument is specified.
Description
High level view of the health of the VPLEX.
Consolidates information from the following commands:
l version
l cluster status
l cluster summary
l connectivity validate-be
l connectivity validate-wan-com
l ds summary
l export storage-view summary
l virtual-volume summary
l storage-volume summary
l ll /clusters/**/system-volumes/
Example
Run a high-level (default) health check on a VPLEX Metro:
VPlexcli:/> health-check
Product Version: 5.1.0.00.00.10
Clusters:
---------
Cluster Cluster Oper Health Connected Expelled
Name ID State State
--------- ------- ----- -------- --------- --------
cluster-1 1 ok degraded True False
cluster-2 2 ok ok True False
cluster-1 Transition/Health Indications:
Device initializing
20 unhealthy Devices or storage-volumes
Meta Data:
----------
Cluster Volume Name Volume Type Oper State Health State Active
--------- ----------- ----------- ---------- ------------ ------
cluster-2 - - - - -
**This command is only able to check the health of the local cluster (cluster-1)'s RecoverPoint configuration, therefore if this system is a VPLEX Metro or VPLEX Geo repeat this command on the remote cluster to get the health of the remote cluster's RecoverPoint configuration.
Array Aware:
------------
Cluster Name Provider Address Connectivity Registered Total
Arrays Storage Pool
------------ -------- ------------- ------------ ---------- ------------
Hopkinton dsvea125 10.108.64.125 connected 2 13
Hopkinton dsvea123 10.108.64.123 connected 2 29
Providence dsvea124 10.108.64.124 connected 3 21
Error
Cluster cluster-1:
There are 8 storage volumes running in degraded mode.
Array: EMC-CLARiiON-APM00114102495
There are 8 storage volumes running in degraded mode.
First 4 storage volumes in degraded mode are:
VPD83T3:600601601dd028007a09da1b6427e111 is degraded ['degraded-
timeout', 'degraded-read-write-latencies']
VPD83T3:600601601dd028007fc9ec0e6427e111 is degraded ['degraded-read-
write-latencies']
VPD83T3:600601601dd0280080c9ec0e6427e111 is degraded ['degraded-
timeout', 'degraded-write-latency']
VPD83T3:600601601dd0280083c9ec0e6427e111 is degraded ['degraded-write-
latency']
Output to /home/service/vafadm/cli/health_check_full_scan.log
VPlexcli:/> health-check --limits
Product Version: 6.1.1.00.00.04
Product Type: Metro
WAN Connectivity Type: FC
Hardware Type: VS2
Cluster Size: 2 engines
Cluster TLA:
cluster-1: FNM00121500305
cluster-2: FNM00121300045
See also
l cluster status
l validate-system-configuration
help
Displays help on one or more commands.
Contexts
All contexts.
Syntax
help
[-i|--interactive]
[-G|--no-global]
[-n|--no-internal]
Arguments
Optional arguments
[-G|--no-global] Suppresses the list of global commands for contexts other than root
context.
[-n|--internal] Includes commands that are normally used for low-level debugging and development.
Description
If an argument is marked as required, it is always required. Additional arguments may be required
depending on the context in which the command is executed.
Example
Display only commands specific to the current context:
VPlexcli:/clusters/cluster-1> help -G
Commands inherited from parent contexts:
add cacheflush configdump expel forget shutdown summary unexpel
Commands specific to this context and below:
status verify
VPlexcli:/clusters/cluster-1> help -i
Welcome to Python 2.2! This is the online help utility.
.
.
.
help> topics
Here is a list of available topics. Enter any topic name to get more help.
ASSERTION DYNAMICFEATURES NONE TRACEBACKS
ASSIGNMENT ELLIPSIS NUMBERMETHODS TRUTHVALUE
.
.
.
help> EXPRESSIONS
------------------------------------------------------------------------
5.14 Summary
The following table summarizes the operator precedences in Python, from
lowest precedence (least binding) to highest precedence (most binding).
.
.
.
history
Displays or clears the command history list.
Contexts
All contexts.
Syntax
history
[-c|--clear]
[-n|--number] number
Arguments
Optional arguments
[-n|--number] number Displays only the last number commands in the history list.
Example
Display the last 8 commands executed in this CLI session:
VPlexcli:/> history 8
492 ll
493 cd d
494 cd device-migrations/
495 ll
496 cd
497 ds summary
498 export storage-view checkconfig
499 history 8
iscsi chap back-end add-credentials
Adds one or more credentials for back-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap back-end add-credentials
[-u|--username] username
[-s|--secret] secret
[-t|--targets] target [, target...]
[-c|--cluster] cluster-context
[--secured]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-t|--targets] target [, target...] * Specifies the IQNs or IP addresses of the targets for which to configure credentials.
[--secured] Prevents the secret from being stored as plain text in the log files.
Optional arguments
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - argument is positional.
Description
Adds one or more configuration credentials for back-end CHAP.
Note: This command is valid only on systems that support iSCSI devices.
Example
Add back-end CHAP credentials to targets on cluster 1:
See also
l iscsi chap back-end disable
l iscsi chap back-end enable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-credentials
l iscsi chap back-end remove-default-credential
l iscsi chap back-end set-default-credential
iscsi chap back-end disable
Disables back-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap back-end disable
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which back-end CHAP should be disabled.
Description
Disables back-end CHAP. Once CHAP is disabled on the back-end, all configured credentials will
be unused, but not deleted.
WARNING You will not be able to log in to storage arrays that enforce CHAP if you disable back-end CHAP that was previously configured.
Note: This command is valid only on systems that support iSCSI devices.
Example
Disable back-end CHAP on cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end enable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-credentials
l iscsi chap back-end remove-default-credential
l iscsi chap back-end set-default-credential
iscsi chap back-end enable
Enables back-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap back-end enable
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which back-end CHAP should be enabled.
Description
Enables back-end CHAP. Enabling CHAP on the back-end allows VPLEX to log in securely to storage arrays that also have CHAP configured. To use CHAP once it is enabled, CHAP credentials must be added for each storage array, or a default CHAP credential must be set.
Note: This command is valid only on systems that support iSCSI devices.
Example
Enable back-end CHAP on cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end disable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-credentials
l iscsi chap back-end remove-default-credential
l iscsi chap back-end set-default-credential
iscsi chap back-end list-credentials
Lists all configured back-end CHAP credentials.
Contexts
All contexts.
Syntax
iscsi chap back-end list-credentials
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster for which the configured back-end CHAP credentials should be listed.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
Lists all configured back-end CHAP credentials.
Note: This command is valid only on systems that support iSCSI devices.
Example
List configured back-end CHAP credentials for cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end disable
l iscsi chap back-end enable
l iscsi chap back-end remove-credentials
l iscsi chap back-end remove-default-credential
l iscsi chap back-end set-default-credential
iscsi chap back-end remove-credentials
Removes one or more configured credentials for back-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap back-end remove-credentials
[-t|--targets] target [, target...]
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Required arguments
[-t|--targets] target [, target...] * Specifies the IQNs or IP addresses of the targets whose credentials should be removed.
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the back-end CHAP credential should be removed.
* - argument is positional.
Description
Removes one or more configuration credentials for back-end CHAP.
Note: This command is valid only on systems that support iSCSI devices.
Example
Remove a configured back-end CHAP credential on a specified IP address on cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end disable
l iscsi chap back-end enable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-default-credential
l iscsi chap back-end set-default-credential
iscsi chap back-end remove-default-credential
Removes the default credential for back-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap back-end remove-default-credential
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the default back-end CHAP credential should be removed.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
Removes a default credential for back-end CHAP configuration.
Note: This command is valid only on systems that support iSCSI devices.
Example
Remove default credentials for back-end CHAP from ports on cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end disable
l iscsi chap back-end enable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-credentials
l iscsi chap back-end set-default-credential
iscsi chap back-end set-default-credential
Sets a default credential for back-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap back-end set-default-credential
[-u|--username] username
[-s|--secret] secret
[-c|--cluster] cluster-context
[--secured]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-u|--username] username * Specifies the username to use in the configured CHAP credentials.
[-s|--secret] secret * Specifies the secret to use in the configured CHAP credentials. The secret must be between 12 and 255 characters long, and can be composed of all printable ASCII characters.
[--secured] Prevents the secret from being stored as plain text in the log files.
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the default back-end CHAP credential should be set.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - positional argument
Description
Sets a default credential for back-end CHAP configuration. When back-end CHAP is enabled,
VPLEX will log into its targets using the default credential. If a specific target has a separate
credential configured, VPLEX will use that separate credential when attempting to log into the
target.
Note: This command is valid only on systems that support iSCSI devices.
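The secret constraints described above (12 to 255 characters, printable ASCII only) can be checked before running the command. A minimal sketch in Python; the function name and rule encoding are illustrative, not part of the product:

```python
def is_valid_chap_secret(secret: str) -> bool:
    """Check a CHAP secret against the documented constraints:
    12-255 characters, composed only of printable ASCII characters."""
    if not 12 <= len(secret) <= 255:
        return False
    # Printable ASCII spans 0x20 (space) through 0x7E (~).
    return all(0x20 <= ord(c) <= 0x7E for c in secret)
```

For example, a 12-character secret of printable characters passes, while a secret containing a tab or shorter than 12 characters does not.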
Example
Set a default credential for back-end CHAP configuration on cluster 1:
See also
l iscsi chap back-end add-credentials
l iscsi chap back-end disable
l iscsi chap back-end enable
l iscsi chap back-end list-credentials
l iscsi chap back-end remove-credentials
l iscsi chap back-end remove-default-credential
iscsi chap front-end add-credentials
Adds one or more credentials for front-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap front-end add-credentials
[-u|--username] username
[-s|--secret] secret
[-i|--initiators] initiator [, initiator...]
[-c|--cluster] cluster-context
[--secured]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-i|--initiators] initiator [, initiator...] * Specifies the IQNs of the initiators for which to configure credentials.
[--secured] Prevents the secret from being stored as plain text in the log files.
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the front-end CHAP credentials should be added.
* - argument is positional.
Description
Adds one or more configuration credentials for front-end CHAP.
Note: This command is valid only on systems that support iSCSI devices.
Example
Add configuration credentials for front-end CHAP to initiators on cluster 1:
See also
l iscsi chap front-end disable
l iscsi chap front-end enable
l iscsi chap front-end list-credentials
l iscsi chap front-end remove-credentials
l iscsi chap front-end remove-default-credential
l iscsi chap front-end set-default-credential
iscsi chap front-end disable
Disables front-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap front-end disable
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which front-end CHAP should be disabled.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
Disables front-end CHAP. Once CHAP is disabled on the front-end, all configured credentials will be unused, but not deleted.
Note: This command is valid only on systems that support iSCSI devices.
Example
Disable front-end CHAP on cluster 1:
See also
l iscsi chap front-end add-credentials
l iscsi chap front-end enable
l iscsi chap front-end list-credentials
l iscsi chap front-end remove-credentials
l iscsi chap front-end remove-default-credential
l iscsi chap front-end set-default-credential
iscsi chap front-end enable
Enables front-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap front-end enable
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which front-end CHAP should be enabled.
Description
Enables front-end CHAP. Enabling CHAP on the front-end allows initiators to log in securely. To use CHAP once it is enabled, CHAP credentials must be added for each initiator, or a default CHAP credential must be set.
WARNING Initiators that do not have a credential configured will no longer be able to log in once CHAP is enabled.
Note: This command is valid only on systems that support iSCSI devices.
Example
Enable front-end CHAP on cluster 1:
See also
l iscsi chap front-end add-credentials
l iscsi chap front-end disable
l iscsi chap front-end list-credentials
l iscsi chap front-end remove-credentials
l iscsi chap front-end remove-default-credential
l iscsi chap front-end set-default-credential
iscsi chap front-end list-credentials
Lists all configured front-end CHAP credentials.
Contexts
All contexts.
Syntax
iscsi chap front-end list-credentials
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster for which the configured front-end CHAP credentials should be listed.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
Lists all configured front-end CHAP credentials.
Note: This command is valid only on systems that support iSCSI devices.
Example
List all configured front-end CHAP credentials for cluster 1:
See also
l iscsi chap front-end add-credentials
iscsi chap front-end remove-credentials
Removes one or more configured credentials for front-end CHAP.
Contexts
All contexts.
Syntax
iscsi chap front-end remove-credentials
[-i|--initiators] initiator [, initiator]
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Required arguments
[-i|--initiators] initiator [, initiator...] * Specifies the IQNs of the initiators whose credentials should be removed.
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the front-end CHAP credentials should be removed.
* - positional argument
Description
Removes one or more configuration credentials for front-end CHAP.
Note: This command is valid only on systems that support iSCSI devices.
Example
Remove front-end CHAP credentials from initiators on cluster 1:
See also
l iscsi chap front-end add-credentials
l iscsi chap front-end disable
l iscsi chap front-end enable
l iscsi chap front-end list-credentials
l iscsi chap front-end remove-default-credential
l iscsi chap front-end set-default-credential
iscsi chap front-end remove-default-credential
Removes the default credential for front-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap front-end remove-default-credential
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the default front-end CHAP credential should be removed.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
Removes the default credential for front-end CHAP configuration.
Note: This command is valid only on systems that support iSCSI devices.
Example
Remove a default front-end CHAP credential configuration from cluster 1:
See also
l iscsi chap front-end add-credentials
iscsi chap front-end set-default-credential
Sets a default credential for front-end CHAP configuration.
Contexts
All contexts.
Syntax
iscsi chap front-end set-default-credential
[-u|--username] username
[-s|--secret] secret
[-c|--cluster] cluster-context
[--secured]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-u|--username] username * Specifies the username to use in the configured CHAP credentials.
[-s|--secret] secret * Specifies the secret to use in the configured CHAP credentials. The secret must be between 12 and 255 characters long, and can be composed of all printable ASCII characters.
[--secured] Prevents the secret from being stored as plain text in the log files.
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which the default front-end CHAP credential should be set.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - positional argument
Description
Sets a default credential for front-end CHAP configuration. When front-end CHAP is enabled, any initiator without a separate credential will be able to log in using this default credential. A specific initiator that has a separate credential configured will not be able to log in using the default credential.
Note: This command is valid only on systems that support iSCSI devices.
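The resolution rule described above amounts to a simple lookup: an initiator with its own configured credential must use it, and only initiators without one may fall back to the default. A sketch in Python; the names and data shapes here are illustrative, not part of the product:

```python
def can_log_in(initiator, secret, per_initiator, default_secret):
    """Model of front-end CHAP credential resolution.

    per_initiator maps initiator IQNs to their configured secrets;
    default_secret is the cluster-wide default (or None if unset).
    """
    if initiator in per_initiator:
        # A separately configured initiator cannot use the default.
        return secret == per_initiator[initiator]
    return default_secret is not None and secret == default_secret
```

With a default secret set, an unconfigured initiator authenticates with the default, while a configured initiator is rejected if it presents the default instead of its own secret.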
Example
Set a default credential for front-end CHAP configuration on cluster 1:
See also
l iscsi chap front-end add-credentials
l iscsi chap front-end disable
l iscsi chap front-end enable
l iscsi chap front-end list-credentials
l iscsi chap front-end remove-credentials
l iscsi chap front-end remove-default-credential
iscsi check-febe-connectivity
Checks the front-end and back-end configurations for iSCSI connectivity.
Contexts
All contexts.
Note: This command is not supported in VPLEX.
Syntax
iscsi check-febe-connectivity
[-c|--cluster] cluster-context
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster-context Specifies the context path of the cluster at which connectivity should be checked.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may
not have any effect for some commands.
Description
The command checks whether the front-end and back-end configurations for iSCSI connectivity
are valid and meet best practices for VPLEX iSCSI configuration.
Note: This command is valid only on systems that support iSCSI devices.
Example
Shows port groups with overlapping subnets:
See also
l iscsi sendtargets add
l iscsi sendtargets list
l iscsi sendtargets rediscover
l iscsi sendtargets remove
iscsi isns add
Adds one or more server addresses to the list of iSNS servers.
Contexts
All contexts.
Syntax
iscsi isns add
[-s|--sockets] server-sockets [, server-sockets,...]
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Required arguments
[-s|--sockets] server-sockets * Specifies the IP address(es), with optional :port, of the iSNS server(s). If unspecified, the port defaults to 3205.
Optional arguments
[-c|--cluster] cluster-context Context path of the cluster on which the iSNS server should be added.
* - argument is positional.
Description
Adds one or more server addresses to the list of iSNS servers.
Note: This command is only valid on systems that support iSCSI devices.
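Server sockets take the form address[:port], with the port defaulting to 3205 when omitted. A hypothetical helper showing how such a socket string splits (the function name is illustrative, not part of the product):

```python
def parse_server_socket(sock: str, default_port: int = 3205):
    """Split an 'address[:port]' iSNS server socket string.
    The port defaults to 3205 when omitted."""
    # Naive split on ':' -- assumes IPv4 addresses or hostnames.
    host, sep, port = sock.partition(":")
    return host, int(port) if sep else default_port
```

For example, "192.168.100.2" resolves to port 3205 while "192.168.101.2:3000" keeps its explicit port.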
Example
Add two iSNS servers with IP addresses of 192.168.100.2 and 192.168.101.2:
Add two iSNS server addresses with specified ports to the list of iSNS servers.
Attempt to add two iSNS servers that are already configured with the given IP addresses.
See also
l iscsi isns list
l iscsi isns remove
iscsi isns list
Displays a list of configured iSNS servers.
Contexts
All contexts.
Syntax
iscsi isns list
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Optional Arguments
[-c|--cluster] cluster-context Context path of the cluster for which existing iSNS servers should be listed.
Description
Displays a list of configured iSNS servers on the specified cluster.
Note: This command is only valid on systems that support iSCSI devices. If a server is not
configured on all directors, a warning message indicates the servers are missing from specific
directors.
Example
List the iSNS servers on cluster-1.
10.10.101.2 3260
Warning: Some directors are missing iSNS Servers:
Director Address Port
---------- ----------- ----
director-2 10.10.101.4 3260
director-2 10.10.101.2 3260
See also
l iscsi isns add
l iscsi isns remove
iscsi isns remove
Removes one or more server addresses from the list of iSNS servers.
Contexts
All contexts.
Syntax
iscsi isns remove
[-s|--sockets] server-sockets [, server-sockets,...]
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Required arguments
[-s|--sockets] server-sockets * Specifies the IP address(es), with optional :port, of the iSNS server(s) to be removed. If unspecified, the port defaults to 3205.
Optional arguments
[-c|--cluster] cluster-context Context path of the cluster from which the iSNS server should be removed.
* - argument is positional.
Description
Removes one or more server addresses from the list of iSNS servers.
Note: This command is only valid on systems that support iSCSI devices.
Example
Remove two iSNS servers with IP addresses of 192.168.100.2 and 192.168.101.2:
Attempt to remove two iSNS servers that have already been removed.
See also
l iscsi isns add
l iscsi isns list
iscsi sendtargets add
Adds one or more iSCSI target portals to the list of sendtargets.
Contexts
All contexts.
Syntax
iscsi sendtargets add
[-s|--sockets] sendtarget-sockets [, sendtarget-sockets,...]
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Required arguments
[-s|--sockets] sendtarget-sockets * Specifies the IP address(es), with optional :port, of the iSCSI sendtargets. If unspecified, the port defaults to 3260.
Optional arguments
[-c|--cluster] cluster-context Context path of the cluster to which the target portals should be added.
* - argument is positional.
Description
After an iSCSI target portal is added, discovery takes place immediately. Thereafter, periodic
discoveries are automatically performed at 30-minute intervals.
Note: This command is only valid on systems that support iSCSI devices.
Example
Add two target portals with IP addresses of 192.168.100.2 and 192.168.101.2:
Add two target portals with IP addresses and specified ports, 192.168.100.2:3000 and
192.168.101.2:3001.
Attempt to add two target portals that are already configured with the given IP addresses.
See also
l iscsi check-febe-connectivity
l iscsi sendtargets list
l iscsi sendtargets rediscover
l iscsi sendtargets remove
iscsi sendtargets list
Lists the iSCSI target portals available on the cluster.
Contexts
All contexts.
Syntax
iscsi sendtargets list
[-c|--cluster] cluster-context
[-h|--help]
[--force]
Arguments
Optional arguments
[-c|--cluster] cluster-context Context path of cluster for which existing target portals should be listed. Defaults to local cluster if not specified.
[-h|--help] Displays command line help.
Description
Lists the iSCSI target portals available on at least one director. If a portal is not available on all
directors, a warning message indicates which portals are missing from which directors.
Note: This command is only valid on systems that support iSCSI devices.
Example
List target portals with -c option:
See also
l iscsi check-febe-connectivity
l iscsi sendtargets add
l iscsi sendtargets rediscover
l iscsi sendtargets remove
iscsi sendtargets rediscover
Issues a rediscovery on all sendtargets on the cluster.
Contexts
All contexts.
Syntax
iscsi sendtargets rediscover
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Optional arguments
[-c|--cluster] cluster-context Context path of cluster for which existing target portals should be rediscovered. Defaults to local cluster if not specified.
Description
Issues a rediscovery on all sendtargets on the cluster.
Note: This command is only valid on systems that support iSCSI devices.
Example
Rediscover all sendtargets on cluster 1:
See also
l iscsi check-febe-connectivity
iscsi sendtargets remove
Removes one or more target portals from the list of sendtargets.
Contexts
All contexts.
Syntax
iscsi sendtargets remove
[-s|--sockets] sendtarget-sockets [, sendtarget-sockets ,...]
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Required arguments
[-s|--sockets] sendtarget-sockets * Specifies the IP address(es), with optional :port, of the iSCSI sendtargets. If unspecified, the port defaults to 3260.
Optional arguments
[-c|--cluster] cluster-context Context path of the cluster from which the target portals should be removed.
[-h|--help] Displays command line help.
* - argument is positional.
Description
Removes one or more target portals from the list of sendtargets on the cluster.
Note: This command is valid only on systems that support iSCSI devices.
Example
Remove two target portals with IP addresses of 192.168.100.2 and 192.168.101.2:
Remove two target portals with IP addresses and specified ports, 192.168.100.2:3000 and
192.168.101.2:3001.
Attempt to remove two target portals that are not configured with the given IP addresses.
See also
l iscsi check-febe-connectivity
l iscsi sendtargets add
l iscsi sendtargets list
l iscsi sendtargets rediscover
iscsi targets list
Displays the list of iSCSI targets that are discovered on the cluster.
Contexts
All contexts.
Syntax
iscsi targets list
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Optional arguments
[-c|--cluster] cluster-context Context path of cluster for which existing targets should be listed. Defaults to local cluster if not specified.
[-h|--help] Displays command line help.
Description
Displays the list of iSCSI targets that are discovered on the cluster.
Note: This command is valid only on systems that support iSCSI devices.
Example
List iSCSI targets for cluster 1:
See also
l iscsi targets logout
iscsi targets logout
Logs out of iSCSI targets.
Contexts
All contexts.
Syntax
iscsi targets logout
[-t|--targets] target [, target...]
[-c|--cluster] cluster-context
[-h|--help]
Arguments
Required arguments
[-t|--targets] target [, target...] * Specifies the IQNs of the iSCSI targets.
Optional arguments
[-c|--cluster] cluster-context Context path of cluster from which the target portals should be logged out.
[-h|--help] Displays command line help.
* - argument is positional.
Description
Logs out of iSCSI targets.
Note: This command is valid only on systems that support iSCSI devices.
Example
Log out of a specified target on cluster 1:
See also
l iscsi targets list
license install
Installs license to use the product feature.
Contexts
All contexts.
Syntax
license install
[-l|--license-file] license-file
[--delete-file]
[-h|--help]
[--verbose]
Arguments
Required Arguments
[-l|--license-file] license-file Specifies the path to the license file, including the license file name to be validated and installed.
Optional Arguments
[--delete-file] Deletes the license file at the specified path after successful
installation.
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
Description
The license install command validates and installs the license specified by the given license
file. The license file must have been previously copied to the management server on which this
command is run.
See Also
l license reset
l license show
license reset
Removes any installed license for the product.
Contexts
All contexts.
Syntax
license reset
[-f|--force]
[-h|--help]
[--verbose]
Arguments
Optional Arguments
[-f|--force] Removes the license without prompting for confirmation.
Description
Note: Before executing license reset, confirm with customer support to ensure resetting
is necessary.
The license reset command removes the licenses for all installed product features on the local cluster.
If you do not specify the -f option, the license will be removed after a command line prompt, such
as Are you sure you wish to remove the installed product license? (Yes/
No).
When the license is removed, the message License removed. is displayed.
See Also
l license install
l license show
license show
Displays details about the installed license or license specified by the given license file.
Contexts
All contexts.
Syntax
license show
[-l|--license-file] license-file
[-r|--raw]
[-h|--help]
[--verbose]
Arguments
Optional Arguments
[-l|--license-file] license-file Specifies the path to the license file whose details are to be shown.
[-r|--raw] Displays the raw (unformatted) license contents.
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This may
not have any effect for some commands.
Description
The license show command displays license features that are installed on the local cluster. For
example, an unlicensed product would return a message similar to License is not
installed. A licensed product returns output containing information about various features, if
installed (for example, migration and capacity).
This command also validates and displays usage of licensed features. In addition, the command
displays warnings if usage exceeds the license capacity. The following attributes provide more
details about the feature usage quantity.
Note: Usage Intelligence is displayed only for the VPLEX_LOCAL_CAPACITY and the
VPLEX_METRO_CAPACITY licenses.
l Usage: Displays the numeric part of the feature usage quantity.
l Usage Unit: Displays the unit part of the feature usage quantity.
For example, you may see warnings indicating that usage of a particular feature has exceeded its
licensed capacity, as in the following output:
WARNING:
The local capacity usage '6TB' has exceeded the licensed capacity '5TB'
The metro capacity usage '2TB' has exceeded the licensed metro capacity '1TB'
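The capacity check behind those warnings amounts to a simple comparison. A sketch in Python; the function name is hypothetical and the message wording only mirrors the sample output above:

```python
def capacity_warning(feature: str, usage_tb: int, licensed_tb: int):
    """Return a warning line when a feature's usage exceeds its
    licensed capacity; return None when usage is within the license."""
    if usage_tb > licensed_tb:
        return (f"The {feature} usage '{usage_tb}TB' has exceeded "
                f"the licensed capacity '{licensed_tb}TB'")
    return None
```

For instance, 6 TB of local-capacity usage against a 5 TB license produces a warning, while 4 TB does not.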
See Also
l license install
l license reset
local-device create
Creates a new local-device.
Contexts
All contexts.
Syntax
local-device create
[-d|--stripe-depth] depth
[-n|--name] name
[-g|--geometry] {raid-0|raid-1|raid-c}
[-e|--extents] context-path,context-path...
[-s|--source-leg] context-path
[-f|--force]
Arguments
Required arguments
[-n|--name] name * Name for the new device. Must be unique across all clusters. Devices on different clusters that have the same name cannot be combined into a distributed device.
Note: If this device will have another device attached (using the device attach-mirror command to create a RAID-1), the name of the resulting RAID-1 is the name given here plus a timestamp. Names in VPLEX are limited to 63 characters. The timestamp consumes 16 characters. Thus, if this device is intended as the parent device of a RAID-1, the device name must not exceed 47 characters.
[-g|--geometry] {raid-0|raid-1|raid-c} * Geometry for the new device. Valid values are raid-0, raid-1, or raid-c.
CAUTION Use this command to create a RAID 1 device only if:
- None of the legs contains data that must be preserved
- The resulting device will be initialized using tools on the host
- The resulting device will be added as a mirror to another device
[-e|--extents] context-path,context-path... * List of one or more claimed extents to be added to the device. Can also be other local devices (to create a device of devices).
Optional arguments
[-d|--stripe-depth] depth Required if --geometry is raid-0. Stripe depth must be:
[-s|--source-leg] context-path When the geometry argument is raid-1, picks one of the extents specified by the --extents argument to be used as the source data image for the new device. The command copies data from the --source-leg to the other legs of the new device.
[-f|--force] Creates a RAID 1 device even if no --source-leg is specified.
* - argument is positional.
Description
A device is configured from one or more extents in a RAID 1, RAID 0, or concatenated RAID C
configuration.
The block sizes of the supporting extents must be the same (4 KB) and determine the local-device block size.
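Since the block size is fixed at 4 KB, a device's capacity follows directly from its block count, as the ll output later in this section shows (base0's 262144 blocks equal 1 GB). A minimal sketch:

```python
BLOCK_SIZE = 4 * 1024  # 4 KB, fixed by the supporting extents

def device_capacity_bytes(block_count: int) -> int:
    """Capacity of a local device: block count times the 4 KB block size."""
    return block_count * BLOCK_SIZE
```

For base0, 262144 blocks times 4 KB is exactly 1 GB.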
When creating a device with RAID 1 geometry, this command prints a warning and asks for
confirmation.
WARNING If the --source-leg argument is not specified, this command does not initialize
or synchronize the legs of a RAID 1 device. Because of this, a RAID 1 device created by this
command does not guarantee that consecutive reads of the same block return the same data if
the block has never been written.
To create a RAID 1 device when one leg of the device contains data that must be preserved, use
the --source-leg argument or the device attach-mirror command to add a mirror to the
leg.
By default, automatic device rebuilds are enabled on all devices. For configurations with limited
bandwidth between clusters, it may be useful to disable automatic rebuilds.
Use the set command to enable/disable automatic rebuilds on the distributed device. The rebuild
setting is immediately applied to the device.
l Set rebuild-allowed to true to start or resume a rebuild if the mirror legs are out of sync.
l Set rebuild-allowed to false to stop any rebuild in progress.
When automatic rebuild is re-enabled on a device where it has been disabled, the rebuild starts
again from the place where it stopped.
Examples
In the following example, the local-device create command creates a RAID 1 device from two
extents, extent_lun_1_1 and extent_lun_2_1, in which:
l extent_lun_2_1 is the same size or larger than extent_lun_1_1
l extent_lun_1_1 is the source leg of the new device
VPlexcli:/clusters/cluster-1/storage-elements/extents> ll
Name StorageVolume Capacity Use
--------------------- ------------- -------- -------
.
.
.
extent_Symm1852_AAC_1 Symm1852_AAC 16G claimed
extent_Symm1852_AB0_1 Symm1852_AB0 16G claimed
extent_Symm1852_AB4_1 Symm1852_AB4 16G claimed
extent_Symm1852_AB8_1 Symm1852_AB8 16G claimed
VPlexcli:/clusters/cluster-1/storage-elements/extents> local-device
create --name TestDevCluster1 --geometry raid-1 --extents /clusters/
cluster-1/storage-elements/extents/extent_Symm1852_AAC_1,/clusters/
cluster-1/storage-elements/extents/extent_Symm1852_AB0_1
VPlexcli:/clusters/cluster-2/storage-elements/extents> cd
VPlexcli:/> ll -p **/devices
/clusters/cluster-1/devices:
Name            Operational Health Block   Block Capacity Geometry Visibility Transfer Virtual
                Status      State  Count   Size                               Size     Volume
--------------- ----------- ------ ------- ----- -------- -------- ---------- -------- ---------
TestDevCluster1 ok          ok     4195200 4K    16G      raid-1   local      2M       -
base0           ok          ok     262144  4K    1G       raid-0   local      -        base0_vol
base1           ok          ok     262144  4K    1G       raid-0   local      -        base1_vol
In the above example, if both extents were thin-capable and from the same storage-array
family, the RAID 1 device would be thin-capable too. The virtual volume created on top of such a
device can be thin-enabled.
Note: The virtual volume must be built on top of a local RAID 0 device or a RAID 1 device. If
you try to create a RAID C local-device with multiple children, or a device that incorporates
multiple extents, the created local device is not thin-capable.
VPlexcli:/clusters/cluster-1/storage-elements/extents> local-device
create --geometry raid-c -e
extent_TOP_101_1,extent_TOP_102_1 --name myLocalDevice
VPlexcli:/clusters/cluster-1/storage-elements/extents>
See also
l device attach-mirror
l local-device destroy
l local-device summary
local-device destroy
Destroys existing local-devices.
Contexts
All contexts.
Syntax
local-device destroy
[-f|--force]
[-d|--devices] context-path,context-path...
Arguments
Required arguments
[-d|--devices] context-path,context-path... * List of one or more device(s) to destroy.
Optional arguments
[-f|--force] Force the destruction of the devices without asking
for confirmation.
* - argument is positional.
Description
The device must not be hosting storage or have a parent device.
Example
See also
l local-device create
l local-device summary
local-device summary
Displays unhealthy local devices and a summary of all local devices.
Contexts
All contexts.
In /clusters/cluster/devices context, command is summary.
Syntax
local-device summary
[-c|--clusters] cluster,cluster...
Arguments
Optional arguments
[-c|--clusters] cluster,cluster... Display information only for the specified clusters.
Description
Displays unhealthy local devices and a summary of all local devices. Unhealthy devices have non-
nominal health state, operational status, or service-status.
If the --clusters argument is not specified and the command is executed at or below a /
clusters/cluster context, information for only that cluster is displayed.
Field            Description
Health
devices          Number of devices in the cluster.
Capacity
devices w/ space Of the total number of devices in the cluster, the number with available space.
Example
Display local devices for a specified cluster:
See also
l ds summary
l export port summary
l export storage-view summary
l extent summary
l storage-volume summary
l virtual-volume provision
Arguments
Optional arguments
[-s|--source] id ID of the source log to be filtered. Use the log source list command to
display the list of source logs and their IDs.
[-t|--threshold] [<|>|=]0-7 Severity of the events to write to the new log. Messages are
categorized into 8 severities (0 - 7), with 0 being the most severe:
7 - debug (debug-level messages)
6 - info (informational messages)
5 - notice (normal but significant messages)
4 - warning (warning messages)
3 - err (error messages)
2 - crit (critical messages)
1 - alert (messages that must be handled immediately)
0 - emerg (messages indicating the system is unusable)
The default modifier is >.
Description
Log filters define criteria for the destination of specific log data. A filter is placed in an ordered list,
and filters see received events in the order they sit in the list (shown by the log filter list
command).
By default, filters consume received events so that a matching filter stops the processing of the
event. Use the --no-consume argument to create a filter that allows processing of matching
events to continue.
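The threshold spec can be sketched in plain Python. Only the spec format (an optional <, >, or = modifier followed by a 0 - 7 code, with > as the default) comes from this guide; treating the modifier as a plain numeric comparison on the severity code is our assumption, not documented behavior:

```python
# Hypothetical sketch of threshold parsing; not VPLEX code. The numeric
# comparison direction is an assumption - the guide defines only the
# spec format and the severity codes (0 most severe .. 7 debug).
def matches(severity: int, spec: str) -> bool:
    """Return True if an event's severity code satisfies a threshold spec."""
    if spec[0] in "<>=":
        op, threshold = spec[0], int(spec[1:])
    else:
        op, threshold = ">", int(spec)   # default modifier is >
    if op == "<":
        return severity < threshold
    if op == "=":
        return severity == threshold
    return severity > threshold

print(matches(5, ">4"))  # True
print(matches(3, "3"))   # False: the default > modifier is applied
```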
Example
Filter out (hide) all messages with the string test in them:
Filter all messages into the events log generated by the logserver component with the string Test:
See also
l log filter destroy
l log filter list
Required arguments
[-f|--filter] filter ID of filter to delete.
Description
The filter is removed from the filter stack.
Use the log filter list command to display the filters configured on the system, and
associated IDs of those filters.
Example
See also
l log filter create
l log filter list
Syntax
log filter list
Description
The number printed beside each filter serves as both an identifier for the log filter destroy
command as well as the order in which each respective filter will see an event.
Example
See also
l log filter create
l log filter destroy
Required arguments
[-s|--source] host:port * IP address and port of the log source to be added. IP
addresses of the VPLEX hardware components are listed in the
VPLEX Installation and Setup Guide.
[-p|--password] password The password to use for authenticating to the source.
Optional arguments
[-f|--failover-source] host:port IP address and port of the failover source to be added.
* argument is positional.
Description
CAUTION For use by Dell EMC personnel only.
Example
See also
l log source destroy
l log source list
Required arguments
[-s|--source] host:port IP address and port of the log source to destroy. IP addresses
of the VPLEX hardware components are listed in the VPLEX Installation and Setup Guide.
Description
CAUTION For use by Dell EMC personnel only.
Example
See also
l log source create
l log source list
See also
l log filter create
l log source create
logging-volume add-mirror
Adds a logging volume mirror.
Contexts
All contexts.
Syntax
logging-volume add-mirror
[-v|--logging-volume] logging-volume
[-m|--mirror] {name|context-path}
Arguments
Optional arguments
[-v|--logging-volume] logging-volume Logging volume to which to add the mirror.
[-m|--mirror] {name|context-path} The name or context path of the device or storage-volume
extent to add as a mirror. Must be a top-level device or a storage-volume extent.
See also
l logging-volume create
l logging-volume destroy
logging-volume create
Creates a new logging volume in a cluster.
Contexts
All contexts.
Syntax
logging-volume create
[-n|--name] name
[-g|--geometry] {raid-0|raid-1}
[-e|--extents] context-path,context-path...
[-d|--stripe-depth] depth
Arguments
Required arguments
[-n|--name] name * Name for the new logging volume.
[-g|--geometry] * Geometry for the new volume.
{raid-0|raid-1}
[-e|--extents] context-path,context-path... * List of one or more storage-volume extents to
use to create the logging volume. Must not be empty, and must contain storage-volume
extents that are all at the specified cluster. Entries must be separated by commas.
Optional arguments
[-d|--stripe-depth] depth Required if --geometry is raid-0. Stripe depth must be:
l Greater than zero, but not greater than the number of blocks of
the smallest element of the RAID 0 device being created
l A multiple of 4 K bytes
A depth of 32 means 128 K (32 x 4 K) is written to the first disk, then
the next 128 K is written to the next disk.
Best practice regarding stripe depth is to follow the best practice of
the underlying array.
Concatenated RAID devices are not striped.
* - argument is positional.
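The depth rules and the depth-of-32 example above can be sketched as follows (plain Python, not a CLI command; the helper names are ours):

```python
# Sketch of the stripe-depth rules above. Depth is expressed in 4 K
# blocks: a depth of 32 writes 128 K to each disk in turn.
BLOCK_SIZE = 4 * 1024  # 4 K bytes

def stripe_bytes(depth_blocks: int) -> int:
    """Bytes written to one disk before striping moves to the next."""
    return depth_blocks * BLOCK_SIZE

def depth_ok(depth_blocks: int, smallest_element_blocks: int) -> bool:
    """Depth must be greater than zero and no larger than the smallest element."""
    return 0 < depth_blocks <= smallest_element_blocks

print(stripe_bytes(32) // 1024)  # 128 (K), matching the example above
```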
Description
Creates a logging volume. The new logging volume is immediately available for use with
distributed-devices.
A logging volume is required on each cluster in VPLEX Metro configurations. Each logging volume
must be large enough to contain one bit for every page of distributed storage space
(approximately 10 GB of logging volume space for every 160 TB of distributed devices).
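The sizing guideline above works out to a simple ratio; a quick sketch in plain Python (the helper name is ours, not a CLI name):

```python
# Sketch of the logging-volume sizing guideline stated above:
# roughly 10 GB of logging-volume space per 160 TB of distributed
# devices (one bit for every page of distributed storage).
def logging_volume_gb(distributed_tb: float) -> float:
    """Approximate logging-volume space required, in GB."""
    return distributed_tb * 10.0 / 160.0

print(logging_volume_gb(160))  # 10.0
print(logging_volume_gb(320))  # 20.0
```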
Logging volumes experience a large amount of I/O during and after link outages. Best practice is to
stripe each logging volume across many disks for speed, and to have a mirror on another fast disk.
To create a logging volume, first claim the storage volumes that will be used, and create extents
from those volumes.
l Use the ll /clusters/cluster/storage-elements/storage-volumes command to
display the available storage volumes on the cluster.
l Use the storage-volume claim -n storage-volume_name command to claim one or
more storage volumes.
l Use the extent create -d storage-volume_name, storage-volume_name command to
create an extent to use for the logging volume.
Repeat this step for each extent to be used for the logging volume.
Field Description
/components context
/segments context
Example
--------------------------------------------
-------------------------------------------------------- -------- -------
--------------------------------------------
allocated-c1_dr1ActC1_softConfig_CHM_C1_0000 1084 17
allocated for
c1_dr1ActC1_softConfig_CHM_C1_0000
allocated-c1_dr1ActC1_softConfig_CHM_C1_0001 1118 17
allocated for
c1_dr1ActC1_softConfig_CHM_C1_0001
.
.
.
allocated-r0_deviceTgt_C2_CHM_0001 2077 17
allocated for r0_deviceTgt_C2_CHM_0001
allocated-r1_mirrorTgt_C1_CHM_0001 2060 17
allocated for r1_mirrorTgt_C1_CHM_0001
free-1057 1057 10
free
free-2094 2094 3930066
free
free-40 40 2
free
free-82 82 2
free
VPlexcli:/clusters/cluster-1/system-volumes/c1_log_vol>
See also
l extent create
l logging-volume add-mirror
l logging-volume destroy
l storage-volume claim
logging-volume detach-mirror
Detaches a mirror from a logging volume.
Contexts
All contexts.
Syntax
logging-volume detach-mirror
[-m|--mirror] mirror
[-v|--logging-volume] logging-volume
[-s|--slot] slot-number
[-h|--help]
[--verbose]
Arguments
Optional arguments
[-m|--mirror] mirror * Specifies the name or context path of the logging volume mirror
to detach. If you specify the mirror, do not specify the slot
number.
[-v|--logging-volume] Specifies the name of the logging volume from which to detach
logging-volume the mirror.
[-s|--slot] slot-number Specifies the slot number of the mirror to detach. If you specify
the slot number, do not specify the mirror.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - argument is positional.
Description
This command detaches a mirror from a logging-volume. The logging-volume must have a RAID 1
geometry, and the mirror must be a direct child of the logging-volume.
You must specify the --slot or --mirror option but not both.
To detach a mirror from a component of a logging-volume use the device detach-mirror
command.
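The mutual-exclusion rule above (--slot or --mirror, never both, never neither) amounts to an exclusive-or check. A sketch in plain Python, with a made-up function name rather than anything from the CLI:

```python
# Hypothetical argument check; not VPLEX code. Exactly one of the two
# selectors must be supplied, mirroring the rule stated above.
def validate_detach_args(mirror=None, slot=None):
    if (mirror is None) == (slot is None):
        raise ValueError("specify exactly one of --mirror or --slot")

validate_detach_args(mirror="extent_CLARiiON1389_LUN_00023_1")  # accepted
```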
Example
Lists the attributes of the logging volume:
VPlexcli:/clusters/Hopkinton/system-volumes> ll logging_vol/
/clusters/Hopkinton/system-volumes/logging_vol:
Attributes:
Name Value
-------------------------------- --------------
application-consistent false
biggest-free-segment-block-count 2620324
block-count 2621440
block-size 4K
capacity 10G
component-count 1
free-capacity 10G
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
provision-type legacy
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
stripe-depth -
supporting-device logging
system-id logging
transfer-size 128K
volume-type logging-volume
Contexts:
Name Description
---------- -------------------------------------------------------------------
components The list of components that support this logging-volume.
segments Shows what parts of the logging volume are assigned to log
changes on distributed-device legs.
VPlexcli:/clusters/Hopkinton/system-volumes> ll logging_vol/components/
/clusters/Hopkinton/system-volumes/logging_vol/components:
Name                            Slot   Type   Operational Health Capacity
                                Number        Status      State
------------------------------- ------ ------ ----------- ------ --------
extent_CLARiiON1389_LUN_00023_1 0      extent ok          ok     10G
See also
l logging-volume add-mirror
l logging-volume create
l logging-volume destroy
logging-volume destroy
Destroys an existing logging volume.
Contexts
All contexts.
Syntax
logging-volume destroy
[-v|--logging-volume] logging-volume
Arguments
Required arguments
[-v|--logging-volume] logging-volume * Name of logging volume to destroy.
* - argument is positional.
Description
The volume to be destroyed must not be currently used to store block write logs for a distributed-
device.
Example
See also
l logging-volume add-mirror
l logging-volume create
l logging-volume detach-mirror
logical-unit forget
Forgets the specified logical units (LUNs).
Contexts
All contexts.
Syntax
logical-unit forget
[-s|--forget-storage-volumes]
[-u|--logical-units] context-path,context-path,...
Arguments
Required arguments
[-u|--logical-units] context-path,context-path... List of one or more LUNs to forget.
Optional arguments
[-s|--forget-storage-volumes] If a LUN has an associated storage-volume, forget it
AND the associated storage-volume.
Description
Forget one or more logical units (LUNs). Optionally, forget the storage volume if one is configured
on the LUN. This command attempts to forget each LUN in the list specified, logging/displaying
errors as it goes.
A logical unit can only be forgotten if it has no active paths. LUNs can be remembered even if a
cluster is not currently in contact with them. This command tells the cluster that the specified
LUNs are not coming back and therefore it is safe to forget about them.
If a specified LUN has an associated storage-volume and --forget-storage-volumes is not specified, that LUN is skipped (is not forgotten).
Use the --verbose argument to print a message for each volume that could not be forgotten.
Use the --forget-storage-volumes argument to forget the logical unit AND its associated
storage-volume. This is equivalent to using the storage-volume forget command on those storage-
volumes.
Example
Forget the logical units in the current logical unit context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-
SYMMETRIX-192602773/logical-units> logical-unit forget
13 logical-units were forgotten.
102 logical-units have associated storage-volumes and were not forgotten
Use the --verbose arguments to display detailed information about any logical units that could
not be forgotten:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-
SYMMETRIX-192602773/logical-units> logical-unit forget --forget-storage-
volumes --verbose
WARNING: Error forgetting logical-unit: Logical-unit
'VPD83T3:6006016030802100e405a642ed16e111' has active paths and cannot be
forgotten.
.
.
.
WARNING: Error forgetting storage-volume
'VPD83T3:60000970000192602773533030353933': The 'use' property of
storage-volume 'VPD83T3:60000970000192602773533030353933' is 'meta-data'
but must be 'unclaimed' or 'unusable' before it can be forgotten.
.
.
.
13 logical-units were forgotten:
VPD83T3:60000970000192602773533030353777
.
.
.
11 storage-volumes were forgotten:
VPD83T3:6006016030802100e405a642ed16e1099
.
.
.
See also
l storage-volume forget
ls
Displays information about the current object or context.
Contexts
All contexts.
Syntax
ls
[-l|--long]
[-a|--attributes]
[-A|--no-attributes]
[-t|--attribute] selector
[-p|--paged]
[-m|--commands]
[-f|--full]
[-C|--no-contexts]
[-x|--cache-max-age]
context,[[context]...]
Arguments
Optional arguments
[-l|--long] Display more detailed information.
[-a|--attributes] Includes the attributes of the target contexts.
Description
The contents of a context include its child contexts, its attributes, and the available commands.
The context name can be any valid glob pattern.
The VPLEX CLI includes ll, a pre-defined alias of ls -a.
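The CLI's pattern matcher is its own (including the recursive ** form used in session examples in this guide); Python's standard fnmatch gives a feel for the simple wildcard case:

```python
# Illustration only: Python's fnmatch approximates simple context-name
# globbing. The VPLEX CLI's matcher (including '**') is its own.
from fnmatch import fnmatch

print(fnmatch("/clusters/cluster-1/devices", "/clusters/*/devices"))  # True
print(fnmatch("/clusters/cluster-1/exports", "/clusters/*/devices"))  # False
```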
Example
Display a device’s attributes:
VPlexcli:/> ls -C /clusters/cluster-8/devices/device_CLAR0014_LUN04_1
/clusters/cluster-8/devices/device_CLAR0014_LUN04_1:
Name Value
---------------------- -----------------------
application-consistent false
block-count 2621440
block-size 4K
capacity 10G
geometry raid-0
health-indications []
health-state ok
locality local
operational-status ok
rebuild-allowed -
rebuild-eta -
rebuild-progress -
.
.
.
Use the --attribute argument to display the operational status of all directors:
Display a cluster’s attributes and the contexts below the cluster context:
VPlexcli:/> ls /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name Value
---------------------- ------------
allow-auto-join true
auto-expel-count 0
auto-expel-period 0
auto-join-delay 0
cluster-id 1
connected true
default-cache-mode synchronous
default-caw-template true
default-write-same-16-template true
default-xcopy-template true
director-names [director-1-1-A, director-1-1-B]
island-id 1
top-level-assembly FNM00151000986
operational-status ok
transition-indications []
transition-progress []
health-state ok
health-indications []
Contexts:
connectivity consistency-groups devices exports
performance-policies storage-elements
system-volumes uninterruptible-power-supplies virtual-volumes
Use the --attribute-selector argument to display the contents of the 'virtual-volumes' attribute on
all views:
See also
l alias
management-server set-ip
Assigns IP address, net-mask, and gateway IP address to the specified management port. With the
--peer option, assigns the IP on the peer MMCS.
Contexts
All contexts.
Syntax
management-server set-ip
[-i|--ip-netmask] destination-IP-address/mask
[-g|--gateway] IP-address
[-p|--management-port] context-path
[-r|--peer]
Arguments
Required arguments
[-i|--ip-netmask] destination-IP-address/mask The address and subnet mask of the Ethernet
port. The format of the address/subnet mask depends on the version of IP.
l To specify an IPv4 address - The format is: destination IP address/subnet mask
For example: 172.16.2.0/255.255.255.0
l To specify an IPv6 address - The format is: destination IP address/CIDR netmask
For example: 3ffe:80c0:22c:803a:250:56ff:feb5:c1/64
[-g|--gateway] IP- The IP address of the gateway for this management server.
address
[-p|--management- Ethernet port for which the parameters are assigned/changed.
port] context-path
[-r|--peer] Invokes the set-ip command on the peer MMCS. This command is
only available on MMCS-A. For example, use this option to set the
public IP of MMCS-B from the MMCS-A CLI.
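The two address formats accepted by --ip-netmask can both be parsed with Python's standard ipaddress module; the sketch below uses the example values from the argument table and is an illustration, not part of the CLI:

```python
# Illustration only: IPv4 with a dotted subnet mask vs. IPv6 with a
# CIDR prefix length, using the example values given above.
import ipaddress

v4 = ipaddress.ip_interface("172.16.2.0/255.255.255.0")
v6 = ipaddress.ip_interface("3ffe:80c0:22c:803a:250:56ff:feb5:c1/64")
print(v4.network.prefixlen)  # 24
print(v6.network.prefixlen)  # 64
```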
Description
The management server includes 4 Ethernet ports:
l eth0 - Service port.
l eth1 and eth2 - Internal management ports.
l eth3 - Public management port. The only Ethernet port in the server that you can connect to
an external management LAN.
The IP addresses for eth0, eth1, and eth2 cannot be changed.
Use the management-server set-ip command to set the IP address and netmask for port
eth3.
Ports eth0, eth1 and eth2 do not have IPv6 addresses. Example output of the ll command for
eth0, eth1 and eth2:
/management-server/ports/eth0:
Name Value
------------- ---------------
address 128.221.253.33
auto-negotiate true
duplex auto-negotiated
gateway 10.31.52.1
inet6-address []
inet6-gateway []
net-mask 255.255.255.224
speed auto-negotiated
status ok
WARNING Changing the IP address for port eth3 can disrupt your inter-cluster link, and if
VPLEX Witness is deployed, disrupt the VPN between the clusters and the Cluster Witness
server.
Additional failures (for example a remote cluster failure) while VPN between the clusters and
the Cluster Witness server is disrupted could lead to DU for all distributed virtual volumes in
synchronous consistency groups.
For the procedure to safely change the management server IP address, refer to the SolVe
Desktop.
Example
Modify an IPv4 address:
l The ll command displays the current setting for eth3
l The management-server set-ip command modifies the port’s IPv4 settings:
VPlexcli:/> ll /management-server/ports/eth3
/management-server/ports/eth3:
Name Value
-------------- -------------
address 10.31.52.70
auto-negotiate true
duplex full
gateway 10.31.52.21
inet6-address -
inet6-gateway -
net-mask 255.255.248.0
speed 0.977GB/s
status up
VPlexcli:/> management-server set-ip --ip-netmask 10.31.52.197/255.255.252.0
--gateway 10.31.52.1 -p eth3
VPlexcli:/> ll /management-server/ports/eth3
Name Value
------------- -----------------------------------------------------
address 10.31.52.197
auto-negotiate true
duplex full
gateway 10.31.52.1
inet6-address [3ffe:80c0:22c:803c:215:17ff:fecc:4408/64 Scope: Global,
3ffe:80c0:22c:803c:415:17ff:fecc:4408/64 Scope: Link]
inet6-gateway 3ffe:80c0:22c:803c::1
net-mask 255.255.252.0
speed 0.977GB/s
status up
VPlexcli:/> management-server set-ip --ip-netmask
3ffe:80c0:22c:803c:215:17ff:fed2:fe88/64
--gateway 3ffe:80c0:22c:803c::1 -p eth3
VPlexcli:/> management-server set-ip
--ip-netmask 10.103.101.112/255.255.248.0 -g 10.103.96.1 -p eth3 -r
manifest upgrade
Loads a new manifest file, replacing the old one, if it exists.
Contexts
All contexts.
Syntax
manifest upgrade
[-m|--manifest] pathname
Arguments
Required arguments
[-m|--manifest] pathname Path to manifest file. Relative paths can be used.
Description
The new manifest file will be validated before it replaces the old one.
If there is no current valid manifest file (corrupted or missing), the specified manifest file is
installed without confirmation.
If a valid manifest file exists, confirmation is required if the specified manifest file does not have a
newer version than the existing one.
See also
l manifest version
manifest version
Displays the version of the currently loaded manifest file.
Contexts
All contexts.
Syntax
manifest version
Description
A Jython command used by scripts during upgrades.
Example
See also
l manifest upgrade
meta-volume attach-mirror
Attaches a storage-volume as a mirror to a meta-volume.
Contexts
All contexts.
Syntax
meta-volume attach-mirror
[-d|--storage-volume] context-path
[-v|--meta-volume] context-path
Arguments
Required arguments
[-d|--storage-volume] context-path Storage-volume to attach as a mirror to the meta-volume.
[-v|--meta-volume] context-path Meta-volume to which the storage volume should be
attached as a mirror.
Description
Creates a mirror and backup of the specified meta-volume. The specified storage volumes must
be:
l Not empty.
l At the implied or specified cluster.
l Unclaimed.
l 78 GB or larger.
Dell EMC recommends you create a mirror and a backup of the meta-volume using at least two
disks from two different arrays.
Note: You can attach a mirror when the meta-volume is first created by specifying two storage
volumes.
Example
Attach storage volume VPD83T3:6…ade11 as a mirror to the existing meta-volume _dmx:
See also
l meta-volume detach-mirror
meta-volume backup
Creates a new meta-volume and writes the current in-memory system data to the new meta-
volume without activating it.
Contexts
All contexts.
Syntax
meta-volume backup
[-d|--storage-volumes] context-path,context-path...
[-c|--cluster] context-path
[-f|--force]
Arguments
Required arguments
[-d|--storage-volumes] context-path,context-path... * List of two or more storage volumes to
use in creating the backup meta-volume. The specified storage-volumes must be:
l Not empty
l At the implied or specified cluster
l Unclaimed
l 78 GB or larger.
Type the system IDs for multiple (two or more) storage volumes,
separated by commas.
Optional arguments
[-c|--cluster] The cluster whose active meta-volume will be backed-up.
context-path
[-f|--force] Forces the backup meta-volume to be activated without asking for
confirmation.
* - argument is positional.
Description
Backup creates a point-in-time copy of the current in-memory metadata without activating it. The
new meta-volume is named:
current-metadata-namebackup_yyyyMMMdd_HHmms
Metadata is read from the meta-volume only during the boot of each director.
Create a backup meta-volume:
l As part of an overall system health check before a major migration or update.
l If the system permanently loses access to both meta-volumes.
Note: No modifications should be made to the system during the backup procedure. Make
sure that all other users are notified.
Use the ll command in the system-volumes context to verify that the meta-volume is Active
and its Ready state is true.
Example
Back up the metadata to a RAID 1 of two specified storage volumes:
See also
l meta-volume create
l meta-volume destroy
meta-volume create
Creates a new meta-volume in a cluster when there is no existing active meta-volume.
Contexts
All contexts.
Syntax
meta-volume create
[-n|--name] name
[-d|--storage-volumes] context-path,context-path...
[-f|--force]
Arguments
Required arguments
[-n|--name] name * Name of the new meta-volume.
[-d|--storage-volumes] context-path,context-path... * List of two or more storage volumes to
use in creating the new meta-volume. The specified storage volumes must not be empty, and
must be at the implied or specified cluster.
Type the system IDs for the storage volumes separated by commas.
Note: Specify two or more storage volumes. Storage volumes
should be on different arrays.
Optional arguments
[-f|--force] Forces the meta-volume to be created without asking for confirmation.
* - argument is positional.
Description
Metadata includes virtual-to-physical mappings, data about devices, virtual volumes, and
configuration settings.
Metadata is stored in cache and backed up on a specially designated external volume called the
meta-volume.
The meta-volume is critical for system recovery. The best practice is to mirror the meta-volume
across two or more back-end arrays to eliminate the possibility of data loss. Choose the arrays
used to mirror the meta-volume such that they are not required to migrate at the same time.
Meta-volumes differ from standard storage volumes in that:
l A meta-volume is created without first being claimed.
l Meta-volumes are created directly on storage volumes, not extents.
CAUTION If the meta-volume is configured on a CLARiiON array, it must not be placed on
the vault drives of the CLARiiON.
Performance is not critical for meta-volumes. The minimum performance allowed is 40 MB/s
and 100 4 K IOPS.
The physical spindles for meta-volumes should be isolated from application workloads.
Dell EMC recommends the following for meta-volumes:
l Read caching enabled.
l A hot spare meta-volume pre-configured in case of a catastrophic failure of the active meta-
volume.
l Minimum of 78 GB.
If two or more storage-volumes are specified, they must be on two separate arrays if more than
one array is present. This command creates a RAID 1 of all the storage volumes.
Examples
In the following example:
l The configuration show-meta-volume-candidates command displays possible
candidates:
Note: Example output is truncated. Vendor, IO Status, and Type fields are omitted.
l The meta-volume create command creates a new mirrored volume using the two
specified storage volumes.
l The ll command displays the new meta-volume.
FNM00083800068
.
.
.
VPlexcli:/> meta-volume create --name c1_meta --storage-volumes
VPD83T3:60000970000192601707533031333136,
VPD83T3:60060480000190300487533030343445
VPlexcli:/> cd /clusters/cluster-1/system-volumes
VPlexcli:/clusters/cluster-1/system-volumes> ll c1_meta
/clusters/cluster-1/system-volumes/c1_meta:
Attributes:
Name Value
---------------------- -----------
active true
application-consistent false
block-count 20971264
block-size 4K
capacity 80G
component-count 2
free-slots 27199
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
ready true
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
slots 32000
stripe-depth -
system-id c1_meta
transfer-size 128K
volume-type meta-volume
Contexts:
Name Description
----------
-------------------------------------------------------------------
components The list of components that support this device or system virtual
volume.
See also
l meta-volume destroy
meta-volume destroy
Destroys a meta-volume, and frees its storage volumes for other uses.
Contexts
All contexts.
Syntax
meta-volume destroy
[-v|--meta-volume] context-path
[-f|--force]
Arguments
Required arguments
[-v|--meta-volume] context-path * Meta-volume to destroy.
Optional arguments
[-f|--force] Destroys the meta-volume without asking for confirmation
(allows the command to be run from a non-interactive script).
Allows the meta-volume to be destroyed even if the meta-volume
is in a failed state and unreachable.
* - argument is positional.
Description
The meta-volume cannot be destroyed if its active attribute is true.
Example
In the following example:
l ll displays that the target meta-volume has an active state of false.
l The meta-volume destroy command destroys the meta-volume:
VPlexcli:/clusters/cluster-1/system-volumes> ll meta1
/clusters/cluster-1/system-volumes/meta1:
Attributes:
Name Value
---------------------- -----------
active false
application-consistent false
block-count 23592704
.
.
.
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy -v meta1
Meta-volume 'meta1' will be destroyed. Do you wish to continue? (Yes/No) y
See also
l meta-volume create
meta-volume detach-mirror
Detaches a storage-volume/mirror from a meta-volume.
Contexts
All contexts.
Syntax
meta-volume detach-mirror
[-d|--storage-volume] context-path
[-v|--meta-volume] context-path
[-s|--slot] slot-number
[-f|--force]
--discard
Arguments
Required arguments
[-d|--storage-volume] context-path Storage volume to detach as a mirror from the
meta-volume.
[-v|--meta-volume] context-path * The meta-volume from which the storage-volume/
mirror should be detached.
Optional arguments
[-f|--force] Force the mirror to be discarded. Required when the --
discard argument is used.
[-s|--slot] slot-number The slot number of the mirror to be discarded. Applicable
only when the --discard argument is used.
[-u|--detach-unreachable- Supports the discard of an unreachable mirror.
mirror]
--discard Discards the mirror to be detached. The data is not
discarded.
* - argument is positional.
Description
Detaches the specified storage volume from a meta-volume.
Use the ll command in /clusters/cluster/system-volumes/meta-volume/
components context to display the slot number when using the discard argument.
Example
VPlexcli:/clusters/cluster-1/system-volumes/meta-vol-1/components> ll
Name Slot Type
See also
l meta-volume attach-mirror
meta-volume move
Writes the current in-memory system data to the specified target meta-volume, then activates it.
Contexts
All contexts.
Syntax
meta-volume move
[-t|--target-volume] context-path
Arguments
Required arguments
[-t|--target-volume] context-path Storage volume to move metadata to. The target volume
must be:
l Unclaimed
l 78 GB or larger
Description
Writes the metadata to the specified meta-volume, and activates it. The specified meta-volume
must already exist (it is not created automatically).
This command fails if the destination meta-volume has fewer metadata slots than required to
support the current configuration. This is highly likely if the target meta-volume was manually
created before Release 5.1 and has 32000 slots. Confirm this by using the ll command in the
system-volume context. See the troubleshooting procedures for VPLEX in the SolVe Desktop
for information on fixing this problem.
See also
l meta-volume create
l meta-volume destroy
meta-volume verify-on-disk-consistency
Analyzes a meta-volume's committed (on-disk) header slots for consistency across all mirrors/
components.
Contexts
All contexts.
Syntax
meta-volume verify-on-disk-consistency
[-l|--log] log-file
[-f|--first] first
[-n|--number] number
[-c|--cluster] cluster
[-m|--meta-volume] meta-volume
--style {short|long|slow}
Arguments
Required arguments
[-c|--cluster] cluster - The cluster at which to analyze the active meta-volume. This argument may be omitted if the --meta-volume argument is present.
[-m|--meta-volume] meta-volume - The meta-volume to analyze. This argument may be omitted if the --cluster argument is present.
[-l|--log] log-file - Full path to the log file on the management server.
[-f|--first] first - Offset of first header to analyze.
[-n|--number] number - Number of headers to analyze.
--style {short|long|slow} - The style of analysis to do. Valid values:
short - Requires special firmware support available only in Release 5.0 and later.
long - Requires special firmware support available only in Release 5.0 and later.
slow - Available for all Release versions. Downloads the meta-volume headers from the meta-volume legs one at a time and compares them.
CAUTION The slow option may take hours to complete on a production meta-volume.
Description
An active meta-volume with an inconsistent on-disk state can lead to a data unavailability (DU)
during NDU.
Best practice is to upgrade immediately after passing this meta-volume consistency check.
Note: If any errors are reported, do not proceed with the upgrade, and contact Dell EMC
Customer Support.
Example
Verify the specified meta-volume is consistent using the slow style:
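A representative invocation, shown as a sketch (the meta-volume name meta-vol-1 is illustrative, not from the source):

```
VPlexcli:/> meta-volume verify-on-disk-consistency --meta-volume meta-vol-1 --style slow
```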
See also
l meta-volume create
monitor add-console-sink
Adds a console sink to the specified performance monitor.
Contexts
All contexts.
In context, command is add-console-sink.
Syntax
monitor add-console-sink
[-o|--format] {csv|table}
[-m|--monitor] monitor-name
[--force]
Arguments
Required arguments
[-m|--monitor] context-path - * Performance monitor to which to add a console sink.
Optional arguments
[-f|--force] - Forces the creation of the sink, even if existing monitors are delayed in their polling.
[-o|--format] {csv|table} - The output format. Can be csv (comma-separated values) or table. Default: table.
* - argument is positional.
Description
Creates a console sink for the specified performance monitor. Console sinks send output to the
management server console.
Every monitor must have at least one sink, and may have multiple sinks. A monitor does not begin
operation (polling and collecting performance data) until a sink is added to the monitor.
Use the monitor add-console-sink command to add a console sink to an existing monitor.
CAUTION Console monitors display the specified statistics on Unisphere for VPLEX,
interrupting any other input/output to/from the console.
Example
Add a console sink with output formatted as table (the default output format for console sinks):
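For example, assuming a monitor named director-1-1-A_stats already exists (the monitor path is illustrative):

```
VPlexcli:/> monitor add-console-sink --monitor /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats --format table
```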
Navigate to the monitor context and use the ll console command to display the sink settings:
sink-to console
type console
See also
l monitor add-file-sink
l monitor remove-sink
l monitor create
monitor add-file-sink
Adds a file sink to the specified performance monitor.
Contexts
All contexts.
In /monitoring context, command is add-file-sink.
Syntax
monitor add-file-sink
[-n|--name] name
[-o|--format] {csv|table}
[-m|--monitor] monitor-name
[-f|--file] filename
--force
Arguments
Required arguments
[-m|--monitor] context-path - * Performance monitor to which to add a file sink.
[-f|--file] filename - * File to which to send the sink's data.
Optional arguments
--force - Forces the creation of the sink, even if existing monitors are delayed in their polling.
[-n|--name] name - Name for the new sink. If no name is provided, the default name “file” is applied.
[-o|--format] {csv|table} - The output format. Can be csv (comma-separated values) or table. Default: csv.
* - argument is positional.
Description
Creates a file sink for the specified monitor. File sinks send output to the specified file.
The default location of the output file is /var/log/VPlex/cli.
The default name for the file sink context is file.
Every monitor must have at least one sink, and may have multiple sinks. A monitor does not begin
operation (polling and collecting performance data) until a sink is added to the monitor.
Use the monitor add-file-sink command to add a file sink to an existing monitor.
Example
To add a file sink to send output to the specified .csv file:
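For example (the monitor path and output file name are illustrative):

```
VPlexcli:/> monitor add-file-sink --monitor /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats --file /var/log/VPlex/cli/director_1_1_A.csv
```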
Navigate to the monitor sinks context and use the ll sink-name command to display the sink:
VPlexcli:/> cd /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats/sinks
VPlexcli:/monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats/sinks> ll file
/monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats/sinks/file:
Name Value
------- -------------------------------
enabled true
format csv
sink-to /var/log/VPlex/cli/director_1_1_A.csv
type file
See also
l monitor add-console-sink
l monitor collect
l monitor remove-sink
l report create-monitors
monitor collect
Force an immediate poll and collection of performance data without waiting for the automatic poll
interval.
Contexts
All contexts.
In /monitoring context, command is collect.
Syntax
monitor collect
[-m|--monitors] context-path,context-path...
Arguments
Required arguments
[-m|--monitors] context-path,context-path... - One or more performance monitors to update immediately.
Description
Polls and collects performance data from user-defined monitors. Monitors must have at least one
enabled sink.
Example
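A representative invocation (the monitor path is illustrative):

```
VPlexcli:/> monitor collect /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats
```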
See also
l monitor create
l report poll-monitors
monitor create
Creates a performance monitor.
Contexts
All contexts.
In /monitoring context, command is create.
Syntax
monitor create
[-p|--period] collection-period
[-n|--name] monitor-name
[-d|--director] context-path,context-path...
[-s|--stats] stat,[stat,...]
[-t|--targets] context-path,context-path...
[-f|--force]
Arguments
Required arguments
[-n|--name] monitor-name - * Name of the monitor. The name is appended to the director on which the monitor is configured.
[-s|--stats] stat[,stat,...] - * One or more statistics to monitor, separated by commas. Use the monitor stat-list command to display the available statistics.
Optional arguments
[-p|--period] collection-period - Frequency at which this monitor collects statistics. Valid arguments are an integer followed by:
* - argument is positional.
Description
Performance monitoring collects and displays statistics to determine how a port or volume is being
used, how much I/O is being processed, CPU usage, and so on.
VPLEX collects and displays performance statistics using two user-defined objects:
l monitors - Gather the specified statistics.
l monitor sinks - Direct the output to the desired destination. Monitor sinks include the console,
a file, or a combination of the two.
The monitor defines the automatic polling period, the statistics to be collected, and the format of
the output. The monitor sinks define the output destination.
Polling occurs when:
l The timer defined by the monitor’s period attribute has expired.
l The monitor has at least one sink with the enabled attribute set to true.
Polling is suspended when:
l The monitor’s period is set to 0, and/or
l All the monitor’s sinks are either removed or their enabled attribute is set to false
Create short-term monitors to diagnose an immediate problem.
Create longer-term monitors for ongoing system management.
About file rotation and timestamps
The log files created by a monitor’s file sink are automatically rotated when they reach a size of 10
MB. The 10MB file is saved as filename.csv.n where n is a number 1 - 10, and output is saved
in a new file named filename.csv.n+1.
The .csv files are rotated up to 10 times.
In the following example, a monitor has exceeded 10MB of output. The initial 10MB are stored in
filename.csv.1. Subsequent output is stored in filename.csv.
service@sms-cluster-1:/var/log/VPlex/cli> ll my-data.csv*
-rw-r--r-- 1 service users 2910722 2012-03-06 21:23 my-data.csv
-rw-r--r-- 1 service users 10566670 2012-03-06 21:10 my-data.csv.1
If the second file exceeds 10 MB, it is saved as filename.csv.2, and subsequent output is saved
in filename.csv. Up to 10 such rotations and numbered .csv files are supported.
When the file sink is removed or the monitor is destroyed, output to the .csv file stops, and the
current .csv file is time stamped. For example:
service@sms-cluster-1:/var/log/VPlex/cli> ll my-data.csv*
-rw-r--r-- 1 service users 10566670 2012-03-06 21:23 my-data.csv.1
-rw-r--r-- 1 service users 5637498 2012-03-06 21:26 my-
data.csv_20120306092614973
Examples
Create a simple monitor with the default period, and no targets:
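A sketch of such an invocation (the monitor name and statistic shown are illustrative; use the monitor stat-list command to confirm the statistics available on your system):

```
VPlexcli:/> monitor create --name MyMonitor --director /engines/engine-1-1/directors/director-1-1-A --stats director.fe-ops
```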
See also
l monitor add-console-sink
l monitor add-file-sink
l monitor destroy
l monitor stat-list
l report create-monitors
monitor destroy
Destroys a performance monitor.
Contexts
All contexts.
In /monitoring context, command is destroy.
Syntax
monitor destroy
[-m|--monitor] monitor-name,monitor-name...
[-c|--context-only]
[-f|--force]
Arguments
Required arguments
[-m|--monitor] monitor-name,monitor-name... - * List of one or more names of the monitors to destroy.
Optional arguments
[-f|--force] - Destroy monitors with enabled sinks and bypass confirmation.
[-c|--context-only] - Removes monitor contexts from Unisphere for VPLEX and the CLI, but does not delete monitors from the firmware. Use this argument to remove contexts that were created on directors to which the element manager is no longer connected.
* - argument is positional.
Description
Deletes the specified performance monitor.
Example
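A representative invocation (the monitor name is illustrative; monitor names are prefixed with the director on which they were configured):

```
VPlexcli:/> monitor destroy director-1-1-A_MyMonitor
```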
See also
l monitor create
l report create-monitors
monitor get-stats
Gets the last collected statistics from the specified monitors.
Contexts
All contexts.
Syntax
monitor get-stats
[-m|--monitors] context-path[,context-path...]
[-p|--parseable]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-m|--monitors] context-path[,context-path...] - * Get the last stats from the monitors specified by the listed context paths.
[-p|--parseable] - Output parser-friendly stats names.
Optional arguments
[-h|--help] - Displays the usage for this command.
[--verbose] - Provides more output during command execution. This may not have any effect for some commands.
* argument is positional
Description
Gets the last collected statistics from the specified monitors.
The default polling frequency of system-wide perpetual monitors is 5 seconds, and that of virtual-
volume perpetual monitors is 1 minute. Your application should therefore tune its poll frequency
(calling the REST API to get the stats from VPLEX) according to the poll frequency of the
monitors. If your application polls at a higher frequency than the monitors, it will get redundant
data that it has already polled.
Examples
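A representative invocation (the monitor path is illustrative):

```
VPlexcli:/> monitor get-stats --monitors /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats --parseable
```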
monitor remove-sink
Removes a sink from a performance monitor.
Contexts
All contexts.
In /monitoring context, command is remove-sink.
Syntax
monitor remove-sink
[-s|--sinks] context-path,context-path...
Arguments
Required arguments
[-s|--sinks] context-path,context-path... - * List of one or more sinks to remove. Entries must be separated by commas.
* - argument is positional.
Description
Removes one or more performance monitor sinks.
Example
Remove a console sink:
VPlexcli:/monitoring/directors/director-2-1-B/monitors/director-2-1-B
_TestMonitor> monitor remove-sink console
See also
l monitor add-console-sink
l monitor add-file-sink
monitor stat-list
Displays statistics available for performance monitoring.
Contexts
All contexts.
In /monitoring context, command is stat-list.
Syntax
monitor stat-list
[-c|--categories] category,category...
Arguments
Optional arguments
[-c|--categories] category, category... List of one or more statistics categories to display.
Description
Performance statistics are grouped into categories. Use the monitor stat-list command
followed by the <Tab> key to display the statistics categories.
Use the --categories categories argument to display the statistics available in the specified
category.
Use the * wildcard to display all statistics for all categories.
Note: A complete list of the command output is available in the Dell EMC VPLEX Administration
Guide.
Examples
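For example, to display the statistics in a single category (the category name director is illustrative; press <Tab> after the command to list the categories available on your system):

```
VPlexcli:/> monitor stat-list --categories director
```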
See also
l monitor create
l Dell EMC VPLEX Administration Guide
ndu pre-check
Performs a pre-NDU validation and check.
Contexts
All contexts.
Syntax
ndu pre-check
Description
The ndu pre-check command should be run before you run a non-disruptive upgrade on a
system to upgrade GeoSynchrony. This command runs through a number of checks to see if the
non-disruptive upgrade would run into any errors in upgrading GeoSynchrony.
CAUTION NDU pre-checks must be run within 24 hours before starting the NDU process.
Disclaimers for multipathing in the ndu pre-check output allow time for you to validate hosts.
For more detailed information about NDU pre-checks, see the upgrade procedures for VPLEX in
the SolVe Desktop.
The checks performed by ndu pre-check are listed in the Upgrade procedure for each software
release. This procedure can be found in the VPLEX procedures in the SolVe Desktop.
See also
l ndu start
l ndu recover
l ndu status
ndu pre-config-upgrade
Disruptively upgrades a VPLEX that has not been fully installed and configured.
Contexts
All contexts.
Syntax
ndu pre-config-upgrade
[-u|--firmware] firmware-tar-file
[-i|--image] firmware-image-file
Arguments
Optional arguments
[-u|--firmware] firmware-tar-file - Full path to director firmware package on the management server.
[-i|--image] firmware-image-file - Full path to director firmware image on the management server.
Description
Disruptively upgrades a VPLEX when the VPLEX is not fully installed and configured.
CAUTION This command requires the VPLEX be in a pre-config state. Specifically, do not use
this procedure unless NO meta-volume is configured (or discoverable).
This command is used as part of a non-disruptive upgrade procedure for installed systems that
have not yet been configured. For more information, see the upgrade procedures for VPLEX in the
SolVe Desktop.
See also
l ndu start
l ndu recover
l ndu status
ndu recover
Perform NDU recovery after a failed NDU attempt.
Contexts
All contexts.
Syntax
ndu recover
Description
If the NDU failed before I/O was transferred from the second upgraders (running the old software)
to the first upgraders (running the new software), then the first upgraders are rolled back to the
old software.
If the NDU failed after I/O transfer, the directors are rolled forward to the new software.
If no recovery is needed, a message is displayed.
It is safe to run the ndu recover command multiple times.
See the upgrade procedure or the troubleshooting procedure in the SolVe Desktop for details of
the ndu recover command and its use.
See also
l ndu pre-check
l ndu start
l ndu status
[--skip-group-fe-checks]
[--skip-group-health-checks]
[--skip-meta-volume-backup-check]
[--skip-meta-volume-redundancy-check]
[--skip-remote-mgmt-version-check]
[--skip-storage-volumes-check]
[--skip-sysconfig-check]
[--skip-view-config-check]
[--skip-view-health-check]
[--skip-virtual-volumes-check]
[--skip-wan-com-check]
Arguments
Required arguments
[-i|--image] firmware-image-file - * Full path to director firmware image on the management server. For example:
/tmp/VPlexInstallPackages/VPlex-5.0.1.00.00.06-director-field-disk-image.tar
Optional arguments
[-t|--targets] targets,targets,... - List of directors to upgrade.
--force - Must be specified to ignore SSD firmware version checking (to upgrade to the same or older firmware).
--check-only - Check which directors will have their SSD firmware upgraded, without upgrading the firmware.
--dry-run - Do not perform the SSD firmware upgrade, but run the same procedure as an actual install (including netbooting the directors).
--skip-be-switch-check - Skips the NDU pre-check for unhealthy back-end switches.
--skip-cluster-status-check - Skips the NDU pre-check for cluster problems (missing directors, suspended exports, inter-cluster link failure).
--skip-confirmations - Skips any user confirmations normally required before proceeding when there are NDU pre-check warnings.
--skip-distributed-device-settings-check - Skips the NDU pre-check for distributed device settings (auto-resume set to true).
--skip-fe-switch-check - Skips the NDU pre-check for unhealthy front-end switches.
--skip-group-be-checks - Skips all NDU pre-checks related to back-end validation. This includes the system configuration validation and unreachable storage volumes pre-checks.
--skip-group-config-checks - Skips all NDU pre-checks related to system configuration. This includes the system configuration validation and director commission pre-checks.
Description
Non-disruptively upgrades the SSD firmware on the directors in a running VPLEX system.
Upgrades the directors one at a time, ensuring that there are directors available to service I/O
while some of the directors are being upgraded. The upgraded director rejoins the system before
the next director is upgraded.
The director SSD firmware upgrade is performed by netbooting the director to ensure that the
SSD is not in use while the firmware is being upgraded.
Refer to the upgrade procedure in the SolVe Desktop for more information on using this command.
See also
l ndu start
l The VPLEX procedures in the SolVe Desktop to upgrade/troubleshoot GeoSynchrony.
ndu start
Begins the non-disruptive upgrade (NDU) process of the director firmware.
Contexts
All contexts.
Syntax
ndu start
[-u|--firmware] firmware-tar-file
[--io-fwd-ask-for-confirmation] prompt-type
[optional-argument [optional-argument]]
Arguments
Required arguments
[-u|--firmware] firmware-tar-file - * Full path to director firmware package on the management server.
[--io-fwd-ask-for-confirmation] prompt-type - The type of prompt that you want to see during the I/O forwarding phase of the NDU. The available options are:
l always - Choose this option if you have hosts that require manual scanning for the paths to be visible. Assistance from the customer is required to verify that initiator paths on the hosts are alive. If the path is unavailable, resolve the issue within the timeout period that you have specified. The prompts for this option are:
n Continue: NDU continues even when there are missing initiator logins. Make sure that the customer is aware that missing logins can cause DU.
n Rollback: NDU rolls back and DU is avoided. The customer can check the host, resolve the issue that led to the missing initiator logins, and rerun the NDU.
n Refresh: Get the new list of initiators. If all the initiators are logged in, VPLEX displays the prompts to move forward.
l on-missing-logins - Assistance from the customer is required to determine whether any missing initiators are from critical hosts. If paths are unavailable from critical hosts, the customer will need to resolve the issue before continuing with the NDU. The prompts for this option are:
n Continue: NDU continues even when there are missing initiator logins. Make sure that the customer is aware that missing logins can cause DU.
n Rollback: NDU rolls back and DU is avoided. The customer can check the host, resolve the issue that led to the missing initiator logins, and rerun the NDU.
Optional arguments
--io-fwd-timeout time - The period after which the I/O forward phase times out. In the I/O forward phase, the I/Os that are serviced to the first set of directors are forwarded to the second set of directors. The hosts are expected to connect back to the first set of directors during this period. By default, this phase lasts for 180 minutes. You can set this timeout period to a minimum of 6 minutes and a maximum of 12 hours. Use:
l s for seconds
l m for minutes
l h for hours
l d for days
--cws-package cws-firmware-tar-file - Full path to Cluster Witness Server package on the management server.
Note: Not required if upgrading to an official product release.
Description
This command starts a non-disruptive upgrade and can skip certain checks to push a non-
disruptive upgrade when the ndu pre-checks command fails. The pre-checks executed by the
ndu pre-check command verify that the upgrade from the current software to the new
software is supported, the configuration supports NDU, and the system state is ready (clusters
and volumes are healthy).
You must resolve all issues disclosed by the ndu pre-check command before running the ndu
start command.
Skip options enable ndu start to skip one or more NDU pre-checks. Skip options should be used
only after fully understanding the problem reported by the pre-check to minimize the risk of data
unavailability.
Note: Skip options may be combined to skip more than one pre-check. Multiple skip options
must be separated by a space.
Note: It is recommended that you upgrade VPLEX using the upgrade procedure found in the
SolVe Desktop. This procedure also details when the ndu start command should be used
with skip options and how to select and use those skip options.
See also
l ndu pre-check
l ndu recover
l ndu status
l The VPLEX procedures to upgrade/troubleshoot GeoSynchrony in the SolVe Desktop .
ndu status
Displays the NDU status.
Contexts
All contexts.
Syntax
ndu status
[--verbose]
Description
If an NDU firmware or OS upgrade is running, this command displays the upgrade activity.
If neither an NDU firmware nor an OS upgrade is running, this command displays information about
the previous NDU firmware upgrade.
If the last operation was a rolling-upgrade, the OS upgrade information is displayed. The ndu
start command clears this information.
If an NDU firmware or OS upgrade has failed, this command displays a message to use the ndu
recover command.
If an NDU recovery is in progress, has succeeded, or has failed, this command displays a status
message.
See also
l ndu pre-check
l ndu start
l ndu recover
Required arguments
[-m|--modified-events-file] file - Path to the file containing the modified call-home events.
Optional arguments
[-f|--force] Forces the import of the specified file without asking for
confirmation. Allows this command to be run from non-
interactive scripts.
Description
Imports and applies modifications to call-home events. This command imports the specified .xml
file that contains modified call-home events.
Use the set command to enable/disable call-home notifications.
Use the ls notifications/call-home command to display whether call-home is enabled.
See also
l notifications call-home remove-event-modifications
l notifications call-home view-event-modifications
l notifications call-home test
Syntax
notifications call-home remove-event-modifications
[-c|--customer-specific]
[-e|--emc-generic]
[-f|--force]
Arguments
Optional arguments
[-c|--customer-specific] - If a customer-specific call-home events file has been imported, removes the file.
[-e|--emc-generic] - If a Dell EMC call-home events file has been imported, removes the file.
[-f|--force] - Removes the specified imported call-home events file without asking for confirmation. Allows this command to be executed from a non-interactive script.
Description
This command removes the specified custom call-home events file. There are two types of .xml
event files:
l Dell EMC-generic events are modifications recommended by Dell EMC.
Dell EMC provides an .xml file containing commonly requested modifications to the default
call-home events.
l Customer-specific events are events modified to meet a specific customer requirement.
Dell EMC provides a custom events file developed by Dell EMC engineering and applied by Dell
EMC Technical Support.
If no file is specified, this command removes both custom call-home events files.
The specified file is not deleted from the management server. When a custom events file is
removed, the default events file LIC.xml is applied.
Use the ndu upgrade-mgmt-server command to re-import the file.
See also
l notifications call-home view-event-modifications
l notifications call-home test
[-e|--emc-generic]
Arguments
Optional arguments
[-c|--customer-specific] - Displays customer-specific modifications.
[-e|--emc-generic] - Displays Dell EMC generic modifications.
Description
If event modifications are applied to call-home events, this command displays those events whose
call-home events have been modified.
If the same event is modified by both the customer-specific and the Dell EMC-generic events files,
the setting in the customer-specific file overrides the entry in the Dell EMC-generic file.
Use this command with no arguments to display a summary of all event modifications.
Use this command with the -c or -e arguments to display a summary of only the
customer-specific or Dell EMC generic modified events.
Use the --verbose argument to display detailed information.
See also
l notifications call-home remove-event-modifications
See also
l configuration event-notices-reports config
l configuration event-notices-reports reset
l notifications snmp-trap create
l set
Required arguments
[-j|--jobs] job-context-path - * Specifies the jobs to cancel.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any effect for some commands.
* - argument is positional.
Description
This command cancels jobs that are queued in the job queue.
Only jobs that are in a Queued state can be cancelled. Once they are cancelled, the jobs remain in
the job queue with a state of Cancelled.
See also
l notifications job delete
l notifications job resubmit
Syntax
notifications job delete
[-s|--state] job-state
[-j|--jobs] job-context-path [, job-context-path...]
[-f|--force]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-j|--jobs] job-context-path - * Specifies the jobs to delete.
Optional arguments
[-s|--state] job-state - Specifies a job state by which to filter the jobs. This option is most useful when all jobs are specified in the command invocation.
[-f|--force] - Deletes the jobs without asking for confirmation.
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution.
* - argument is positional.
Description
This command permanently removes jobs from the job queue. A job that is in progress cannot be
deleted. All other job states - cancelled, failed, and successful - can be deleted.
Example
Delete jobs that are in a failed state:
The following output shows an attempt to delete two specific jobs that failed (the jobs were skipped):
/notifications/jobs/Provision_4_14-04-14-15-57-03
Do you wish to proceed? (Yes/No) y
Skipped 2 jobs.
Use the --verbose option to display more information for a job that is skipped. In this example,
one job was skipped, or not deleted, because it was in progress:
See also
l notifications job cancel
l notifications job resubmit
Required arguments
[-j|--jobs] job-context-path [, job-context-path...] - * Specifies the jobs to resubmit.
Optional arguments
[-h|--help] - Displays command line help.
[--verbose] - Provides more output during command execution. This may not have any effect for some commands.
* - argument is positional.
Description
Resubmits failed or cancelled jobs to the job queue. The jobs are placed in a queued state and
executed in normal priority order.
See also
l notifications job cancel
l notifications job delete
Required arguments
[-n|--name] trap-name Name of the SNMP trap sink.
Description
The SNMP trap does not start automatically.
To start the SNMP trap, do the following:
l Use the set command to set the IP address of the remote-host.
l Use the set command to set the started attribute to true.
Example
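A sketch of the create-and-start sequence (the trap name myTrap and the remote-host address are illustrative; the context paths shown are indicative):

```
VPlexcli:/notifications/call-home/snmp-traps> notifications snmp-trap create --name myTrap
VPlexcli:/notifications/call-home/snmp-traps> cd myTrap
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set remote-host 192.0.2.10
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set started true
```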
See also
l notifications call-home test
Optional arguments
[-s|--snmp-trap] trap-name Name of the SNMP trap sink to destroy.
[-f|--force] Destroy an SNMP trap sink that has been started.
Description
The --force argument is required to destroy an SNMP trap sink that has been started.
Example
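A representative invocation (the trap name is illustrative; --force is needed only if the trap has been started):

```
VPlexcli:/> notifications snmp-trap destroy --snmp-trap myTrap --force
```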
See also
l notifications snmp-trap create
password-policy reset
Resets the password-policies to the default factory settings.
Contexts
/security/authentication
Syntax
password-policy reset
[-h|--help]
[-f|--force]
[--verbose]
Arguments
Optional arguments
[-h|--help] Displays the usage for this command.
[-f |--force] Forces reset of password-policy configuration without asking for
confirmation.
[--verbose] Provides more output during command execution.
Description
Resets the password-policies to the default factory settings.
Note: This command can be run only by the admin user, so it prompts for the admin password.
After successful authentication, if you do not specify the --force option, the command
prompts with a confirmation message and proceeds based on your input.
Note: Using this command will override all existing password-policy configurations and return
them to the default settings.
See also
l password-policy set
password-policy set
Configures the password policy. Each attribute in the password policy is configurable. The new
value is written to the respective configuration file, and existing users are updated with this configuration.
Contexts
/security/authentication/password-policy
Syntax
set
[password-minimum-length
| minimum-password-age
| maximum-password-age
| password-warn-days
| password-inactive-days ] value
Arguments
Description
The password policies are not applicable to users configured through an LDAP server.
Password inactive days is not applied to the admin user, to protect the admin user from account
lockouts.
Note: The password policy can be configured only by the admin user.
Example
To view the existing password-policy settings for the admin account and for other local users, run
the ll command in the security/authentication/password-policy context:
VPlexcli:/security/authentication/password-policy> ll
Name Value
----------------------- -----
password-inactive-days 1
password-maximum-days 90
password-minimum-days 1
password-minimum-length 8
password-warn-days 15
To view the existing password-policy settings for the service account, run the ll command in the
security/authentication/password-policy/service context:
VPlexcli:/security/authentication/password-policy/service> ll
Name Value
----------------------- -----
password-maximum-days 3650
password-minimum-days 0
password-warn-days 30
To view the default password-policy settings, run the ll command in the
security/authentication/password-policy/default context:
Name Value
----------------------- -----
password-inactive-days 1
password-maximum-days 90
password-minimum-days 1
password-warn-days 15
To view the permissible values for the password-policy set command, enter the command with no
options:
VPlexcli:/security/authentication/password-policy> set
attribute input-description
----------------------- --------------------------------------
name Read-only.
To set the password inactive days value to three days, use this command:
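A sketch of that invocation, run from the password-policy context:

```
VPlexcli:/security/authentication/password-policy> set password-inactive-days 3
```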
See also
l password-policy reset
plugin addurl
Adds a URL to the plug-in search path.
Contexts
All contexts.
Syntax
plugin addurl
[-u|--urls] url,url...
Arguments
Required arguments
[-u|--urls] url,url... - A list of URLs to add to the search path. Entries must be separated by commas.
Description
Note: The plugin commands are not intended for customer use.
Plug-ins extend the class path of the CLI. Plug-ins support dynamic addition of functionality. The
plugin search path is used by the plugin register command.
See also
l plugin listurl
l plugin register
plugin listurl
Lists URLs currently in the plugin search path.
Contexts
All contexts.
Syntax
plugin listurl
Description
The search path URLs are those locations added to the plugin search path using the plugin
addurl command.
Note: The plugin commands are not intended for customer use.
Example
See also
l plugin addurl
l plugin register
plugin register
Registers a shell plugin by class name.
Contexts
All contexts.
Syntax
plugin register
[-c|--classes] class-name[,class-name ...]
Arguments
Required arguments
[-c|--classes] class-name[,class-name...] A list of plugin classes. Entries must be separated by commas.
Description
The plugin class must be found in the default classpath, or in a location added using the plugin addurl command.
Plug-ins add a batch of commands to the CLI, generally implemented as a set of one or more
Jython modules.
Note: The plugin commands are not intended for customer use.
See also
l plugin addurl
l plugin listurl
popd
Pops the top context off the stack, and changes the current context to that context.
Contexts
All contexts.
Syntax
popd
Description
If the context stack is currently empty, an error message is displayed.
Example
In the following example:
l The pushd command adds a third context to the context stack. The output of the command
displays the three contexts in the stack.
l The popd command removes the top (last added) context, changes the context to the next
one in the stack, and the output displays the two remaining contexts:
See also
l pushd
pushd
Pushes the current context onto the context stack, and then changes the current context to the
given context.
Contexts
All contexts.
Syntax
pushd
[-c|--context] context
Arguments
Optional arguments
[-c|--context] context The context to push onto the context stack.
Description
Adds the context to the context stack.
If no context is supplied, and there is a context on the stack, the current context is exchanged
with the top-of-stack context.
Use the popd command to remove the topmost context from the context stack.
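The context-stack semantics described here (push, pop, and the no-argument exchange) can be modeled with a short Python sketch. This is illustrative only, not VPLEX code, and the context paths used in the usage note are examples:

```python
# Illustrative model of the pushd/popd context-stack behavior
# described above (not VPLEX code).
class ContextStack:
    def __init__(self, current="/"):
        self.current = current
        self.stack = []

    def pushd(self, context=None):
        if context is not None:
            # Push the current context, then change to the given one.
            self.stack.append(self.current)
            self.current = context
        elif self.stack:
            # No argument: exchange the current context with top-of-stack.
            self.stack[-1], self.current = self.current, self.stack[-1]

    def popd(self):
        if not self.stack:
            # Mirrors the error shown when the context stack is empty.
            raise RuntimeError("context stack is empty")
        # Change to the top context and remove it from the stack.
        self.current = self.stack.pop()
```

For example, starting at "/", pushd("/clusters/cluster-1") leaves "/" on the stack; a second pushd with no argument toggles back to "/".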
Example
Starting in the root context, use the pushd command to push the first context onto the context
stack:
VPlexcli:/>
VPlexcli:/> pushd /clusters/cluster-1/storage-elements/storage-arrays/
[/clusters/cluster-1/storage-elements/storage-arrays, /, /]
Use the pushd command to push a second context onto the context stack:
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays> pushd /engines/engine-1-1/directors/director-1-1-A
[/engines/engine-1-1/directors/director-1-1-A, /clusters/cluster-1/storage-
elements/storage-arrays, /, /]
Now, there are two contexts on the context stack. Use the pushd command to toggle between
the two contexts:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A> pushd
[/clusters/cluster-1/storage-elements/storage-arrays, /engines/engine-1-1/
directors/director-1-1-A, /, /]
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays> pushd
[/engines/engine-1-1/directors/director-1-1-A, /clusters/cluster-1/storage-
elements/storage-arrays, /, /]
VPlexcli:/engines/engine-1-1/directors/director-1-1-A>
See also
l popd
rebuild set-transfer-size
Changes the transfer-size of the given devices.
Contexts
All contexts.
Syntax
rebuild set-transfer-size
[-r|--devices] context-path,context-path...
[-l|--limit] limit
Arguments
Required arguments
[-r|--devices] context-path,context-path... * List of one or more devices for which to change the transfer size. Wildcards are permitted. Entries must be separated by commas.
[-l|--limit] limit * Transfer size in bytes. Maximum number of bytes to transfer as one operation per device. Specifies the size of read sector designated for transfer in cache. Setting this value smaller implies more host I/O outside the transfer boundaries. Setting the value larger may result in faster transfers. Valid values must be multiples of 4K.
Range: 40K-128M.
See About transfer-size in the batch-migrate start command.
* - argument is positional.
Description
If the target devices are rebuilding when this command is issued, the rebuild is paused and
resumed using the new transfer-size.
Note: If there are queued rebuilds, the rebuild may not resume immediately.
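The documented constraints on the limit value (a multiple of 4K, in the range 40K-128M) can be expressed as a small Python check. This is an illustrative sketch of the stated rule, not VPLEX code:

```python
# Illustrative check of the documented transfer-size constraints:
# a multiple of 4K, between 40K and 128M (not VPLEX code).
K = 1024
M = 1024 * K

def valid_transfer_size(size_bytes: int) -> bool:
    # In range 40K-128M and aligned to a 4K boundary.
    return 40 * K <= size_bytes <= 128 * M and size_bytes % (4 * K) == 0
```

For example, 1M is valid, while 39K (below the range) and 42K (not a multiple of 4K) are rejected.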
Example
Set the transfer-size on a specified device to 1M:
See also
l rebuild show-transfer-size
l rebuild status
rebuild show-transfer-size
Shows the transfer-size of specified RAID 1 devices.
Contexts
All contexts.
Syntax
rebuild show-transfer-size
[-r|--devices] context-path
Arguments
Optional arguments
[-r|--devices] context-path... List of one or more RAID 1 devices for which to display the transfer size. Entries must be separated by commas. Wildcards are permitted.
Example
Display the rebuild transfer size for a specified device:
----------- -------------
dd_00 2M
dd_01 2M
dd_02 2M
.
.
.
See also
l rebuild set-transfer-size
l rebuild status
rebuild status
Displays all global and cluster-local rebuilds along with their completion status.
Contexts
All contexts.
Syntax
rebuild status
[--show-storage-volumes]
Arguments
Optional arguments
--show-storage-volumes Displays all storage volumes that need to be rebuilt, both active and queued. If not present, only the active rebuilds are displayed.
Description
Completion status is listed as:
rebuilt/total (complete%)
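As a sketch of the completion-status format above (illustrative only, not VPLEX code; the two-decimal rounding is an assumption):

```python
# Illustrative formatting of the rebuild completion status:
# rebuilt/total (complete%). Not VPLEX code; rounding is assumed.
def completion_status(rebuilt_gb: float, total_gb: float) -> str:
    percent = 100.0 * rebuilt_gb / total_gb
    return f"{rebuilt_gb}G/{total_gb}G ({percent:.2f}%)"
```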
Example
Check rebuild status from storage-volume context:
Global rebuilds:
No active global rebuilds.
cluster-1 local rebuilds:
device       rebuild type  rebuilder director  rebuilt/total  percent finished  throughput  ETA
-----------  ------------  ------------------  -------------  ----------------  ----------  ---
test3313_r1  full          s10_428f            1.23G/4G       30.81%            90.1M/s
VPlexcli:/distributed-storage/distributed-devices/testvol1/distributed-device-
components/C2testvol0000/components> rebuild status --show-storage-volumes
StorageVolumes marked for rebuild:
cluster-2:
extent_60060160639028006413c641e2a7e011_1
[1] storage_volumes marked for rebuild
Global rebuilds:
device    rebuild type  rebuilder director  rebuilt/total  percent finished  throughput  ETA
--------  ------------  ------------------  -------------  ----------------  ----------  -------
testvol1  full          s1_220d_spa         4.06G/11.2G    36.17%            9.94M/s     12.3min
Local rebuilds:
No active local rebuilds.
See also
l rebuild show-transfer-size
report aggregate-monitors
Aggregates the reports generated by the report create-monitors or monitor commands.
Contexts
All contexts.
Syntax
report aggregate-monitors
[-d|--directory] directory
Arguments
Optional arguments
[-d|--directory] directory Directory in which to create the .csv files. Default directory path: /var/log/VPlex/cli/reports/ on the management server.
Description
The reports are aggregated by cluster.
An aggregate report is generated for:
l Each cluster
report capacity-arrays
Generates a capacity report.
Contexts
All contexts.
Syntax
report capacity-arrays
[-t|--tier-regex] regular-expression
[-d|--directory] directory
Arguments
Optional arguments
[-t|--tier-regex] regular-expression Regular expression which, when applied to the storage-volume name, returns the tier ID in a group. Most expressions must be enclosed in quotes. Default: value of /system-defaults::tier-regular-expression
[-d|--directory] directory Directory in which to create the csv files. Output is written to two files:
l File for local storage: CapacityArraysLocal.csv.
l File for shared storage: CapacityArraysShared.csv.
Default directory path: /var/log/VPlex/cli/reports/ on the management server.
Description
Generates a capacity report for all the storage in a VPLEX, grouped by storage arrays.
This command assumes the following:
l All storage volumes in a storage array have the same tier value.
l The tier is indicated in the storage volume name. The tier attribute in the virtual volumes
context is ignored.
If a file is specified, output is formatted as:
If the files already exist, the report is appended to the end of the files.
Note: Tier IDs are required to determine the tier of a storage volume/storage array. Storage
volumes that do not contain any of the specified IDs are given the tier value no-tier.
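As an illustration of how a tier regular expression is applied to a storage-volume name (the volume naming scheme and the pattern below are hypothetical examples, not a VPLEX default):

```python
import re

# Illustrative sketch: extract a tier ID from a storage-volume name
# using a regular expression with one capturing group. The naming
# scheme and pattern here are hypothetical, not a VPLEX default.
def tier_of(volume_name: str, tier_regex: str) -> str:
    m = re.search(tier_regex, volume_name)
    # Volumes whose names contain no tier ID get the value "no-tier".
    return m.group(1) if m else "no-tier"
```

For example, with the hypothetical pattern "_T(\d)_", a volume named Symm1254_T1_0071 would report tier "1", while Symm1254_0071 would report "no-tier".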
The report is separated into two parts: local storage and shared storage.
l Local storage is accessible only from the same cluster where the storage is physically located.
Information in the report for local storage includes:
n Cluster ID
n Storage array
n Tier - the tier of the storage array
n Allocated - storage that is visible through a view (exported)
n Unallocated-device - storage that is in devices, but not visible from a view. For example, a
virtual volume that has not been exported or free space in a device that is not part of a
virtual volume.
n Unallocated-storage-volume - storage in unused storage volumes.
l Shared storage is accessible from clusters other than where it is physically located (distributed
and remote virtual volumes). Information in the report for shared storage includes:
n allocated - storage that is visible through a view (exported)
n unallocated-device - storage that is in devices, but not visible from a view. For example, a
virtual volume that has not been exported or free space in a device that is not part of a
virtual volume.
Example
VPlexcli:/> exit
Connection closed by foreign host.
service@ManagementServer:~>
Navigate to the VPLEX CLI reports directory (or the specified output directory):
service@ManagementServer:~> cd /var/log/VPlex/cli/reports
service@ManagementServer:/var/log/VPlex/cli/reports> ll
total 48
-rw-r--r-- 1 service users 2253 2010-08-12 15:46 CapacityArraysLocal.csv
-rw-r--r-- 1 service users 169 2010-08-12 15:46 CapacityArraysShared.csv
.
.
.
service@ManagementServer:/var/log/VPlex/cli/reports> cat
CapacityArraysLocal.csv
Time, Cluster name, Array name, Tier string, Allocated volumes (GiB), Unalloc
devices (GiB), Unalloc storage_volumes (GiB)
2010-06-21 16:00:32, cluster-1, EMC-0x00000000192601378, no-tier, 0, 0,
5666242560000
2010-06-21 16:00:32, cluster-1, EMC-0x00000000192601852, no-tier, 0, 0,
5292530073600
See also
l report capacity-clusters
l report capacity-hosts
report capacity-clusters
Generates a capacity report for every cluster.
Contexts
All contexts.
Syntax
report capacity-clusters
[-d|--directory] directory
[--verbose]
Arguments
Optional arguments
[-d|--directory] directory Directory in which to create the csv files. Output is written to a file named CapacityClusters.csv. Default directory path: /var/log/VPlex/cli/reports/ on the management server.
Description
The capacity report information includes:
l Unclaimed storage-volume capacity in GB.
l Number of unclaimed storage volumes.
l Claimed storage-volume capacity in GB.
l Number of claimed storage volumes.
l Used storage-volume capacity in GB.
l Number of used storage volumes.
l Unexported virtual volume capacity in GB.
l Number of unexported virtual volumes.
l Exported virtual volume capacity in GB.
l Number of exported virtual volumes.
Examples
See also
l report capacity-arrays
l report capacity-hosts
report capacity-hosts
Generates a host capacity report.
Contexts
All contexts.
Syntax
report capacity-hosts
[-d|--directory] directory
[--verbose]
Arguments
Optional arguments
[-d|--directory] directory Directory in which to create the csv files. Output is written to a file named CapacityHosts.csv. Default directory path: /var/log/VPlex/cli/reports/ on the management server.
Description
The host capacity information includes:
l Number of views.
l Total exported capacity in GB.
l Number of exported virtual volumes per cluster.
Example
Generate a host capacity report.
See also
l report capacity-clusters
l report capacity-arrays
report create-monitors
Creates three performance monitors for each director in the VPLEX: storage-volume performance,
port performance, and virtual volume performance. Each monitor has one file sink.
Contexts
All contexts.
Syntax
report create-monitors
[-d|--directory] directory
[--force]
Arguments
Optional arguments
[-d|--directory] directory Directory in which to create the csv files. Default directory path: /var/log/VPlex/cli/reports/ on the management server.
--force Forces the creation of the monitors, even if existing monitors are delayed in their polling.
Description
Creates three monitors for each director in the VPLEX. Monitors are named:
l Cluster_n_Dir_nn_diskReportMonitor
l Cluster_n_Dir_nn_portReportMonitor
l Cluster_n_Dir_nn_volumeReportMonitor
The period attribute for the new monitors is set to 0 (automatic polling is disabled). Use the
report poll-monitors command to force a poll.
Each monitor has one file sink. The file sinks are enabled.
By default, output files are located in /var/log/VPlex/cli/reports/ on the management
server. Output filenames are in the following format:
Monitor-name_Cluster_n_Dir_nn.csv
Disk report monitors collect:
l storage-volume.per-storage-volume-read-latency
l storage-volume.per-storage-volume-write-latency.
Port report monitors collect:
l be-prt.read
l be-prt.write
l fe-prt.ops
l fe-prt.read
l fe-prt.write
Volume report monitors collect:
l virtual-volume.ops
l virtual-volume.read
l virtual-volume.write
Examples
In the following example:
l The report create-monitors command creates a diskReportMonitor, portReportMonitor,
and volumeReportMonitor for each director
l The ll /monitoring/directors/*/monitors command displays the new monitors:
6.7min - - - 64
.
.
.
In the following example, the --force argument forces the creation of monitors, even though the
creation results in missed polling periods:
See also
l monitor add-file-sink
l monitor create
l monitor destroy
l monitor remove-sink
l report poll-monitors
report poll-monitors
Polls the report monitors created by the report create-monitors command.
Contexts
All contexts.
Syntax
report poll-monitors
Description
The monitors created by the report create-monitors command have their period attribute
set to 0 seconds (automatic polling is disabled) and one file sink.
Use this command to force an immediate poll and collection of performance data for monitors
created by the report create-monitors command.
Output is written to files located in /var/log/VPlex/cli/reports/ on the management
server.
Example
See also
l monitor collect
l report create-monitors
rm
Deletes a file from the corresponding share location.
Contexts
This command can only be executed in the in or out sub-contexts within the share context of the management server (either /management-server/share/in or /management-server/share/out).
Syntax
rm -n|--filename filename [-h | --help] [--verbose]
Arguments
Optional arguments
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
The rm command is used to delete a file from an SCP directory.
As part of the Role-based access implementation, users other than service are not allowed shell access, and access by SCP is restricted to a single directory. The SCP directory, /diag/share/, consists of two sub-directories, in and out, which contain only files that can be transferred by SCP to and from the management server, respectively.
management-server/share/in and management-server/share/out are contexts corresponding to the in and out sub-directories of the SCP directory. Users without shell access use the ls and rm commands to manage files transferred to and from the management server with SCP.
The service and admin users are authorized to delete any existing file in the SCP sub-directories. Other users are only authorized to delete files to which they have access.
See also
l user add
rp import-certificate
Imports a RecoverPoint security certificate from the specified RPA cluster.
Contexts
All contexts.
Syntax
rp import-certificate
Arguments
None.
Description
This command runs an interview script to import the RecoverPoint security certificate.
In Metro systems, run this command on both management servers.
CAUTION This command restarts VPLEX CLI and GUI sessions on the VPLEX cluster to which the RPA cluster is attached. With this release, using this command will lead to the loss of any VPLEX Integrated Array Services (VIAS) provisioning jobs that have been created.
Before you begin, you will need the IP address of the RecoverPoint cluster from which to import
the security certificate.
Example
Import the RecoverPoint security certificate from the RPA cluster at IP address 10.6.210.85:
VPlexcli:/> rp import-certificate
This command will cause the VPLEX CLI process to restart if security settings
are modified. This will require a new log in from all connected CLI and GUI
clients.
To proceed type CONTINUE or hit enter to abort: CONTINUE
Please enter the IP v4 address of the RP cluster: 10.6.210.85
-----Certificate Details-----
Owner: CN=RecoverPoint, OU=Unified Storage Division, O=EMC Corporation,
L=Ramat-Gan, ST=Israel, C=IL
Issuer: CN=RecoverPoint, OU=Unified Storage Division, O=EMC Corporation,
L=Ramat-Gan, ST=Israel, C=IL
Serial number: 4d907d4c
Valid from: Mon Mar 28 12:21:32 UTC 2011 until: Thu Mar 25 12:21:32 UTC 2021
Certificate fingerprints:
MD5: CF:38:C3:55:A9:99:AC:A6:79:12:7C:83:C3:95:23:CB
SHA1: 4D:D6:29:30:ED:0A:77:6D:38:4E:10:D3:2E:37:29:CB:45:DC:9E:C0
Signature algorithm name: SHA1withRSA
Version: 3
Trust this certificate? (Y/N): Y
The management server console process will now restart, please press any key
when you are ready. Please wait a minute before reconnecting.
Press '<Enter>' to continue ...
Stopping EMC VPlex Management Console: Connection closed by foreign host.
service@sms-advil-2:/opt/emc/VPlex/tools/utils>
See also
l rp summary
l rp validate-configuration
rp rpa-cluster add
Associates a cluster of RecoverPoint Appliances with a single VPLEX cluster.
Contexts
All contexts.
Syntax
rp rpa-cluster add
[-o|--host] rpa-management-address
[-u|--admin-username] admin-username
[-c|--cluster] cluster-id
Arguments
Required arguments
[-o|--host] rpa-management-address * The RPA cluster management IP address.
[-u|--admin-username] admin-username * The administrative username of the RPA.
Optional arguments
[-c|--cluster] cluster-id Context path of the VPLEX cluster associated with this cluster of RPAs. If no VPLEX cluster is specified, the ID of the local cluster is used. The local cluster is the cluster whose cluster-id matches the management server’s IP seed. See “About cluster IP seed and cluster ID” in the security ipsec-configure command.
* argument is positional.
Description
Adds information about a RecoverPoint Appliance cluster to VPLEX. Used by VPLEX to connect to
RecoverPoint and retrieve replication information.
In Metro systems, run this command on both management servers.
Note: This command prompts for the RPA administrative password.
Configuration of RPAs is not permitted during VPLEX NDU.
After the RPA cluster is added, information about the RPA cluster and its consistency groups and
volumes appear in the following VPLEX CLI contexts and commands:
l /recoverpoint/rpa-clusters/ip_address/volumes
l /clusters/cluster_name/consistency-groups/cg_name/recoverpoint
l rp summary command
l rp validate-configuration command
Field Description
MajorVersion.MinorVersion.ServicePack.Patch (branch.build)
In /recoverpoint/rpa-clusters/ip-address/consistency-groups context
In /recoverpoint/rpa-clusters/ip-address/volumes context
In /recoverpoint/rpa-clusters/ip-address/volumes/volume context
Examples
Add a RecoverPoint RPA cluster:
l ll /recoverpoint/rpa-clusters/ip-address/consistency-groups/cg-name displays detailed information about the specified consistency group.
l ll /recoverpoint/rpa-clusters/ip-address/volumes/ displays volumes managed by the RPA.
l ls /recoverpoint/rpa-clusters/ip-address/volumes/volume-name displays detailed information about the specified volume.
VPlexcli:/> ll /recoverpoint/rpa-clusters
/recoverpoint/rpa-clusters:
RPA Host VPLEX Cluster RPA Site RPA ID RPA Version
----------- ------------- --------- ------ -----------
10.6.210.87 cluster-1 Tylenol-1 RPA 1 4.1(d.147)
10.6.211.3 cluster-2 Tylenol-2 RPA 1 4.1(d.147)
VPlexcli:/> ll recoverpoint/rpa-clusters/10.6.210.87/
/recoverpoint/rpa-clusters/10.6.210.87:
Attributes:
Name Value
----------------------
-------------------------------------------------------
admin-username admin
config-changes-allowed true
rp-health-indications [Problem detected with RecoverPoint RPAs and
splitters]
rp-health-status warning
rp-software-serial-id -
rpa-host 10.6.210.87
rpa-id RPA 1
rpa-site Tylenol-1
rpa-version 4.1(d.147)
vplex-cluster cluster-1
Contexts:
Name Description
------------------
-----------------------------------------------------------
consistency-groups Contains all the RecoverPoint consistency groups which consist of copies local to this VPLEX cluster.
volumes Contains all the distributed virtual volumes with a local extent and the local virtual volumes which are used by this RPA cluster for RecoverPoint repository and journal volumes and replication volumes.
VPlexcli:/> cd recoverpoint/rpa-clusters/10.6.210.87/consistency-groups
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups> ll
Name
----
CG-1
CG-2
CG-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups> ll
CG-1
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1:
Attributes:
Name Value
----------------------------- ----------------------
active-replicating-rp-cluster Tylenol-1
distributed-group false
enabled true
preferred-cluster Tylenol-1
preferred-primary-rpa RPA2
production-copy Pro-1
protection-type MetroPoint Replication
uid 7a59f870
Contexts:
Name Description
----------------
-------------------------------------------------------------
copies Contains the production copy and the replica copies of which this RecoverPoint consistency group consists.
links Contains all the communication pipes used by RecoverPoint to replicate consistency group data between the production copy and the replica copies.
replication-sets Contains all the replication sets of which this RecoverPoint consistency group consists.
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1>
ll copies
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/copies:
Name
-----
Pro-1
Pro-2
REP-1
REP-2
REP-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1>
ll links
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/links:
Name
------------
Pro-1->REP-1
Pro-1->REP-3
Pro-2->REP-2
Pro-2->REP-3
VPlexcli:/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1>
ll replication-sets
/recoverpoint/rpa-clusters/10.6.210.87/consistency-groups/CG-1/
replication-sets:
Name
-----
RSet0
RSet1
RSet2
RSet3
VPlexcli:/> ll recoverpoint/rpa-clusters/10.6.210.87/volumes/
ty_dr1_pro_vol
/recoverpoint/rpa-clusters/10.6.210.87/volumes/ty_dr1_pro_vol:
Name Value
------------------------- --------------------------------
rp-consistency-group CG-1
rp-consistency-group-copy Pro-1
rp-replication-set RSet0
rp-role Production Source
rp-type Replication
rpa-site Tylenol-1
size 5G
uid 6000144000000010f03de8cb2f4c66d9
vplex-cluster cluster-1
vplex-consistency-group cg1_pro
See also
l rp rpa-cluster remove
l rp summary
l rp validate-configuration
rp rpa-cluster remove
Removes information about a RecoverPoint Appliance from VPLEX.
Contexts
All contexts.
Syntax
rp rpa-cluster remove
[-r|--rpa] IP-address
Arguments
Required arguments
[-r|--rpa-cluster] IP-address The site management IP address of the RPA cluster to
remove.
Description
Removes information about an RPA cluster from VPLEX.
Removes the following commands and contexts from the VPLEX CLI:
l /recoverpoint/rpa-clusters/ip_address/volumes
l /clusters/cluster_name/consistency-groups/cg_name/recoverpoint
l rp summary command
l rp validate-configuration command
Use the ll command in /recoverpoint/rpa-clusters context to display the site
management IP address.
Example
Remove an RPA:
VPlexcli:/> ll /recoverpoint/rpa-clusters
/recoverpoint/rpa-clusters:
See also
l rp rpa-cluster add
l rp summary
rp summary
Displays a summary of replication for the entire VPLEX cluster, across all connected RPA sites/
clusters.
Contexts
All contexts.
In /recoverpoint/ context, command is summary.
Syntax
rp summary
Description
This command calculates the total number of volumes and the total capacity for each RP type and
RP role it finds in the /recoverpoint/rpa-clusters context.
Also prints cumulative information.
RecoverPoint MetroPoint summary information is included in the totals.
Note: Distributed volumes used as production source volumes in MetroPoint replication will
have their capacity added to both cluster 1 and cluster 2 totals. Adding the total replicated
capacity of both clusters together produces a number that is larger than the actual total
replicated capacity of the VPLEX Metro system, as those particular volumes are counted
twice.
A summary of the number of production source volumes for MetroPoint groups is displayed
after the individual cluster summary, showing the shared capacity of those volumes.
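The double counting described in the note can be expressed as simple arithmetic: the actual replicated capacity of the Metro system is the sum of the per-cluster totals minus the shared MetroPoint production capacity, which was counted once per cluster. An illustrative sketch, not VPLEX code:

```python
# Illustrative arithmetic for the MetroPoint note above: distributed
# production source volumes are counted in both cluster totals, so
# the actual replicated capacity subtracts the shared capacity once.
# Not VPLEX code.
def actual_total_capacity_gb(cluster1_gb, cluster2_gb, shared_metropoint_gb):
    return cluster1_gb + cluster2_gb - shared_metropoint_gb
```

Using the figures from the example below (90G for cluster-1, 45G for cluster-2, 10G of shared MetroPoint production capacity), the actual total is 125G rather than 135G.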
Field Description
Example
Display a VPLEX Metro with RecoverPoint RPAs deployed at both VPLEX clusters:
VPlexcli:/> rp summary
RecoverPoint Replication Totals:
VPLEX Cluster RP Type RP Role Total Volumes Total Capacity
------------- ----------- ----------------- ------------- --------------
cluster-1 Replication Production Source 3 15G
Local Replica 1 5G
Remote Replica 3 15G
Journal 5 50G
Repository 1 5G
------------- --------------
Totals: 13 90G
VPLEX Cluster RP Type RP Role Total Volumes Total Capacity
------------- ----------- ----------------- ------------- --------------
cluster-2 Replication Production Source 2 10G
Local Replica 2 10G
Remote Replica 0 0G
Journal 2 20G
Repository 1 5G
------------- --------------
Totals: 7 45G
RecoverPoint MetroPoint Replication summary:
1 MetroPoint group(s) are configured with 2 Production Source volumes using a
total capacity of 10G.
Distributed Volumes used for MetroPoint replication will be counted in the
capacity of each cluster above.
See also
l rp rpa-cluster add
l rp rpa-cluster remove
l rp validate-configuration
rp validate-configuration
Validates the RecoverPoint splitter configuration.
Contexts
All contexts.
Syntax
rp validate-configuration
Description
This command checks the system configuration with respect to RecoverPoint and displays errors
or warnings if errors are detected.
For VPLEX Metro configurations, run this command on both management servers.
CAUTION When RPAs are zoned to VPLEX using single-channel mode (2 RPA ports are zoned
to VPLEX front end ports, and 2 RPA ports are zoned to the VPLEX back end ports) this
command reports the ports as “WARNING”. This is because the command checks that all 4
ports on the RPA are zoned to both VPLEX front end and back end ports (dual-channel mode).
See the second example listed below.
Best practice is to zone every RPA port to both VPLEX front end and back end ports. For configurations where this is not possible or desirable, this command detects the port configuration and displays a warning. Administrators who have purposely configured single-channel mode can safely ignore the warning.
This command performs the following checks:
l Splitter checks:
n VPLEX splitters are installed.
n All splitter versions agree.
n The VPLEX splitter status is OK.
l RecoverPoint cluster checks:
n VPLEX management server can reach all the attached RecoverPoint clusters
l Storage view checks:
n Storage views do not have mixed RecoverPoint and non-RecoverPoint initiator ports.
n RecoverPoint storage views have access to multiple ports.
n No volume is exposed to more than one RecoverPoint cluster.
n No RecoverPoint journal or repository volumes are exposed to hosts.
l Initiator port checks:
n All the RecoverPoint initiator ports are registered.
n All the registered RecoverPoint initiator ports are used.
l RP Cluster:
n VPLEX management server can reach all the attached RecoverPoint Clusters.
l Consistency group checks:
n VPLEX consistency groups are aligned with RecoverPoint consistency groups.
l Volumes checks:
n No production volume is a remote volume.
n All distributed production volumes have detach rule set correctly.
n All distributed production volumes have cache mode set correctly.
n All production and replica volumes are in RecoverPoint-enabled VPLEX consistency groups.
n No replica volume is a remote volume.
n All distributed replica volumes have detach rule set correctly.
n All distributed replica volumes have cache mode set correctly.
n All journal and repository volumes are local volumes.
n No repository volume is in any VPLEX consistency group.
Example
Check a healthy RecoverPoint configuration:
VPlexcli:/> rp validate-configuration
This command may take several minutes to complete. Please be patient.
==============================================================================
Validate the VPLEX Splitters
==============================================================================
Validating that VPLEX Splitters are installed                               OK
VPlexcli:/> rp validate-configuration
This command may take several minutes to complete. Please be patient.
.
.
.
=======================================================================
Validate the VPLEX to RPA zoning
=======================================================================
Validating that VPLEX sees all expected initiator ports from RPAs      WARNING
VPLEX does not see RPA initiator port: 0x500124804dc50283
VPLEX does not see RPA initiator port: 0x500124824dc50283
VPLEX does not see RPA initiator port: 0x500124804a00021b
VPLEX does not see RPA initiator port: 0x500124824a00021b
Validating that VPLEX sees all expected backend RPA ports              WARNING
VPLEX does not see RPA back end port: 0x500124814dc50283
VPLEX does not see RPA back end port: 0x500124834dc50283
VPLEX does not see RPA back end port: 0x500124814a00021b
VPLEX does not see RPA back end port: 0x500124834a00021b
=======================================
Validation Summary
=======================================
The following potential problems were found in the system:
1 problem(s) were found with RecoverPoint Clusters.
8 potential problem(s) were found with the zoning between VPLEX and
RecoverPoint.
VPlexcli:/> rp validate-configuration
.
.
.
Validating that storage views do not have mixed non-recoverpoint and
recoverpoint initiator ports                                             ERROR
Storage view rp-view-demo has mixed types of initiator ports.
.
.
.
==============================================================================
Validation Summary
==============================================================================
The following potential problems were found in the system:
1 problem(s) were found with storage views.
See also
l rp rpa-cluster add
l rp rpa-cluster remove
l rp summary
schedule add
Schedules a job to run at the specified times.
Contexts
All contexts.
Syntax
schedule add
[-t|--time] time
[-c|--command] command
Arguments
Required arguments
[-t|--time] time * Date and time the job executes, in crontab-style format enclosed in quotation marks: “Minute Hour Day-of-the-Month Month Day-of-the-week”
l Minute - 0-59.
l Hour - 0-23.
l Day of the Month - 1-31.
l Month - 1-12, January = 1...December = 12
l Day of the week - 0-6, Sunday = 0...Saturday = 6
* - argument is positional.
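The five-field crontab-style time string can be sanity-checked as in the following Python sketch (illustrative only; it accepts only plain numbers and “*”, not full crontab syntax such as ranges or step values):

```python
# Illustrative validation of the crontab-style time string described
# above: "Minute Hour Day-of-month Month Day-of-week". Sketch only;
# accepts plain numbers and "*", not full crontab syntax.
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def valid_schedule_time(spec: str) -> bool:
    fields = spec.split()
    if len(fields) != len(FIELD_RANGES):
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        if field == "*":
            continue  # wildcard: any value in this field
        if not field.isdigit() or not lo <= int(field) <= hi:
            return False
    return True
```

For example, a spec of "0 1 * * *" (minute 0, hour 1, every day) corresponds to every night at 1:00 a.m.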
Examples
To run the tree command every night at 1:00 a.m.:
See also
l schedule list
l schedule modify
l schedule remove
schedule list
Lists all scheduled jobs.
Contexts
All contexts.
Syntax
schedule list
Examples
See also
l schedule modify
l schedule remove
schedule modify
Modifies an existing scheduled job.
Contexts
All contexts.
Syntax
schedule modify
[-j|--job] job-ID
[-t|--time] time
[-c|--command] command
Arguments
Required arguments
[-j|--job] job-ID * ID of the scheduled job as displayed by the schedule list command.
[-t|--time] time * Date and time the job executes in crontab-style format enclosed in quotation marks: “Minute Hour Day-of-the-Month Month Day-of-the-week”
l Minute - 0-59.
l Hour - 0-23.
l Day of the Month - 1-31.
l Month - 1-12, January = 1...December = 12
l Day of the week - 0-6, Sunday = 0...Saturday = 6
* - argument is positional.
Examples
To modify a job with the ID of 3 so that it runs every day at 11:00 a.m. type:
See also
l schedule list
l schedule remove
schedule remove
Removes a scheduled job.
Contexts
All contexts.
Syntax
schedule remove
[-j|--job] job-ID
Arguments
Required arguments
[-j|--job] job-ID * ID of the scheduled job as displayed by the schedule list command.
* - argument is positional.
Example
Remove job with the ID of 3:
See also
l schedule list
l schedule modify
scheduleSYR add
Schedules a weekly SYR data collection.
Contexts
All contexts.
Syntax
scheduleSYR add
[-d|--dayOfWeek] [0-6]
[-t|--hours] [0-23]
[-m|--minutes] [0-59]
Arguments
Required arguments
[-d|--dayOfWeek] [0-6] Day of the week on which to run the collection.
Valid values are 0-6, where Sunday = 0...Saturday = 6.
[-t|--hours] [0-23] Hour of the day at which to run the collection. Valid values are 0-23.
[-m|--minutes] [0-59] Minute of the hour at which to run the collection. Valid values are 0-59.
Description
Typically, SYR collection and reporting are configured at initial system setup. Use this command to
add a scheduled SYR collection time if none was configured.
SYR data collection can be scheduled to occur at most once a week. Attempts to add another
weekly schedule results in an error.
SYR reporting gathers VPLEX configuration files and forwards them to Dell EMC. SYR reports
provide:
l Faster problem resolution and RCA
l Proactive maintenance
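The day/hours/minutes arguments correspond to the weekly fields of the crontab-style format used by schedule add. A minimal sketch of that mapping (a hypothetical helper, not part of the CLI):

```python
def syr_to_crontab(day_of_week, hours, minutes):
    """Map scheduleSYR add arguments (-d, -t, -m) onto the five
    crontab-style fields: a weekly job runs at a fixed minute and
    hour on one day of the week."""
    if not 0 <= day_of_week <= 6:
        raise ValueError("day of week must be 0-6 (Sunday = 0)")
    if not 0 <= hours <= 23 or not 0 <= minutes <= 59:
        raise ValueError("hours must be 0-23 and minutes 0-59")
    return f"{minutes} {hours} * * {day_of_week}"

# Collection every Sunday at 02:30
print(syr_to_crontab(0, 2, 30))   # "30 2 * * 0"
```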
See also
l configuration event-notices-reports config
l configuration event-notices-reports reset
l schedule list
l scheduleSYR list
l scheduleSYR remove
l syrcollect
scheduleSYR list
Lists the scheduled SYR data collection job.
Contexts
All contexts.
Syntax
scheduleSYR list
Example
List the SYR collection schedule:
See also
l configuration event-notices-reports config
l configuration event-notices-reports reset
l scheduleSYR add
l scheduleSYR remove
scheduleSYR remove
Removes the currently scheduled SYR data collection job.
Contexts
All contexts.
Syntax
scheduleSYR remove
Description
Only one SYR data collection can be scheduled. The current SYR collection cannot be modified. To
modify the SYR data collection job:
l Use the scheduleSYR remove command to remove the existing collection job.
l Use the scheduleSYR add command to create a new collection job.
Example
Remove a scheduled collection:
See also
l configuration event-notices-reports config
l configuration event-notices-reports reset
l scheduleSYR add
l scheduleSYR list
script
Changes to interactive Jython scripting mode.
Contexts
All contexts.
Syntax
script
[-i|--import] module
[-u|--unimport] module
Arguments
Optional arguments
[-i|--import] module Import the specified Jython module without changing to interactive
mode. After importation, commands registered by the module are
available in the CLI. If the module is already imported, it is explicitly
reloaded.
[-u|--unimport] module Unimport the specified Jython module without changing to
interactive mode. All the commands that were registered by that
module are unregistered.
Description
Changes the command mode from VPLEX CLI to Jython interactive mode.
To return to the normal CLI shell, type a period '.' and press ENTER.
Use the --import and --unimport arguments to import or unimport the specified Jython
module without changing to interactive mode.
Example
Enter Jython interactive mode:
VPlexcli:/> script
Jython 2.2 on java1.6.0_03
>>>
>>> .
VPlexcli:/>
See also
l source
security configure-mmcs-users
Configures MMCS user accounts to synchronize service user password to the peer MMCS.
Contexts
All contexts.
Syntax
security configure-mmcs-users [-h | --help] [--verbose]
Arguments
Optional arguments
[-h|--help] Displays the usage for this command.
[--verbose] Provides more output during command execution. This may not have any
effect for some commands.
Description
Synchronizes the service account credentials between both MMCSs of a V6 cluster. Running this
command syncs the service account password to the peer MMCS.
security create-ca-certificate
Creates a new Certification Authority (CA) certificate.
Contexts
All contexts.
Syntax
security create-ca-certificate
[-l|--keylength] length
[-d|--days] days
[-o|--ca-cert-outfilename] filename
[-f|--ca-key-outfilename] filename
[-s|--ca-subject-filename] filename
Arguments
Optional arguments
[-l|--keylength] length The length (number of bits) for the CA key. Default: 2048.
Range: 384 - 2048
[-d|--days] days Number of days that the certificate is valid. Default: 1825
(5 years). Range: 365 - 1825.
[-o|--ca-cert-outfilename] filename CA Certificate output filename. Default:
strongswanCert.pem.
[-f|--ca-key-outfilename] filename CA Key output filename. Default: strongswanKey.pem.
[-s|--ca-subject-filename] filename Name of the CA subject information file that contains the
subject information to create the CA certificate.
Description
A management server authenticates users against account information kept on its local file system.
An authenticated user can manage resources in the clusters.
The system uses the Certification Authority (CA) to sign management server certificates.
The security create-ca-certificate and security create-host-certificate
commands create the CA and host certificates using a pre-configured Distinguished Name where
the Common Name is the VPLEX cluster Top Level Administrator (TLA). If the TLA is not already
set, it must be set manually to prevent certificate creation failure.
Alternatively, use the --ca-subject-filename argument to create a custom Distinguished
Name. Specify the full path of the subject file unless the subject file is in the local CLI directory.
This command creates two objects on the management server:
l A CA certificate file valid for 1825 days (5 years). The file is located at:
/etc/ipsec.d/cacerts/strongswanCert.pem
l A private key protected by a passphrase. The CA key is located at:
/etc/ipsec.d/private/strongswanKey.pem
Note: The passphrase used during the VPN configuration can contain letters, numerals, and
special characters.
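Outside of VPLEX, the same kind of self-signed CA certificate can be illustrated with OpenSSL. This is only a sketch of what the command produces (a 2048-bit key and a certificate valid for 1825 days, using the documented default filenames), not the VPLEX implementation; the subject values are sample data, and -nodes leaves the key unencrypted whereas VPLEX protects the CA key with a passphrase:

```shell
# Create a 2048-bit CA key and a self-signed CA certificate valid for
# 1825 days (the documented defaults), with a sample subject.
openssl req -x509 -newkey rsa:2048 -days 1825 -nodes \
    -keyout strongswanKey.pem -out strongswanCert.pem \
    -subj "/C=US/ST=Massachusetts/L=Hopkinton/O=Example/CN=ExampleCA"

# Inspect the resulting certificate's subject and validity period.
openssl x509 -in strongswanCert.pem -noout -subject -dates
```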
Examples
Create a default CA certificate (strongswanCert.pem) with custom CA certificate subject
information. In the following example:
l The security create-certificate-subject command creates a custom subject file
named TestSubject.txt.
l The security create-ca-certificate command creates a default CA certificate with
the specified custom subject file.
See also
l security create-certificate-subject
l security create-host-certificate
l security delete-ca-certificate
l security delete-host-certificate
l security export-ca-certificate
l security export-host-certificate
l security import-ca-certificate
l security import-host-certificate
l security ipsec-configure
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
l Renew security certificate procedures in the VPLEX SolVe Desktop
security create-certificate-subject
Creates a subject file used in creating security certificates.
Contexts
All contexts.
Syntax
security create-certificate-subject
[-c|--country] country
[-s|--state] state
[-m|--org-name] name
[-u|--org-unit] unit
[-l|--locality] locality
[-n|--common-name] name
[-e|--email] e-mail
[-o|--subject-out-filename] filename
[--force]
Arguments
Required arguments
[-o|--subject-out-filename] filename The filename of the subject file.
Optional arguments
[-c|--country] country The Country value for the country key in the subject file.
[-s|--state] state The State value for the state key in the subject file.
[-m|--org-name] name Organizational Name value for the organizational name key in
the subject file.
[-u|--org-unit] unit Organizational Unit value for the organizational unit key in the
subject file.
[-l|--locality] locality Locality value for the locality key in the subject file.
[-n|--common-name] name Name value for the name key in the subject file.
[-e|--email] e-mail E-mail value for the e-mail key in the subject file.
--force Overwrites the specified subject-out-filename if a file of that
name already exists. If a file with the subject-out-filename
already exists and the --force argument is not specified,
the command fails.
Description
Creates a subject file used in certificate creation.
Examples
Create a default certificate subject file:
The command creates a certificate subject file TestSubject.txt in the /var/log/
VPlex/cli directory with the following information:
l SUBJECT_COUNTRY=US
l SUBJECT_STATE=Massachusetts
l SUBJECT_LOCALITY=Hopkinton
l SUBJECT_ORG=Dell EMC
l SUBJECT_ORG_UNIT=Dell EMC
l SUBJECT_COMMON_NAME=FNM00102200421
l SUBJECT_EMAIL=support@emc.com
Create a custom certificate subject file:
The command creates the certificate subject file TestSubject.txt in the /var/log/
VPlex/cli directory with the following information:
l SUBJECT_COUNTRY=US
l SUBJECT_STATE=NewYork
l SUBJECT_LOCALITY=NewYork
l SUBJECT_ORG=EMC
l SUBJECT_ORG_UNIT=EMC
l SUBJECT_COMMON_NAME=CommonTestName
l SUBJECT_EMAIL=test@emc.com
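The subject file shown above is a simple KEY=value listing. A minimal sketch of reading it back (a hypothetical helper, not part of the CLI; the sample values come from the default subject file above):

```python
def read_subject_file(text):
    """Parse a certificate subject file of SUBJECT_KEY=value lines
    (the format produced by security create-certificate-subject)
    into a dict."""
    subject = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue                      # skip blanks and malformed lines
        key, _, value = line.partition("=")
        subject[key] = value
    return subject

sample = """\
SUBJECT_COUNTRY=US
SUBJECT_STATE=Massachusetts
SUBJECT_LOCALITY=Hopkinton
SUBJECT_ORG=Dell EMC
SUBJECT_COMMON_NAME=FNM00102200421
"""
print(read_subject_file(sample)["SUBJECT_COMMON_NAME"])  # FNM00102200421
```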
See also
l security create-certificate-subject
l security create-host-certificate
l security export-ca-certificate
l security export-host-certificate
l security import-ca-certificate
l security import-host-certificate
l security ipsec-configure
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
security create-host-certificate
Creates a new host certificate and signs it with an existing CA certificate.
Contexts
All contexts.
Syntax
security create-host-certificate
[-l|--keylength] length
[-d|--days] days
[-o|--host-cert-outfilename] filename
[-f|--host-key-outfilename] filename
[-r|--ca-subject-filename] filename
[-c|--ca-cert-filename] ca-certificate
[-k|--ca-key-filename] ca-key
[-s|--host-subject-filename] filename
[-g|--get-master-ca]
Arguments
Optional arguments
[-l|--keylength] length The length (number of bits) for the CA key.
Default: 2048. Range: 384 - 2048.
[-s|--host-subject-filename] filename File that contains the subject information to create the host
certificate.
[-g|--get-master-ca] Pulls the master CA to the requesting cluster and creates the
digital certificate. Copies the updated serial file back to the
master server so that the master server has a serial number
that is up to date with the number of digital certificates that
the CA created. Enables strict browsers (Firefox) to connect
to different clusters from the same browser.
Description
Generates a host certificate request and signs it with the Certification Authority certificate
created by the security create-ca-certificate command.
The CA Certificate and CA Key must be created prior to running this command.
The host certificate is stored at /etc/ipsec.d/certs.
The host key is stored at /etc/ipsec.d/private.
The host certificate request is stored at /etc/ipsec.d/reqs.
The CA certificate file is read from /etc/ipsec.d/cacerts.
The CA Key is read from /etc/ipsec.d/private.
Note: The passphrase used during the VPN configuration can contain letters, numbers, and
special characters.
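The request-and-sign flow this command performs can be illustrated with OpenSSL. This is a self-contained sketch, not the VPLEX implementation: the filenames mirror the documented defaults, the subjects are sample data, and -nodes leaves the keys unencrypted whereas VPLEX protects them with passphrases:

```shell
# A CA key and self-signed CA certificate (normally created first by
# security create-ca-certificate).
openssl req -x509 -newkey rsa:2048 -days 1825 -nodes \
    -keyout strongswanKey.pem -out strongswanCert.pem -subj "/CN=ExampleCA"

# A host key and host certificate request (analogous to the request
# stored in /etc/ipsec.d/reqs).
openssl req -newkey rsa:2048 -nodes \
    -keyout hostKey.pem -out hostCertReq.pem -subj "/CN=ExampleHost"

# Sign the request with the CA certificate and key.
openssl x509 -req -in hostCertReq.pem -days 730 \
    -CA strongswanCert.pem -CAkey strongswanKey.pem -CAcreateserial \
    -out hostCert.pem

# Verify that the host certificate chains to the CA.
openssl verify -CAfile strongswanCert.pem hostCert.pem
```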
Examples
Create a default host certificate with the default host certificate subject information:
security configure-certificates
Creates self-signed certificates or imports new vendor-signed certificates.
Contexts
All contexts
Syntax
security configure-certificates
[-p | --web-cacert-filepath=] web-cacert-filepath
[-k | --web-host-key-filepath=] web-host-key-filepath
[-w | --web-host-cert-filepath=] web-host-cert-filepath
[-n | --vpn-cacert-filepath=] vpn-cacert-filepath
[-m | --vpn-host-key-filepath=] vpn-host-key-filepath
[-c | --vpn-host-cert-filepath=] vpn-host-cert-filepath
--verbose
Arguments
Description
A management server authenticates an external client entity based on the Certificate Authority it
trusts. The trust store is used for Web/REST clients over HTTPS connections, and an SSL
database is used for inter-site VPN connections. The CA trust can be self-signed, based on local
certificate authority subject information, or signed by a third-party vendor (such as Verisign or
Globalsign).
A plain run of the command without any options creates self-signed CA certificates. This also
creates host and web certificates which are signed by the self-signed CA certificate.
Running the command with the options --vpn-host-cert-filepath, --vpn-host-key-filepath,
and --vpn-cacert-filepath imports the vendor-signed certificates provided to configure a VPN
between the sites and the VPLEX Witness server. This also configures web with self-signed
CA/web certificates. Running the command with the options --web-host-cert-filepath,
--web-host-key-filepath, and --web-cacert-filepath imports the vendor-signed certificates
provided to configure web. This also configures vpn with self-signed CA/host certificates. If all of
the options --vpn-host-cert-filepath, --vpn-host-key-filepath, --vpn-cacert-filepath,
--web-host-cert-filepath, --web-host-key-filepath, and --web-cacert-filepath are used,
the command configures both vpn and web with the imported vendor-signed certificates. For
self-signed certificates, the default values of validity days and key length are used. This command
is equivalent to running the security create-ca-certificate and security
create-host-certificate commands.
Note: Take note of the passphrases you use to create these certificates and save them in a
secure location. They will be required at other times when maintaining the VPLEX clusters. The
passphrase used during the VPN configuration can contain letters, numbers, and special
characters.
Examples
To create self-signed certificates for VPLEX Local configurations
1. Use the security configure-certificates command to create self-signed CA/host
certificates for both vpn and web.
2. Enter and write down the passphrase entered during the creation of CA certificate for vpn
configuration.
3. Enter and write down the passphrase entered during the creation of host certificate for vpn
configuration.
4. Enter and write down the passphrase entered during the creation of CA certificate for web
configuration.
5. Enter and write down the passphrase entered during the creation of host certificate for web
configuration.
Input file is a root CA but the CA signer not found in local ssl database
Added imported CA signer to local certificate database
Please enter the passphrase for the imported web certificate key (at least
8 characters):
Re-enter:
CA Certificate /tmp/certs/strongswanCert.pem successfully imported
Please create a passphrase (at least 8 characters) for the Local Host
Certificate Keys to be used to configure the VPN. Make a note of this
passphrase as you will need it later.
Host Certificate passphrase:
Re-enter:
New Host certificate request /etc/ipsec.d/reqs/hostCertReq.pem created
New Host certificate /etc/ipsec.d/certs/hostCert.pem created and signed by the CA
Certificate /etc/ipsec.d/cacerts/strongswanCert.pem
See also
l security list-certificates
l security web-configure
l configuration cw-vpn-configure
security delete-ca-certificate
Deletes the specified CA certificate and its key.
Contexts
All contexts.
Syntax
security delete-ca-certificate
[-o|--ca-cert-outfilename] filename
[-f|--ca-key-outfilename] filename
Arguments
Optional arguments
[-o|--ca-cert-outfilename] filename CA Certificate output filename.
Default: strongswanCert.pem.
Description
Deletes the CA certificate and deletes the entries from the lockbox that were created by
EZ-Setup.
Examples
Delete a custom CA certificate (not the default):
security delete-host-certificate
Deletes the specified host certificate.
Contexts
All contexts.
Syntax
security delete-host-certificate
[-o|--host-cert-outfilename] filename
[-f|--host-key-outfilename] filename
Arguments
Optional arguments
[-o|--host-cert-outfilename] filename host certificate output filename.
Default: hostCert.pem.
Description
Deletes the specified host certificate and deletes the entries from the lockbox that were created
by EZ-Setup.
Examples
Delete the default host certificate and key:
l /etc/ipsec.d/certs/TestHostCert.pem
l /etc/ipsec.d/private/TestHostKey.pem
See also
l security create-ca-certificate
l security create-host-certificate
l security delete-ca-certificate
security export-ca-certificate
Exports a CA certificate and CA key to a given location.
Contexts
All contexts.
Syntax
security export-ca-certificate
[-c|--ca-cert-filepath] filepath
[-k|--ca-key-filepath] filepath
[-e|--ca-export-location] path
Arguments
Required arguments
[-e|--ca-export-location] path The absolute path of the location to which to export the CA
Certificate and CA Key.
Optional arguments
[-c|--ca-cert-filepath] filepath The absolute path of the CA certificate file to export.
Default: /etc/ipsec.d/cacerts/strongswanCert.pem.
Description
Exports the CA certificate to the specified location.
Note: You must have write privileges at the location to which you export the certificate.
The import or export of CA certificates does not work for external CA certificates.
Example
Export the default CA certificate and key to /var/log/VPlex/cli:
Export a custom CA certificate and its key (created using the security
create-ca-certificate command) to /var/log/VPlex/cli:
See also
l security create-ca-certificate
l security create-certificate-subject
l security create-host-certificate
l security export-host-certificate
l security import-ca-certificate
l security import-host-certificate
l security ipsec-configure
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
security export-host-certificate
Exports a host certificate and host key to the specified location.
Contexts
All contexts.
Syntax
security export-host-certificate
[-c|--host-cert-filepath] path
[-k|--host-key-filepath] path
[-e|--host-export-location] path
Arguments
Required arguments
[-e|--host-export-location] path The absolute path of the location to which to export the
host certificate and host key.
Optional arguments
[-c|--host-cert-filepath] path The absolute path of the host certificate file to export.
Default: /etc/ipsec.d/certs/hostCert.pem
Description
Exports the host certificate to the specified location.
Note: You must have write privileges at the location to which you export the certificate.
Example
Export the default host certificate and key to /var/log/VPlex/cli:
Export a custom host certificate and its key (created using the security
create-host-certificate command) to /var/log/VPlex/cli:
See also
l security create-ca-certificate
l security create-certificate-subject
l security create-host-certificate
l security export-ca-certificate
l security import-ca-certificate
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
security import-ca-certificate
Imports a CA certificate and CA key from a given location.
Contexts
All contexts.
Syntax
security import-ca-certificate
[-c|--ca-cert-filepath] path
[-k|--ca-key-filepath] path
[-i|--ca-cert-import-location] location
[-j|--ca-key-import-location] location
Arguments
Required arguments
[-c|--ca-cert-filepath] path The absolute path of the CA certificate file to import.
[-k|--ca-key-filepath] path The absolute path of the CA key file to import.
Optional arguments
[-i|--ca-cert-import-location] location The absolute path of the location to which to import
the CA certificate.
Default location - /etc/ipsec.d/cacerts.
Description
Imports the CA certificate from the specified location.
Note: You must have write privileges at the location from which you import the certificate.
If the import locations for the CA certificate and CA key have files with the same names, they are
overwritten.
The import or export of CA certificates does not work for external CA certificates.
Example
Import the CA certificate and its key from a specified location to the default CA certificate and key
location (/var/log/VPlex/cli):
security import-host-certificate
Imports a host certificate and host key from a given location.
Contexts
All contexts.
Syntax
security import-host-certificate
[-c|--host-cert-filepath] path
[-k|--host-key-filepath] path
[-i|--host-cert-import-location] path
[-j|--host-key-import-location] path
Arguments
Required arguments
[-c|--host-cert-filepath] path The absolute path of the host certificate file to import.
[-k|--host-key-filepath] path The absolute path of the host key file to import.
Optional arguments
[-i|--host-cert-import-location] path The absolute path of the location to which to import
the host certificate.
Default location - /etc/ipsec.d/certs.
Description
Imports the host certificate from the specified location.
Note: The user executing this command must have write privileges at the location from which
the certificate is imported.
If the import locations for the host certificate and host key have files with the same names, the
files are overwritten.
Examples
Import the host certificate and key from /var/log/VPlex/cli:
See also
l security create-ca-certificate
l security create-certificate-subject
l security create-host-certificate
l security export-ca-certificate
l security import-ca-certificate
l security ipsec-configure
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
security ipsec-configure
Configures IPSec after the CA and host certificates have been created.
Contexts
All contexts.
Syntax
security ipsec-configure
[-i|--remote-ms-ipaddr] Remote-address
[-c|--host-cert-filename] host-certificate
[-k|--host-key-filename] host-key
Arguments
Required arguments
[-i|--remote-ms-ipaddr] remote-address IP address of the remote management
server.
Optional arguments
[-c|--host-cert-filename] host-certificate host certificate filename.
[-k|--host-key-filename] host-key host key filename.
Description
This command does the following:
l Backs up the existing ipsec.conf and ipsec.secrets files.
l Configures ipsec.conf and ipsec.secrets with the latest VPN configuration.
l Enables the IPSec service at rc3, rc4, and rc5 run levels.
l Starts the VPN.
The following steps must be completed before using this command:
1. On the first cluster, use the security create-ca-certificate command to create the
CA certificate.
2. On the first cluster, use the security create-host-certificate command to create
the host certificate.
l On the first cluster, the vpn status command confirms that the VPN to the second cluster
is up
l On the second cluster, the vpn status command confirms that the VPN to the first cluster
is up
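The files this command rewrites use standard strongSwan syntax. The fragment below is only an illustration of what such a connection entry can look like (it is not the actual VPLEX-generated configuration; the connection name and placeholder address are hypothetical):

```
# Illustrative strongSwan ipsec.conf connection entry
conn vplex-wan
    left=%defaultroute
    leftcert=hostCert.pem        # host certificate, read from /etc/ipsec.d/certs
    right=<remote-ms-ipaddr>     # peer management server address
    auto=start                   # bring the tunnel up when IPSec starts

# Illustrative ipsec.secrets entry referencing the host key
: RSA hostKey.pem "host-key-passphrase"
```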
See also
l security create-ca-certificate
l security create-certificate-subject
l security create-host-certificate
l security export-ca-certificate
l security import-ca-certificate
l security show-cert-subj
l Dell EMC VPLEX Security Configuration Guide
security list-certificates
Displays the validation status of the existing certificates.
Contexts
All contexts
Syntax
security list-certificates
[-h | --help]
[--verbose]
Arguments
Description
The command lists all the certificates present in the system along with the validation status
of each parameter associated with the certificate.
The following parameters are currently displayed:
See Also
l security configure-certificates
l security web-configure
l configuration cw-vpn-configure
security remove-login-banner
Removes the login banner from the management server.
Contexts
All contexts.
Syntax
security remove-login-banner
[-f|--force]
Arguments
Optional arguments
[-f|--force] Forces the removal of the login banner without asking for any user
confirmation. Allows this command to be run from non-interactive scripts.
Description
Removes a custom login banner from the management server.
The change takes effect at the next login to the management server.
Example
Remove the login banner:
See also
l security set-login-banner
l Dell EMC VPLEX Security Configuration Guide
security renew-all-certificates
Renews CA and host security certificates.
Contexts
All contexts.
Syntax
security renew-all-certificates
Description
When VPLEX is installed, EZ-Setup creates one CA certificate and two or three host certificates:
l Certification Authority (CA) certificate shared by all clusters
l VPN host certificate
l Web server host certificate
l VPLEX Witness host certificate (when VPLEX Witness is installed)
All types of certificates expire and must be periodically renewed. By default:
l CA certificates must be renewed every 5 years
l Host certificates must be renewed every 2 years
Use the security renew-all-certificates command to renew all security certificates on
a VPLEX system.
In Metro systems, run the command twice, once on each cluster. For systems with VPLEX Witness
deployed, make sure you run the command first on the cluster where VPLEX Witness was initially
installed. See the “Before you begin” section below for the steps to determine the correct cluster.
You can use the command at any time to renew certificates whether or not they are about to
expire.
Each certificate has an associated passphrase. During renewal, you are prompted to enter any
passphrases that VPLEX does not know.
After renewal, VPLEX is aware of all passphrases.
There are two general methods for renewing passphrases:
l Renew the security certificates using their current passphrases.
If you choose to renew the certificates using their current passphrases, you are prompted to
provide the passphrase for any certificate that VPLEX does not find.
You are always prompted for the Certificate Authority (CA) passphrase when you run the
command on the second cluster.
When renewing the certificates on the second cluster, you might be prompted to enter the
service password. Contact the System Administrator to obtain the current service password.
l Renew the certificates using a common passphrase.
All certificates are renewed using the same passphrase.
CAUTION In Metro systems, do not renew the security certificates using the current
passphrases if you do not have a record of the Certificate Authority (CA) passphrase. You
must provide the current CA passphrase when you renew the certificates on the second
cluster. If you do not have a record of the CA passphrase, do not renew the certificates
until you have the passphrase or renew with a common passphrase.
The following example shows the output of successful certificate renewals:
Re-enter the passphrase for the Certificate Key: CA-passphrase
Renewing CA certificate...
The CA certificate was successfully renewed.
Renewing VPN certificate...
The VPN certificate was successfully renewed.
Renewing WEB certificate...
Your Java Key Store has been created.
https keystore: /var/log/VPlex/cli/.keystore
started web server on ports {'http': 49880, 'https': 49881}
The Web certificate was successfully renewed.
Generating certificate renewal summary...
All VPLEX certificates have been renewed successfully
An example of running the certificate renewal on a cluster where the VPLEX Witness certificate
was created.
This example shows running the renew-all-certificates command on the second cluster.
If VPLEX Witness was disabled before the security certificates were renewed:
l Use the cluster-witness enable command to re-enable VPLEX Witness.
l Use the ll cluster-witness command to verify that the admin-state is enabled.
See also
l security create-ca-certificate
l security create-host-certificate
l security export-ca-certificate
l security import-ca-certificate
l security import-host-certificate
security set-login-banner
Applies a text file as the login banner on the management server.
Contexts
All contexts.
Syntax
security set-login-banner
[-b|--login-banner-file] file
[-f|--force]
Arguments
Required arguments
[-b|--login-banner-file] file Full pathname to the file containing the formatted login banner
text.
Optional arguments
[-f|--force] Forces the addition of the login banner without asking for any
user confirmation. Allows this command to be run from non-
interactive scripts.
Description
This command sets the login banner for the management server by applying the contents of the
specified text file as the banner. The formatting of the text in the file is replicated in the
banner.
The change takes effect at the next login to the management server.
There is no limit to the number of characters or lines in the specified text file.
Examples
In the following example, a text file login-banner.txt containing the following lines is specified
as the login banner:
VPLEX cluster-1/Hopkinton
Test lab 3, Room 6, Rack 47
Metro with RecoverPoint CDP
At next login to the management server, the new login banner is displayed:
See also
l security remove-login-banner
l Dell EMC VPLEX Security Configuration Guide
security show-cert-subj
Displays the certificate subject file.
Contexts
All contexts.
Syntax
security show-cert-subj
[-s|--subject-infilename] filename
Arguments
Required arguments
[-s|--subject-infilename] filename Filename of the certificate subject file to display. The file is
assumed to reside in the following directory on the management
server:
/var/log/VPlex/cli
Description
Displays the certificate subject file.
Example
See also
l security create-certificate-subject
security web-configure
Configures the web server CA, certificate, and key by deleting the previous entries from the
keystore and truststore and registering the new entries in the truststore and keystore.
Contexts
All contexts
Syntax
security web-configure
[-h | --help]
[--verbose]
Arguments
Description
Use this command to configure the web certificates after importing the CA and host certificate
and keys.
l Use the security configure-certificates command to create or import the CA and
host certificate and keys to configure web.
l Run the security web-configure command.
You can supply the file names for all three certificates as parameters. This command supports
external certificates.
Examples
See also
l security configure-certificates
l security list-certificates
l configuration cw-vpn-configure
sessions
Displays active Unisphere for VPLEX sessions.
Contexts
All contexts.
Syntax
sessions
Description
Displays the username, hostname, port and start time of active sessions to the Unisphere for
VPLEX.
Example
VPlexcli:/> sessions
Type Username Hostname Port Creation Time
------------- -------- --------- ----- ----------------------------
TELNET_SHELL service localhost 23848 Wed Sep 15 15:34:33 UTC 2010
DEFAULT_SHELL - - - Tue Aug 03 17:16:07 UTC 2010
set
Changes the value of writable attributes in the given context.
Contexts
All contexts.
Syntax
set
[-d|--default]
[-f|--force]
[-a|--attributes] pattern
[-v|--value] value
Arguments
Optional arguments
[-d|--default] Sets the specified attributes to their default values, if any exist. If
no attributes are specified, displays the default values for
attributes in the current or specified context.
[-f|--force] Force the value to be set, bypassing any confirmations or
guards.
[-a|--attributes] pattern * Attribute selector pattern.
[-v|--value] value * The new value to assign to the specified attributes.
* - argument is positional.
Description
Use the set command with no arguments to display the attributes available in the current context.
Use the set --default command with no additional arguments to display the default values for
the current context or a specified context.
Use the set command with an attribute pattern to display the matching attributes and the
required syntax for their values.
Use the set command with an attribute pattern and a value to change the value of each matching
attribute to the given value.
An attribute pattern is an attribute name optionally preceded with a context glob pattern and a
double-colon (::). The pattern matches the named attribute on each context matched by the glob
pattern.
If the glob pattern is omitted, set assumes the current context.
If the value and the attribute name are omitted, set displays information on all the attributes on all
the matching contexts.
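The pattern rule above splits on the double-colon. A minimal sketch of that parsing rule (a hypothetical helper, not part of the CLI):

```python
def split_attribute_pattern(pattern, current_context="/"):
    """Split a set attribute pattern into (context glob, attribute name).
    A pattern is an attribute name optionally preceded by a context
    glob and '::'; when the glob is omitted, the current context is
    assumed."""
    if "::" in pattern:
        glob, _, attribute = pattern.rpartition("::")
        return glob, attribute
    return current_context, pattern

print(split_attribute_pattern("/management-server/ports/eth0::net-mask"))
# ('/management-server/ports/eth0', 'net-mask')
print(split_attribute_pattern("thin-rebuild", "/clusters/cluster-1"))
# ('/clusters/cluster-1', 'thin-rebuild')
```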
Examples
Display which attributes are writable in the current context, and their valid inputs:
VPlexcli:/distributed-storage/distributed-devices/TestDisDevice> set
attribute input-description
------------------------------------------------------------------------------
-------------------------
application-consistent Takes one of '0', '1', 'f', 'false', 'n', 'no',
'off', 'on', 't', 'true', 'y', 'yes' (not case sensitive).
auto-resume Takes one of '0', '1', 'f', 'false', 'n', 'no',
'off', 'on', 't', 'true', 'y', 'yes' (not case sensitive).
block-count Read-only.
block-size Read-only.
capacity Read-only.
clusters-involved Read-only.
.
.
.
Use the --default argument without any attribute(s) to display the default values for the
current (or specified) context's attributes.
Rename a system volume:
VPlexcli:/clusters/cluster-1/system-volumes/
new_meta1_backup_2010May24_163810> set name backup_May24_pre_refresh
Display the attributes of the management server's eth0 port (all read-only):
/management-server/ports/eth0::net-mask Read-only.
/management-server/ports/eth0::speed Read-only.
/management-server/ports/eth0::status Read-only.
Set the thin-rebuild attribute for a storage volume:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/clar_LUN83> set
thin-rebuild true
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/clar_LUN83> ll
Name Value
----------------------
-------------------------------------------------------
application-consistent false
.
.
.
storage-volumetype normal
system-id VPD83T3:6006016061212e00b0171b696696e211
thin-rebuild true
total-free-space 0B
underlying-storage-block-size 512
use used
used-by [extent_test01_1]
vendor-specific-name DGC
vias-based false
Disable call-home notifications, using --force to bypass confirmation, and then re-enable them:
VPlexcli:/> cd /notifications/call-home/
VPlexcli:/notifications/call-home> set enabled false --force
VPlexcli:/> cd /notifications/call-home/
VPlexcli:/notifications/call-home> set enabled true
Enable a disabled port. First, display the current port status:
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports> ll
Name Address Role Port Status
------- ------------------ --------- -----------
A0-FC00 0x5000144260006e00 front-end no-link
A0-FC01 0x5000144260006e01 front-end up
A0-FC02 0x5000144260006e02 front-end up
A0-FC03 0x0000000000000000 front-end down
A1-FC00 0x5000144260006e10 back-end up
A1-FC01 0x5000144260006e11 back-end up
A1-FC02 0x5000144260006e12 back-end no-link
A1-FC03 0x5000144260006e13 back-end no-link
A2-FC00 0x5000144260006e20 wan-com up
A2-FC01 0x5000144260006e21 wan-com up
A2-FC02 0x5000144260006e22 wan-com no-link
A2-FC03 0x5000144260006e23 wan-com no-link
A3-FC00 0x5000144260006e30 local-com up
A3-FC01 0x5000144260006e31 local-com up
A3-FC02 0x0000000000000000 - down
A3-FC03 0x0000000000000000 - down
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports> set A0-
FC03::enabled true
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports> ll
Name Address Role Port Status
------- ------------------ --------- -----------
A0-FC00 0x5000144260006e00 front-end no-link
A0-FC01 0x5000144260006e01 front-end up
A0-FC02 0x5000144260006e02 front-end up
A0-FC03 0x5000144260006e03 front-end no-link
A1-FC00 0x5000144260006e10 back-end up
A1-FC01 0x5000144260006e11 back-end up
A1-FC02 0x5000144260006e12 back-end no-link
A1-FC03 0x5000144260006e13 back-end no-link
A2-FC00 0x5000144260006e20 wan-com up
A2-FC01 0x5000144260006e21 wan-com up
A2-FC02 0x5000144260006e22 wan-com no-link
A2-FC03 0x5000144260006e23 wan-com no-link
A3-FC00 0x5000144260006e30 local-com up
A3-FC01 0x5000144260006e31 local-com up
A3-FC02 0x0000000000000000 - down
A3-FC03 0x0000000000000000 - down
Use the --attributes and --value arguments to rename a virtual volume:
VPlexcli:/clusters/cluster-1/virtual-volumes/EMC-CLARiiON-0075-VNX-LUN122_1_vol> set -a name -v new_name
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> ll
Name Value
------------------ -----------------------------------------------
block-count 2621440
block-size 4K
cache-mode synchronous
capacity 10G
consistency-group -
expandable true
health-indications []
health-state ok
locality local
operational-status ok
scsi-release-delay 0
service-status running
storage-tier -
supporting-device device_EMC-CLARiiON-APM00113700075-VNX_LUN122_1
system-id EMC-CLARiiON-0075-VNX-LUN122_1_vol
volume-type virtual-volume
Return to the virtual-volumes context and change directory to the new name:
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> cd ..
VPlexcli:/clusters/cluster-1/virtual-volumes> cd new_name
Run a listing on the volume to display the new name for the system-id:
VPlexcli:/clusters/cluster-1/virtual-volumes/new_name> ll
Name Value
------------------ -----------------------------------------------
block-count 2621440
block-size 4K
cache-mode synchronous
capacity 10G
consistency-group -
expandable true
health-indications []
health-state ok
locality local
operational-status ok
scsi-release-delay 0
service-status running
storage-tier -
supporting-device device_EMC-CLARiiON-APM00113700075-VNX_LUN122_1
system-id new_name
volume-type virtual-volume
Display the attributes of an initiator port:
VPlexcli:/clusters/cluster-1/exports/initiator-ports/test_port_1> ll
Name Value
-----------------
------------------------------------------------------------
node-wwn 0x20000025b505003f
port-wwn 0x200000cc05bb002e
scsi-spc-version 3
suspend-on-detach -
target-ports [P0000000043E00BDD-A0-FC00, P0000000043E00BDD-A0-FC01,
P0000000043F00BDD-B0-FC00, P0000000043F00BDD-B0-FC01]
type default
Display the attributes of a storage view:
VPlexcli:/clusters/cluster-1/exports/storage-views/test_view_1> ll
Name Value
------------------------
------------------------------------------------------------------------------
--------------------------
caw-enabled true
controller-tag -
initiators [test_port]
operational-status ok
port-name-enabled-status [P0000000043E00BDD-A0-FC00,true,ok,
P0000000043E00BDD-A0-FC01,true,ok,
P0000000043F00BDD-B0-FC00,true,ok,
P0000000043F00BDD-B0-FC01,true,ok]
ports [P0000000043E00BDD-A0-FC00, P0000000043E00BDD-A0-
FC01, P0000000043F00BDD-B0-FC00,
P0000000043F00BDD-B0-FC01]
scsi-spc-version 3
virtual-volumes [(0,device_C1-
RHEL_XtremIO0547_LUN_00001_1_vol,VPD83T3:6000144000000010f00bddd268733d19,200G
)]
write-same-16-enabled true
xcopy-enabled true
See also
l storage-volume claim
l storage-volume unclaim
set topology
Changes the topology attribute for a Fibre Channel port.
Contexts
/engines/engine/directors/director/hardware/ports/port
Syntax
set topology
[p2p|loop]
Arguments
Required
arguments
p2p Sets the port’s topology as point-to-point. The port comes up as an F-port.
Use the p2p topology to connect the Fibre Channel fabric to a node.
loop Sets the port’s topology as loop. The port comes up as an FL-Port.
Use the loop topology to connect a Fibre Channel Arbitrated Loop (ring-style
network topology) to a fabric.
Description
Change the default setting for a Fibre Channel port.
Default: p2p.
Note: According to best practices, the front-end ports should be set to the default p2p and
connected to the hosts via a switched fabric.
WARNING It is not recommended to change the topology on the local COM ports, as it can
lead to the directors going down and data unavailability.
Example
Navigate to a Fibre Channel port context and set the topology as p2p:
VPlexcli:/> cd /engines/engine-1-1/directors/Cluster_1_Dir1A/hardware/
ports/A4-FC02
VPlexcli:/engines/engine-1-1/directors/Cluster_1_Dir1A/hardware/ports/A4-
FC02> set topology p2p
VPlexcli:/engines/engine-1-1/directors/Cluster_1_Dir1A/hardware/ports/A4-
FC02> ll
Name Value
------------------ ------------------
address 0x5000144240014742
current-speed 8Gbits/s
description -
enabled true
max-speed 8Gbits/s
node-wwn 0x500014403ca00147
operational-status ok
port-status up
port-wwn 0x5000144240014742
protocols [fc]
role wan-com
target-port -
topology p2p
See also
l set
show-use-hierarchy
Display the complete usage hierarchy for a storage element from the top-level element down to
the storage-array.
Contexts
All contexts.
Syntax
show-use-hierarchy
[-t|--targets] path, path,...
Arguments
Required
arguments
[-t|--targets] path,path,... * Comma-separated list of target storage elements.
You can specify meta, logging and virtual volumes, local and distributed
devices, extents, storage-volumes or logical-units on a single command line.
Note: A complete context path to the targets must be specified. For
example:
show-use-hierarchy /clusters/cluster-1/storage-
elements/storage-volumes/volume
or:
show-use-hierarchy /clusters/cluster-1/storage-
elements/storage-volumes/*
* - argument is positional.
Description
This command drills from the specified target up to the top-level volume and down to the storage-
array. The command will detect sliced elements, drill up through all slices and indicate in the output
that slices were detected. The original target is highlighted in the output.
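The drill-up/drill-down idea can be modeled with a toy parent/child map. The element names are hypothetical, and the sketch omits what the real command also handles: multiple children per element and sliced elements:

```python
# Each element points at the element it supports (the next layer up);
# the top-level virtual volume supports nothing.
supports = {
    "storage_volume_A": "extent_A",
    "extent_A": "device_A",
    "device_A": "virtual_volume_A",
}
supported_by = {upper: lower for lower, upper in supports.items()}

def use_hierarchy(target):
    """Chain from the top-level element down toward the storage-array side."""
    top = target
    while top in supports:            # drill up to the top-level element
        top = supports[top]
    chain = [top]
    while chain[-1] in supported_by:  # drill back down through every layer
        chain.append(supported_by[chain[-1]])
    return chain
```

Starting from any element in the chain yields the same full hierarchy, which is why the command highlights the original target in its output.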
See also
l drill-down
l tree
sms dump
Collects the log files on the management server.
Contexts
All contexts.
Syntax
sms dump
[-d|--destination-directory] directory
[-t|--target_log] logName
Arguments
Required arguments
[-d| --destination-directory] directory Destination directory for the sms dump logs.
Optional arguments
[-t|--target_log] logName Collect only files specified under logName
from smsDump.xml.
Description
Collects the following log files:
Note: The log files listed below are the core set; additional files that are not
listed here are also collected.
CLI logs
l /var/log/VPlex/cli/client.log* -- VPlexcli logs, logs dumped by VPlexcli scripts
l /var/log/VPlex/cli/session.log* -- what the user does in a VPlexcli session
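As a rough illustration of the collection step only (not the actual sms dump implementation), files matching a set of glob patterns can be copied into a destination directory like this; the patterns mirror the paths above:

```python
import glob
import os
import shutil

def collect_logs(patterns, destination):
    """Copy every file matching the given glob patterns into destination."""
    os.makedirs(destination, exist_ok=True)
    collected = []
    for pattern in patterns:
        for path in glob.glob(pattern):
            shutil.copy2(path, destination)  # copy2 preserves timestamps
            collected.append(os.path.basename(path))
    return sorted(collected)
```

For example, collect_logs(["/var/log/VPlex/cli/client.log*", "/var/log/VPlex/cli/session.log*"], "/tmp/smsdump") would gather the CLI and session logs into one directory.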
Examples
Collect the log files on the management server and send them to the designated directory:
See also
l cluster configdump
l collect-diagnostics
l director appdump
l getsysinfo
snmp-agent configure
Configures the VPLEX SNMP agent service on the local cluster.
Contexts
All contexts.
Syntax
snmp-agent configure
Arguments
Optional arguments
[-s|--snmp-protocol-version] snmpv2c Configures the VPLEX SNMP agent service
version 2c
[-s|--snmp-protocol-version] snmpv3 Configures the VPLEX SNMP agent service
version 3
Description
Configures the SNMP agent on the local cluster, and starts the SNMP agent. You can configure
VPLEX SNMP agent service version 2c or version 3. If you do not specify a version, the SNMP
agent service version 3 is configured.
snmp-agent configure checks the number of directors in the local cluster and configures the
VPLEX SNMP agent on the VPLEX management server. Statistics can be retrieved from all
directors in the local cluster.
Note: All the directors have to be operational and reachable through the VPLEX management
server before the SNMP agent is configured.
When configuration is complete, the VPLEX snmp-agent starts automatically.
The VPLEX SNMP agent:
l Supports retrieval of performance-related statistics as published in the VPLEX-MIB.mib.
l Runs on the management server and fetches performance related data from individual
directors using a firmware specific interface.
l Provides SNMP MIB data for directors for the local cluster only.
l Runs on Port 161 of the management server and uses the UDP protocol.
l Supports the following SNMP commands:
n SNMP Get
n SNMP Get Next
n SNMP Get Bulk
The SNMP Set command is not supported in this release.
VPLEX supports SNMP versions SNMPv3 and SNMPv2C.
VPLEX MIBs are located on the management server in the /opt/emc/VPlex/mibs directory.
Use the public IP address of the VPLEX management server to retrieve performance statistics
using SNMP.
See also
l snmp-agent start
l snmp-agent status
l snmp-agent stop
l snmp-agent unconfigure
snmp-agent start
Starts the SNMP agent service.
Contexts
All contexts.
Syntax
snmp-agent start
Description
Starts the SNMP agent on the local cluster.
The SNMP agent must be configured before this command can be used.
Example
See also
l snmp-agent configure
l snmp-agent status
l snmp-agent stop
l snmp-agent unconfigure
snmp-agent status
Displays the status of the SNMP agent service on the local cluster.
Contexts
All contexts.
Syntax
snmp-agent status
Description
Displays the status of the SNMP agent on the local cluster.
Example
SNMP agent is running:
See also
l snmp-agent configure
l snmp-agent start
l snmp-agent stop
l snmp-agent unconfigure
snmp-agent stop
Stops the SNMP agent service.
Contexts
All contexts.
Syntax
snmp-agent stop
Description
Stops the SNMP agent on the local cluster.
The SNMP agent must be configured before this command can be used.
Example
See also
l snmp-agent configure
l snmp-agent start
l snmp-agent status
l snmp-agent unconfigure
snmp-agent unconfigure
Destroys the SNMP agent.
Contexts
All contexts.
Syntax
snmp-agent unconfigure
Description
Unconfigures the SNMP agent on the local cluster, and stops the agent.
Example
See also
l snmp-agent configure
l snmp-agent start
l snmp-agent status
l snmp-agent stop
source
Reads and executes commands from a script.
Contexts
All contexts.
Syntax
source
[-f|--file] filename
Arguments
Required arguments
[-f| --file] filename * Name of the script file to read and execute.
* - argument is positional.
Description
Filenames use the syntax of the underlying platform.
The script file may contain any CLI commands.
If the exit command is included, the shell exits immediately, without processing the commands that
follow it in the file.
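The exit behavior can be modeled as a small command loop. This is illustrative only; `execute` stands in for the real command dispatcher:

```python
def run_script(lines, execute):
    """Execute commands in order, stopping immediately at 'exit'."""
    executed = []
    for line in lines:
        cmd = line.strip()
        if not cmd:
            continue                 # skip blank lines
        if cmd == "exit":
            break                    # shell exits; later commands never run
        execute(cmd)
        executed.append(cmd)
    return executed
```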
Examples
In the following example, a text file Source.txt contains only two commands:
See also
l script
storage-tool dismantle
Dismantles virtual-volumes, devices (local or distributed) and extents down to the storage-
volumes, including unclaiming the storage-volumes.
Contexts
All contexts.
Syntax
storage-tool dismantle
[--do-not-unclaim]
[-h | --help]
[--verbose]
[-f | --force]
[-s | --storage-extents= storage-extent [, storage-extent] ...]]
Arguments
Required arguments
[-s|--storage-extents=] storage-extent [, storage-extent]... * Specifies the
storage-extents (virtual-volumes, local or distributed devices, or extents) to dismantle.
Optional arguments
[--do-not-unclaim] Do not unclaim the storage-volumes after dismantling.
[-f|--force] Forces the dismantle without asking for confirmation.
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution.
* - argument is positional.
Description
Dismantles virtual-volumes, devices (local or distributed) and extents down to the storage-
volumes, including unclaiming the storage-volumes.
Run storage-tool dismantle against top-level storage elements only. If you run storage-
tool dismantle against virtual-volumes, they must not belong to either a consistency-group or
storage-view.
Note: This command does NOT allow dismantling of consistency groups or storage views, or of
storage extents that are not root nodes in a storage hierarchy (i.e. targets must not be
supporting other storage).
The command fails with an exception before dismantling anything if:
l A volume to be dismantled is exported in a view and that view is not a dismantle target.
l A volume to be dismantled is in a consistency group and that consistency group is not a
dismantle target.
l The dismantle target is supporting other storage (i.e. has anything above it).
storage-tool compose
Creates a virtual-volume on top of the specified storage-volumes, building all intermediate extents,
local, and distributed devices as necessary.
Contexts
All contexts.
Syntax
storage-tool compose
[-n|--name] name
[-g|--geometry] {raid-0|raid-1|raid-c}
[-d|--storage-volumes] storage-volume [, storage-volume...]
[-m|--source-mirror] source-mirror
[-c|--consistency-group] consistency-group
[-v|--storage-views] storage-view [, storage-view ...]
[-t|--thin]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-n|--name] name * Specifies the name for the new virtual volume. Must be unique across the system.
[-g|--geometry] {raid-0|raid-1|raid-c} * Specifies the geometry to use for the local devices at each cluster. Valid values are raid-0, raid-1, or raid-c.
Optional arguments
[-d|--storage-volumes] storage-volume [, storage-volume...] * Specifies a list of storage volumes to build the virtual volume from. These may be claimed, but must be unused.
[-m|--source-mirror] source-mirror Specifies the storage volume to use as a source mirror when creating local and distributed devices.
Note: If specified, --source-mirror is used as a source-mirror when creating local and distributed RAID 1 devices. This triggers a rebuild from the source-mirror to all other mirrors of the RAID 1 device (local and distributed). While the rebuild is in progress, the new virtual volume (and supporting local and/or distributed devices) will be in a degraded state, which is normal. This option only applies to RAID 1 local or distributed devices. The --source-mirror may also appear in --storage-volumes.
[-c|--consistency-group] consistency-group Specifies the context path of a consistency group that the new virtual volume should be added to. The new virtual-volume’s global geometry must be compatible with the consistency group’s storage-at-clusters attribute.
[-v|--storage-views] storage-view [, storage-view...] Specifies the context path of the storage views that the new virtual volume will be added to. The new virtual volume’s global geometry must be compatible with the storage view’s locality.
[-t|--thin] Specifies whether the new virtual-volume is thin-enabled. The supporting storage-volumes must be thin-capable, and the virtual-volume must have a valid RAID geometry, for it to be thin-enabled.
[-h|--help] Displays command line help.
[--verbose] Provides more help during command execution. This may not have any effect for some commands.
* - argument is positional.
Description
This command supports building local or distributed (i.e., distributed RAID 1 based) virtual volumes
with RAID 0, RAID 1, or RAID C local devices. It does not support creating multi-device storage
hierarchies (such as a RAID 1 on RAID 0s on RAID Cs).
For RAID 1 local devices, a maximum of eight legs may be specified.
If the new virtual volume’s global geometry is not compatible with the specified consistency group
or storage views, the virtual volume will not be created. However, failure to add the new virtual
volume to the specified consistency group or storage views does not constitute an overall failure
to create the storage and will not be reported as such.
Note: In the event of an error, the command will not attempt to perform a roll-back and
destroy any intermediate storage objects it has created. If cleanup is necessary, use the
show-use-hierarchy command on each storage volume to identify all residual objects and
delete each one manually.
The --stop-at option imposes the following constraints on other options:
l If --stop-at=virtual-volume, only the --consistency-group and --storage-views
options can be specified.
l If --stop-at=local-device, storage-volumes from only one cluster can be specified.
l If--stop-at=distributed-device, storage-volumes from at least two clusters must be
specified.
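The constraints above amount to a simple validation rule, sketched here as a hypothetical helper; the real command's checks may differ in detail:

```python
def check_stop_at(stop_at, volume_clusters, consistency_group=None, storage_views=()):
    """Return an error string if the --stop-at constraints are violated, else None."""
    clusters = set(volume_clusters)
    if stop_at != "virtual-volume" and (consistency_group or storage_views):
        return "--consistency-group/--storage-views require --stop-at=virtual-volume"
    if stop_at == "local-device" and len(clusters) != 1:
        return "local-device requires storage-volumes from only one cluster"
    if stop_at == "distributed-device" and len(clusters) < 2:
        return "distributed-device requires storage-volumes from at least two clusters"
    return None
```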
Example
Create a thin-capable virtual volume with RAID 0 local devices and specified storage volumes:
/clusters/cluster-1/virtual-volumes/myVolume:
Name Value
-------------------------- ----------------------------------------
block-count 2621440
block-size 4K
cache-mode synchronous
capacity 10G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state ok
locality local
operational-status ok
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status unexported
storage-tier -
supporting-device device_myVolume_c1
system-id myVolume
thin-capable true
thin-enabled true
volume-type virtual-volume
vpd-id VPD83T3:6000144000000010e018b6fbc02ab396
VPlexcli:/clusters/cluster-1/virtual-volumes>
Example
Create a virtual volume with RAID 1 local devices and specified storage volumes:
VPD83T3:60060160cea33000fb9c532eac48e211,
VPD83T3:600601605a903000f2a9692fa548e211,
VPD83T3:600601605a903000f3a9692fa548e211
See also
l storage-volume unclaim
l virtual-volume provision
storage-volume auto-unbanish-interval
Displays or changes auto-unbanish interval on a single director.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
auto-unbanish-interval.
Syntax
storage-volume auto-unbanish-interval
[-n|--director] path
[-i|--interval] [seconds]
Arguments
Required arguments
[-n|--director] path * The director on which to show or change the delay for
automatic unbanishment.
Optional arguments
[-i|--interval] [ seconds] Number of seconds the director firmware waits before
unbanishing a banished storage volume (LUN).
Range: 20 seconds - no upper limit.
Default: 30 seconds.
* - argument is positional.
Description
See “Banished storage volumes (LUNs)” in the storage-volume unbanish command description.
At regular intervals, the VPLEX directors look for logical units that were previously banished
and unbanish any that they find. This process happens automatically and continuously, with a
delay interval between scans that defaults to 30 seconds.
Use this command to display or change the delay interval.
Note: This change in the interval value is not saved between restarts of the director firmware
(NDU, director reboots). When the director firmware is restarted, the interval value is reset to
the default of 30 seconds.
Example
In the following example:
l The auto-unbanish-interval --director director --interval interval
command changes the delay timer to 200 seconds.
l The auto-unbanish-interval --director director command displays the new setting.
See also
l storage-volume list-banished
l storage-volume unbanish
storage-volume claim
Claims the specified storage volumes.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
claim.
Syntax
storage-volume claim
[--appc]
[-n|--name] name
--thin-rebuild
--batch-size integer
[-d|--storage-volumes] path,path...
[-f|--force]
Arguments
Required arguments
[-d|--storage-volumes] path,path... * List of one or more storage volumes to claim.
Optional arguments
[-n|--name] name The new name of the storage volume after it is claimed.
--appc Make the specified storage volumes 'application consistent'. Prevents
data already on the specified storage volume from being deleted or
overwritten.
--thin-rebuild Claims the specified storage volumes as “thin”. Thin storage allocates
blocks of data on demand versus allocating all the blocks up front.
If a storage volume has already been claimed, it can be designated as thin
using the set command.
--batch-size When using wildcards to claim multiple volumes with one command, the
integer maximum number of storage volumes to claim at once.
[-f|--force] Force the storage volume to be claimed. For use with non-interactive
scripts.
* - argument is positional.
Description
A storage volume is a device or LUN that is visible to VPLEX. The capacity of storage volumes is
used to create extents, devices and virtual volumes.
Storage volumes must be claimed, and optionally named before they can be used in a VPLEX
cluster. Once claimed, the storage volume can be used as a single extent occupying the volume’s
entire capacity, or divided into multiple extents (up to 128).
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to this
problem.
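The --batch-size behavior is essentially chunking: a wildcard match is claimed one group at a time. A minimal sketch, in which the claim call itself is a stand-in rather than a real API:

```python
def batches(volumes, batch_size):
    """Yield successive groups of at most batch_size volumes."""
    for i in range(0, len(volumes), batch_size):
        yield volumes[i:i + batch_size]

def claim_all(volumes, batch_size, claim):
    """Claim a wildcard match in --batch-size groups."""
    for group in batches(volumes, batch_size):
        claim(group)                  # stand-in for one claim invocation
```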
Thin provisioning
Thin provisioning allows storage to migrate onto thinly provisioned storage volumes while
allocating the minimal amount of thin storage container capacity.
Thinly provisioned storage volumes can be incorporated into RAID 1 mirrors with similar
consumption of thin storage container capacity.
VPLEX preserves the unallocated thin pool space of the target storage volume by detecting zeroed
data content before writing, and suppressing the write for cases where it would cause an
unnecessary allocation. VPLEX requires you to specify thin provisioning for each back-end storage
volume. If a storage volume is thinly provisioned, the thin-rebuild attribute must be true either
during or after claiming.
CAUTION If a thinly provisioned storage volume contains non-zero data before being
connected to VPLEX, the performance of the migration or initial RAID 1 rebuild is adversely
affected.
System volumes are supported on thinly provisioned LUNs, but these volumes must have their
full capacity of thin storage container resources set aside and not be in competition for this
space with any user-data volumes on the same pool.
If:
l The thin storage allocation pool runs out of space, and
l If this is the last redundant leg of the RAID 1,
further writing to a thinly provisioned device causes the volume to lose access to the device.
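The zero-detect behavior described above can be sketched as a write path that suppresses all-zero writes to unallocated blocks. This is purely illustrative: the block size and the allocation map are invented, not VPLEX internals:

```python
BLOCK_SIZE = 4096  # assumed block size for this sketch

class ThinTarget:
    """Toy thin-provisioned target: blocks allocate only on first real write."""
    def __init__(self):
        self.allocated = {}               # lba -> data

    def write(self, lba, data):
        zeroed = data == bytes(len(data))
        if zeroed and lba not in self.allocated:
            return False                  # suppress: would only allocate zeroes
        self.allocated[lba] = data
        return True
```

Suppressing the all-zero write preserves the unallocated thin-pool space; once a block is allocated, zero writes go through normally.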
Examples
In the following example:
l The ll command in storage-volumes context displays the available storage.
l The claim command claims the specified unclaimed storage volume from the clusters/
cluster/storage-elements/storage-volumes context.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
.
.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim --
storage-volumes VPD83T3:6006016021d025007029e95b2327df11
Claim a storage volume and name it Symm1254_7BF from the clusters/cluster context:
Claim storage volumes using the --thin-rebuild option. In the following example:
l The claim command with --thin-rebuild claims two storage volumes as thin storage
(from the clusters/cluster/storage-elements/storage-volumes context)
l The ll command displays one of the claimed storage volumes:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim --
thin-rebuild --storage-volumes
VPD83T3:6006016091c50e005057534d0c17e011,VPD83T3:6006016091c50e005257534d0
c17e011
Of the 2 storage-volumes that were given, 2 storage-volumes were claimed.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
VPD83T3:6006016091c50e005057534d0c17e011
/clusters/cluster-1/storage-elements/storage-volumes/
VPD83T3:6006016091c50e005057534d0c17e011:
Name Value
----------------------
-------------------------------------------------------
application-consistent false
block-count 524288
block-size 4K
capacity 2G
description -
free-chunks ['0-524287']
health-indications []
health-state ok
io-status alive
itls 0x5000144230354911/0x5006016930600523/6,
0x5000144230354910/0x5006016930600523/6,
0x5000144230354910/0x5006016830600523/6,
0x5000144230354911/0x5006016830600523/6,
0x5000144220354910/0x5006016930600523/6,
0x5000144220354910/0x5006016830600523/6,
0x5000144220354911/0x5006016930600523/6,
0x5000144220354911/0x5006016830600523/6
largest-free-chunk 2G
locality -
operational-status ok
storage-array-name EMC-CLARiiON-APM00042201310
storage-volumetype normal
system-id VPD83T3:6006016091c50e005057534d0c17e011
thin-capable false
thin-rebuild false
total-free-space 2G
use claimed
used-by []
vendor-specific-name DGC
See also
l set
l storage-volume claimingwizard
l storage-volume unclaim
storage-volume claimingwizard
Finds unclaimed storage volumes, claims them, and names them appropriately.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
claimingwizard.
Syntax
storage-volume claimingwizard
[-c|--cluster] cluster
[-f|--file] file,file...
[-d|--dryRun]
[-t|--set-tier] list
[--force]
--appc
--thin-rebuild
Arguments
Optional arguments
[-c|--cluster] cluster Cluster on which to claim storage.
[-f|--file] List of one or more files containing hints for storage-volume naming,
file,file... separated by commas. Required for claiming volumes on storage arrays
that do not include their array and serial number in response to SCSI
inquiries.
[-d|--dryRun] Do a dry-run only; do not claim and name the storage volumes.
[-t|--set-tier] Set a storage tier identifier per storage array in the storage-volume
list names. Type multiple arrayName, tier-character pairs separated by
commas. Storage tier identifiers cannot contain underscores.
[--force] Forces a successful run of the claimingwizard. For use with non-
interactive scripts.
--appc Make the specified storage volumes 'application consistent'. Prevents
data already on the specified storage volume from being deleted or
overwritten.
CAUTION Once set, the application consistent attribute cannot be
changed. This attribute can only be set when the storage-volumes or
extents are in the claimed state.
--thin-rebuild Claims the specified storage volumes as “thin”. Thin storage allocates
blocks of data on demand versus allocating all the blocks up front. Thin
provisioning eliminates almost all unused storage and improves utilization
rates.
Description
You must first claim and optionally name a storage volume before using the storage volume in a
VPLEX cluster.
Storage tiers allow the administrator to manage arrays based on price, performance, capacity and
other attributes. If a tier ID is assigned, the storage with a specified tier ID can be managed as a
single unit. Storage volumes without a tier assignment are assigned a value of ‘no tier’.
This command can fail if there is not a sufficient number of meta volume slots. See the
troubleshooting section of the VPLEX procedures in the SolVe Desktop for a resolution to this
problem.
The following example shows how to create a hint file for one array type:
Dell EMC Symmetrix: symdev -sid 781 list -wwn > Symm0781.txt
Example
Use the --set-tier argument to add or change a storage tier identifier in the storage-volume
names from a given storage array. For example, a command that assigns the tier identifiers L and H
names all storage volumes from the CLARiiON array as Clar0400L_lun-name, and all storage
volumes from the Symmetrix® array as Symm04A1H_lun-name.
Dell EMC Symmetrix, HDS 9970/9980 and USP V storage arrays include their array and serial
number in response to SCSI inquiries. The claiming wizard can claim their storage volumes without
additional information. Names are assigned automatically.
Other storage arrays require a hints file generated by the storage administrator using the array’s
command line. The hints file contains the device names and their World Wide Names.
Use the --file argument to specify a hints file to use for naming claimed storage volumes.
In the following example, the claimingwizard command with no arguments claims storage
volumes from a Dell EMC Symmetrix array:
Note that the Symmetrix storage volumes are named in the format:
Symmlast-4-digits-of-array-serial-number_Symmetrix-Device-Number
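That naming format can be expressed as a small helper. The serial and device numbers used below are illustrative, chosen to match names that appear elsewhere in this guide (Symm1254_7BF, Symm0487):

```python
def symm_name(array_serial, device_number):
    """Build a Symmetrix storage-volume name: Symm<last-4-of-serial>_<device-number>."""
    return "Symm{}_{}".format(array_serial[-4:], device_number)
```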
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard
--cluster cluster-1 --file /home/service/clar.txt --thin-rebuild
Found unclaimed storage-volume VPD83T3:6006016091c50e004f57534d0c17e011
vendor DGC : claiming and naming clar_LUN82.
Found unclaimed storage-volume VPD83T3:6006016091c50e005157534d0c17e011
vendor DGC : claiming and naming clar_LUN84.
Claimed 2 storage-volumes in storage array clar
Claimed 2 storage-volumes in total.
Find and claim storage volumes on any array in cluster-1 that does not require a hints file from
the /clusters/cluster/storage-elements/storage-volumes context:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard
Found unclaimed storage-volume VPD83T1:HITACHI R45150040023 vendor HITACHI :
claiming and naming HDS20816_0023.
Found unclaimed storage-volume VPD83T1:HITACHI R45150040024 vendor HITACHI :
claiming and naming HDS20816_0024.
.
.
.
Fri, 20 May 2011 16:38:14 +0000 Progress : 6/101 storage_volumes processed
(6%).
.
.
.
Fri, 20 May 2011 16:38:14 +0000 Progress : 96/101 storage_volumes processed
(96%).
.
.
.
Claimed 37 storage-volumes in storage array Symm0487
Claimed 64 storage-volumes in storage array HDS20816
Claimed 101 storage-volumes in total.
See also
l storage-volume claim
l storage-volume unclaim
storage-volume find-array
Searches storage arrays for the specified storage-volumes.
Contexts
All contexts.
Syntax
storage-volume find-array
[-d|--opt_s_vol] storage-volume
Arguments
Required arguments
[-d|--opt_s_vol] storage-volume * Storage volume pattern for which to search. The pattern
conforms to glob. The following pattern symbols are supported: *, ?, [seq], [!seq].
* argument is positional.
Description
Searches all the storage arrays in all clusters for the specified storage volumes.
The search is case-sensitive.
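The supported pattern symbols (*, ?, [seq], [!seq]) and the case-sensitive matching correspond to Python's fnmatch.fnmatchcase, so the search can be sketched as follows; the volume names come from the example output in this section:

```python
from fnmatch import fnmatchcase

volumes = [
    "cluster-1_journal",
    "cluster-1_journal_1",
    "CLAR1912_10G_Aleve_1_vol_1",
]

def find(pattern):
    """Case-sensitive glob search over known storage-volume names."""
    return [name for name in volumes if fnmatchcase(name, pattern)]
```

Because matching is case-sensitive, a lowercase pattern such as clar* matches nothing here.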
Example
Find all storage arrays for storage volumes in cluster-1:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> find-array *
Searching for cluster-1_journal
Storage-volume: cluster-1_journal is in: /clusters/cluster-1/storage-elements/
storage-arrays/EMC-Invista-14403b
Searching for cluster-1_journal_1
Storage-volume: cluster-1_journal_1 is in: /clusters/cluster-1/storage-
elements/storage-arrays/EMC-Invista-14403b
Searching for CLAR1912_10G_Aleve_1_vol_1
Storage-volume: CLAR1912_10G_Aleve_1_vol_1 is in: /clusters/cluster-1/storage-
elements/storage-arrays/EMC-CLARiiON-APM00111501912
Searching for CLAR1912_10G_Aleve_1_vol_2
Storage-volume: CLAR1912_10G_Aleve_1_vol_2 is in: /clusters/cluster-1/storage-
elements/storage-arrays/EMC-CLARiiON-APM00111501912
.
.
.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> find-array -d
VPD83T3:60060160d2a02c00ff3b1abb99e3e011
Searching for VPD83T3:60060160d2a02c00ff3b1abb99e3e011
Storage-volume: VPD83T3:60060160d2a02c00ff3b1abb99e3e011 is in: /clusters/
cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00111402062
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> find-array --
opt_s_vol VPD83T3:60060160d2a02c00ff3b1abb99e3e011
Searching for VPD83T3:60060160d2a02c00ff3b1abb99e3e011
Storage-volume: VPD83T3:60060160d2a02c00ff3b1abb99e3e011 is in: /clusters/
cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00111402062
See also
l storage-volume claimingwizard
storage-volume forget
Tells the cluster that a storage volume or a set of storage volumes has been physically removed.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
forget.
Syntax
storage-volume forget
[-d|--storage-volumes] path [,path...]
Arguments
Required arguments
[-d|--storage-volumes] path[, path...] * List of one or more storage volumes to forget.
* - argument is positional.
Description
Storage volumes can be remembered even if a cluster is not currently in contact with them. Use this command to tell the cluster that the storage volumes are not coming back and that it is therefore safe to forget them.
The storage-volume forget command can be used only on storage volumes that are unclaimed or unusable, and unreachable.
This command also forgets the logical unit for each storage volume.
Forgotten storage volumes are removed from the context tree.
Use the --verbose argument to print a message for each volume that could not be forgotten.
Use the logical-unit forget command for the functionality supported by the removed
arguments.
Example
Forget a specified storage volume:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> forget --
storage-volume VPD83T3:6006016021d0250027b925ff60b5de11
Forget all unclaimed, unused, and unreachable storage volumes on the cluster:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume
forget *
3 storage-volumes were forgotten.
Use the --verbose argument to display detailed information while you forget all unclaimed,
unused, and unreachable storage volumes on the cluster:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume
forget * --verbose
WARNING: Error forgetting storage-volume
'VPD83T3:60000970000192602773533030353933': The 'use' property of storage-
volume 'VPD83T3:60000970000192602773533030353933' is 'meta-data' but must be
'unclaimed' or 'unusable' before it can be forgotten.
.
.
.
3 storage-volumes were forgotten:
VPD83T3:6006016030802100e405a642ed16e111
.
.
See also
l logical-unit forget
l storage-volume unclaim
storage-volume list-banished
Displays banished storage-volumes on a director.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
list-banished.
Syntax
storage-volume list-banished
[-n|--director] path
Arguments
Required arguments
[-n|--director] path * The director for which to display banished storage volumes.
Description
Displays the names of storage volumes that are currently banished for a given director.
See “Banished storage volumes (LUNs)” in the storage-volume unbanish command
description.
Example
In the following example, director-1-1-A has one banished storage volume:
See also
l storage-volume auto-unbanish-interval
l storage-volume unbanish
storage-volume list-thin-capable
Provides a summary of all thin-capable storage-volumes and shows whether the volumes are declared thin (thin-rebuild).
Contexts
All contexts.
Syntax
storage-volume list-thin-capable
[-c|--clusters] context path[, context path...]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-c | --clusters ] context path * Specifies the clusters at which to list the thin-capable
storage-volumes.
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not have any effect for some commands.
* - argument is positional.
Description
Lists all thin-capable storage volumes at the given clusters with an abbreviated list of fields for
performance. The fields include: name, thin-rebuild status, capacity, current use, and I/O status. If
more fields are desired, use the --verbose option.
Example
Displays thin-capable storage volumes for the specified clusters.
cluster-2:
Name Thin Rebuild Capacity Use IO Status
------------------------ ------------ -------- ------- ---------
VPD83T3:514f0c5d8320055e false 10G claimed alive
VPD83T3:514f0c5d83200560 false 10G claimed alive
XtremIO0541_LUN_00000 false 10G claimed alive
XtremIO0541_LUN_00002 false 10G claimed alive
XtremIO0541_LUN_00004 false 10G claimed alive
XtremIO0541_LUN_00005 false 10G claimed alive
XtremIO0541_LUN_00006 false 10G claimed alive
XtremIO0541_LUN_00007 false 10G claimed alive
XtremIO0541_LUN_00008 false 10G claimed alive
XtremIO0541_LUN_00009 false 10G claimed alive
XtremIO0541_LUN_00010 false 10G claimed alive
VPlexcli:/>
See also
l virtual-volume list-thin
storage-volume resurrect
Resurrects the specified storage-volumes.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
resurrect.
Syntax
storage-volume resurrect
[-d|--storage-volume] path[, path...]
[-f|--force]
Arguments
Required arguments
[-d|--storage-volume] path[, path...] List of one or more storage volumes with dead I/O status to resurrect.
Optional arguments
[-f|--force] Force the storage-volume resurrect and bypass the
test.
Description
Resurrects the specified dead storage volumes and tests the resurrected device before setting its
state to healthy.
A storage volume is declared dead:
l After VPLEX retries a failed I/O to the backend arrays 20 times without success.
l If the storage volume is reachable but errors prevent the I/O from succeeding.
A storage volume declared hardware dead cannot be unclaimed or removed (forgotten). Use this
command to resurrect the storage volume. After the storage volume is resurrected, it can be
unclaimed and removed.
CAUTION Fix the root cause before resurrecting a storage volume because the volume can be
successfully resurrected only to go back to dead on the next I/O.
This command will not work if the storage volume is marked unreachable.
This command has no ill effects if issued for a healthy storage volume.
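The 20-retry rule described above can be sketched as a simple loop. This is illustrative only, not VPLEX source; attempt_io is a hypothetical callable that returns True when an I/O succeeds.

```python
# Sketch (not VPLEX source): a storage volume is declared dead after a
# failed I/O has been retried against the back-end array 20 times
# without success.
MAX_RETRIES = 20

def io_status(attempt_io):
    # Retry the hypothetical I/O up to MAX_RETRIES times.
    for _ in range(MAX_RETRIES):
        if attempt_io():
            return "alive"
    return "dead"

print(io_status(lambda: False))  # every retry fails -> "dead"
print(io_status(lambda: True))   # first attempt succeeds -> "alive"
```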
LUNs exported from storage arrays can disappear or display I/O errors for various reasons,
including:
l Marked read-only during copies initiated by the storage array
l Unrecoverable device errors
l Snapshot activation or deactivation on the storage array
l An operator shrinks the size of a storage volume, causing VPLEX to refuse to do I/O to the
storage volume.
l 100% allocated thin pools
l Persistent reservation on storage volume
l Dropped frames due to a bad cable or SFP
Dead storage volumes are indicated by one of the following conditions:
l The cluster summary command shows degraded health-state and one or more unhealthy
storage volumes. For example:
l The storage-volume summary command shows the I/O status of the volume as dead. For
example:
Examples
Resurrect two storage volumes:
See also
l cluster status
l storage-volume forget
l storage-volume summary
storage-volume summary
Displays a list of a cluster's storage volumes.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
summary.
Syntax
storage-volume summary
[-c|--clusters] cluster[, cluster...]
Arguments
Optional arguments
[-c|--clusters] cluster[, cluster...] Displays storage volumes for only the specified clusters.
Description
Displays a two-part summary for each cluster's storage volumes:
l I/O status, operational status, and health state for each unhealthy storage volume.
l Summary of health-state, vendor, use, and total capacity for the cluster.
Use the --clusters argument to restrict output to only the specified clusters.
If no argument is used, and the command is executed at or below a /clusters/cluster
context, output is for the specified cluster only.
Otherwise, output is for all clusters.
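The scoping rule above can be sketched as follows. This is an illustrative model only; contexts are treated as plain slash-separated paths.

```python
# Sketch of the scoping rule for storage-volume summary: with no
# --clusters argument, output is restricted to a cluster only when the
# current context is at or below /clusters/<cluster>; otherwise all
# clusters are summarized.
def summary_scope(current_context, clusters_arg=None):
    if clusters_arg:
        return clusters_arg
    parts = current_context.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "clusters":
        return [parts[1]]
    return ["all"]

print(summary_scope("/clusters/cluster-1/storage-elements"))  # ['cluster-1']
print(summary_scope("/engines/engine-1-1"))                   # ['all']
print(summary_scope("/", clusters_arg=["cluster-2"]))         # ['cluster-2']
```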
Field Description
Storage-Volume Summary
out-of-date Of the total number of storage volumes on the cluster, the number that are out-of-date compared to their mirror.
Examples
Display default summary (all clusters) on a VPLEX with unhealthy volumes:
Use meta-data 4
unusable 0
used 358
Capacity total 2T
SUMMARY (cluster-2)
Storage-Volume Summary (no tier)
---------------------- --------------------
Health out-of-date 0
storage-volumes 362
unhealthy 0
Vendor DGC 114
EMC 248
Use meta-data 4
used 358
Capacity total 1.99T
When slot usage reaches 90%, this command also displays the following:
Display summary for both clusters in a VPLEX with no unhealthy storage volumes:
See also
l ds summary
l ds dd set-log
l export port summary
l export storage-view summary
l extent summary
l local-device summary
l storage-volume resurrect
l virtual-volume provision
storage-volume unbanish
Unbanishes a storage volume on one or more directors.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is unbanish.
Syntax
storage-volume unbanish
[-n|--directors] path[, path...]
[-d|--storage-volume] path
Arguments
Required arguments
[-n|--directors] path[, path...] * The context path of the directors on which to unbanish the given storage volume.
Optional arguments
[-d|--storage-volume] path The context path of the storage volume to unbanish.
* - argument is positional.
Description
VPLEX examines path state information for LUNs on arrays. If the path state information is
inconsistent, VPLEX banishes the LUN, and makes it inaccessible.
Use this command to unbanish a banished LUN (storage volume).
Banished storage volumes (LUNs)
LUNs (storage volumes) are banished when VPLEX detects an unexpected configuration of array
controllers or paths to arrays. Under normal active/passive operation, one controller for any given LUN is active and the other is passive.
If the path to the active controller fails, the passive path transitions to active. The transition must
wait for the failed active controller to drain its pending I/Os. This transient state may be seen
during disk replacement, hot sparing, and disk failure.
If the system detects a LUN in this state, it waits 20 seconds for the LUN to return to normal. If
the LUN does not return to the expected state, the system banishes the LUN.
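The 20-second grace period described above can be modeled as follows. This is an illustrative sketch, not VPLEX source; the one-observation-per-second state trace is hypothetical.

```python
# Sketch of the banishment rule: after detecting a LUN in an unexpected
# transitional state, the system waits up to 20 seconds for it to return
# to normal; if it does not, the LUN is banished.
GRACE_SECONDS = 20

def decide(states_per_second):
    # states_per_second: hypothetical trace, one observed state per second.
    for elapsed, state in enumerate(states_per_second):
        if state == "normal":
            return "ok"
        if elapsed >= GRACE_SECONDS:
            return "banished"
    return "waiting"

print(decide(["transitional"] * 5 + ["normal"]))  # recovers in time -> "ok"
print(decide(["transitional"] * 30))              # stuck past 20 s -> "banished"
```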
Example
In the following example:
l The list-banished command shows a volume is banished from director 1-1-A
l The unbanish command unbanishes the volume.
l The list-banished command shows the change:
See also
l storage-volume auto-unbanish-interval
l storage-volume list-banished
storage-volume unclaim
Unclaims the specified previously claimed storage volumes.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
unclaim.
Syntax
storage-volume unclaim
[-b|--batch-size] integer
[-d|--storage-volumes] path, [path...]
[-r|--return-to-pool]
Arguments
Required arguments
[-d|--storage-volumes] path[, path...] * Specifies the storage volumes to unclaim.
Optional arguments
[-b|--batch-size] integer Specifies the maximum number of storage volumes to
unclaim at once.
[-r|--return-to-pool] Returns the storage capacity of each VIAS-based
volume to the pool on the corresponding storage-array.
* - argument is positional.
Description
Use the storage-volume unclaim command to return the specified storage volumes to the
unclaimed state.
The target storage volume must not be in use.
Note: When you use the storage-volume unclaim command with VIAS-based storage volumes, the command removes the storage volumes from VPLEX and they are no longer visible. When you use the command with non-VIAS-based storage volumes, the command marks the storage volumes as unclaimed. This is the intended behavior.
Unclaim a thin storage volume
When a storage volume is unclaimed, the thin-rebuild attribute is set to false.
Note: The thin-rebuild attribute can only be modified for storage volumes that are either
claimed or used. When the unclaimed storage volume is claimed and its state is claimed or
used, use the set command to modify the thin-rebuild attribute.
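The effect of the --batch-size argument can be sketched as simple chunking. This is illustrative only; the volume names are hypothetical.

```python
# Sketch of batching as described for --batch-size: volumes are
# processed in fixed-size groups rather than all at once.
def batches(volumes, batch_size):
    for i in range(0, len(volumes), batch_size):
        yield volumes[i:i + batch_size]

vols = [f"vol_{n}" for n in range(5)]
print([len(b) for b in batches(vols, 2)])  # [2, 2, 1]
```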
Example
In the following example:
l The ll command in storage-volumes context displays storage volumes, including their use
state,
l The storage-volume unclaim command unclaims two claimed volumes:
VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> ll
VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> unclaim -d Basic_c1_ramdisk_100GB_686_
See also
storage-volume used-by
Displays the components that use the specified storage volumes.
Contexts
All contexts.
In /clusters/cluster/storage-elements/storage-volumes context, command is
used-by.
Syntax
storage-volume used-by
[-d|--storage-volumes] path [,path...]
Arguments
Required arguments
[-d|--storage-volumes] path * List of one or more storage volumes for which to find users.
Description
To manually deconstruct an encapsulated storage volume, remove each layer starting from the
top.
Use the storage-volume used-by command to see the layers from the bottom up.
Example
VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> used-by
CX4_lun0
/clusters/cluster-1/devices/base0:
extent_CX4_lun0_1
CX4_lun0
/clusters/cluster-1/devices/base1:
extent_CX4_lun0_2
CX4_lun0
/clusters/cluster-1/devices/base2:
extent_CX4_lun0_3
CX4_lun0
/clusters/cluster-1/devices/base3:
extent_CX4_lun0_4
CX4_lun0
/clusters/cluster-1/storage-elements/extents/extent_CX4_lun0_5:
CX4_lun0
/clusters/cluster-1/storage-elements/extents/extent_CX4_lun0_6:
CX4_lun0
syrcollect
Collects system configuration data for System Reporting (SYR).
Contexts
All contexts.
Syntax
syrcollect
[-d|--directory] directory
Arguments
Optional arguments
[-d|--directory] directory Non-default directory in which to store the output. Files saved in the non-default directory are not automatically sent to Dell EMC.
l Default: Files are stored in the Event_Msg_Folder in the directory specified in the EmaAdaptorConfig.properties file.
l EmaAdaptorConfig.properties and the Event_Msg_Folder are located in /opt/emc/VPlex on the management server.
l Files in the default directory are automatically sent to Dell EMC.
Description
Manually starts a collection of SYR data, and optionally sends the resulting zip file to Dell EMC.
Run this command after every major configuration change or upgrade.
Data collected includes:
l VPLEX information
l RecoverPoint information (if RecoverPoint is configured)
l Cluster information
l Engine/chassis information
l RAID information
l Port information
l Back end storage information
The output of the command is a zipped XML file named VPLEXTLA_Config_TimeStamp.zip in the specified output directory.
Files in the default directory are automatically sent to Dell EMC.
Use the --directory argument to specify a non-default directory. Output files sent to a non-
default directory are not automatically sent to Dell EMC.
Example
Start an SYR data collection, and send the output to Dell EMC:
VPlexcli:/> syrcollect
Start an SYR data collection, and send the output to the specified directory:
See also
l scheduleSYR add
l scheduleSYR list
l scheduleSYR remove
tree
Displays the context tree.
Contexts
All contexts.
Syntax
tree
[-e|--expand]
[-c|--context] subcontext-root
[-s|--select] glob-pattern
Arguments
Optional arguments
[-e|--expand] Expand the subcontexts.
[-c|--context] subcontext-root The subcontext to use as the root for the tree.
[-s|--select] glob-pattern Glob pattern for selecting the contexts in the tree.
Description
Displays the sub-context tree.
Use the tree command with no arguments to display the sub-context tree from the current context.
Use the --context subcontext-root argument to display the sub-context tree from the specified subcontext.
Use the --expand argument to expand the sub-contexts, if applicable.
Use the --select glob-pattern argument to display contexts in the specified sub-tree that match the glob pattern. The glob pattern may also match contexts outside the given sub-tree.
Examples
Display contexts below the current context:
VPlexcli:/management-server> tree
/management-server:
ports
eth0
eth1
eth2
eth3
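The nested listing above can be modeled as a simple recursive walk. This is an illustrative sketch; the contexts are represented as a hypothetical nested dict.

```python
# Sketch of how a context tree is rendered: each context is printed,
# then its sub-contexts recursively, indented two spaces per level.
def tree(ctx, name="/", indent=0):
    lines = [" " * indent + name]
    for child, sub in sorted(ctx.items()):
        lines += tree(sub, child, indent + 2)
    return lines

contexts = {"ports": {"eth0": {}, "eth1": {}, "eth2": {}, "eth3": {}}}
print("\n".join(tree(contexts, "/management-server")))
```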
See also
l drill-down
l set
unalias
Removes a command alias.
Contexts
All contexts.
Syntax
unalias
[-n|--name] name
[-a|--all]
Arguments
Optional arguments
[-n|--name] name The name of the alias to remove.
[-a|--all] Remove all defined aliases.
Example
In the following example:
l alias displays a list of all aliases on the VPLEX
l unalias deletes the specified alias
VPlexcli:/> alias
Name Description
------------ -------------------------------------------
? Substitutes the 'help' command.
GoToDir_2_2A Substitutes the 'cd
/engines/engine-2-2/directors/Cluster_2_Dir_2A' command.
ll Substitutes the 'ls -al' command.
quit Substitutes the 'exit' command.
VPlexcli:/> unalias GoToDir_2_2A
VPlexcli:/> alias
Name Description
---- ---------------------------------
? Substitutes the 'help' command.
ll Substitutes the 'ls -al' command.
quit Substitutes the 'exit' command.
See also
l alias
user add
Adds a username to the VPLEX management server and optionally assigns a role to the added
username.
Contexts
All contexts.
Syntax
user add
[-u|--username] username
[-r|--rolename] rolename
Arguments
Required arguments
[-u|--username] username Username to add.
Optional arguments
[-r|--rolename] rolename Rolename to assign.
Description
Administrator privileges are required to execute the user add command.
VPLEX has two pre-configured CLI users that cannot be removed: admin and service.
Note: In VPLEX Metro configurations, the system does not propagate VPLEX CLI accounts
created on one management server to the second management server. The user list
command displays only those accounts configured on the local management server, not both
servers.
Administrative privileges are required to add, delete, and reset user accounts. You must reset
the password for the admin account the first time you access the admin account. After the
admin password is reset, the admin user can manage (add, delete, reset) user accounts.
To change the password for the admin account, ssh to the management server as user admin.
Enter the default password listed in the Dell EMC VPLEX Security Configuration Guide. A prompt
to change the admin account password appears. Enter a new password.
Examples
Log in to the CLI as an Administrator user.
At the CLI prompt, type the user add command:
admin password:
New password:
Confirm password:
Type the user list command to verify the new username is added:
See also
l user event-server change-password
l user passwd
l user remove
l user reset
user event-server add-user
Adds a user to the event server.
Syntax
Arguments
Required arguments
[-u|--username] username * Specifies the user name to add in the event server.
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - argument is positional.
Description
Add a user to the event server to access VPLEX events.
Example
Add a user to the event server:
See also
l user event-server change-password
user event-server change-password
Changes the password that an external subscriber uses to access VPLEX events.
Arguments
Required arguments
[-u|--username] username * Specifies the user name for which to change the password.
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This may not
have any effect for some commands.
* - argument is positional.
Description
Change the password that an external subscriber uses to access VPLEX events.
Examples
An event-server user changes the default password:
See also
l user event-server add-user
user list
Displays usernames configured on the local VPLEX management server.
Contexts
All contexts.
Syntax
user list
Description
Displays the configured usernames.
Note: In VPLEX Metro configurations, the system does not propagate VPLEX CLI accounts
created on one management server to the second management server. The user list
command displays only accounts configured on the local management server, not on both
servers.
Examples
Display the user accounts configured on the local management server:
See also
l user add
l user passwd
l user remove
l user reset
user passwd
Allows a user to change the password for their own username.
Contexts
All contexts.
Syntax
user passwd
[-u|--username] username
Arguments
Required arguments
[-u|--username] username *Username for which to change the password.
* - argument is positional.
Description
Executable by all users to change the password only for their own username.
Examples
old password:
New password:
Type the new password. Passwords must be at least 8 characters long, and must not be dictionary
words.
A prompt to confirm the new password appears:
Confirm password:
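The password rule stated above (at least 8 characters and not a dictionary word) can be sketched as follows. This is illustrative only; the dictionary set is a hypothetical stand-in.

```python
# Sketch of the stated password policy: minimum length 8, and the
# candidate must not be a dictionary word.
DICTIONARY = {"password", "welcome", "admin"}  # hypothetical stand-in

def password_ok(candidate):
    return len(candidate) >= 8 and candidate.lower() not in DICTIONARY

print(password_ok("password"))     # dictionary word -> False
print(password_ok("short"))        # too short -> False
print(password_ok("Tr1cky-Pass"))  # -> True
```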
See also
l user add
l user event-server change-password
l user remove
l user reset
user remove
Removes a username from the VPLEX management server.
Contexts
All contexts.
Syntax
user remove
[-u|--username] username
Arguments
Required arguments
[-u|--username] username Username to remove.
Description
Administrator privileges are required to execute the user remove command.
Note: Administrative privileges are required to add, delete, and reset user accounts. The
password for the admin account must be reset the first time the admin account is accessed.
After the admin password has been reset, the admin user can manage (add, delete, reset)
user accounts.
To change the password for the admin account, ssh to the management server as user
admin. Enter the default password listed in the Dell EMC VPLEX Security Configuration Guide.
A prompt to change the admin account password appears. Enter a new password.
Example
Log in as an Administrator user.
Type the user remove username command:
See also
l user add
l user event-server change-password
l user passwd
l user reset
user reset
Allows an Administrator user to reset the password for any username.
Contexts
All contexts.
Syntax
user reset
[-u|--username] username
Arguments
Required arguments
[-u|--username] username The username whose password is to be reset.
Description
Resets the password for any username.
Administrator privileges are required.
Note: Administrative privileges are required to add, delete, and reset user accounts. The
password for the admin account must be reset the first time the admin account is accessed.
After the admin password has been reset, the admin user can manage (add, delete, reset) user
accounts.
To change the password for the admin account, ssh to the management server as user
admin. Enter the default password listed in the Dell EMC VPLEX Security Configuration Guide.
A prompt to change the admin account password appears. Enter a new password.
All users can change the password for their own account using the user passwd command.
Examples
Log in as an Administrator user.
Type the user reset --username username command:
New password:
Confirm password:
validate-system-configuration
Performs a basic system configuration check.
Contexts
All contexts.
Syntax
validate-system-configuration
Description
This command performs the following checks:
l Validates cache mirroring.
l Validates the logging volume.
l Validates the meta-volume.
l Validates back-end connectivity.
Examples
Validate system configuration:
VPlexcli:/> validate-system-configuration
Validate cache replication
Checking cluster cluster-1 ...
rmg component not found skipping the validation of cache replication.
ok
Validate logging volume
No errors found
ok
Validate back-end connectivity
Cluster cluster-2
0 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for
storage volume paths*.
0 storage-volumes which are not visible from all directors.
*To meet the high availability requirement for storage volume paths each
storage volume must be accessible from each of the directors through 2 or
more VPlex backend ports, and 2 or more Array target ports, and there should
be 2 or more ITLs.
Cluster cluster-1
10 storage-volumes which are dead or unreachable.
0 storage-volumes which do not meet the high availability requirement for
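The high-availability requirement quoted in the output above can be sketched as a check over ITL records. This is illustrative only; the tuple layout (back-end port, array target port, LUN) is an assumption.

```python
# Sketch of the HA path check: a storage volume meets the requirement
# when it is reachable through 2 or more VPLEX back-end ports, 2 or more
# array target ports, and there are 2 or more ITLs.
def meets_ha_requirement(itls):
    # itls: hypothetical list of (backend_port, array_port, lun) tuples.
    backend_ports = {b for b, _, _ in itls}
    array_ports = {a for _, a, _ in itls}
    return len(backend_ports) >= 2 and len(array_ports) >= 2 and len(itls) >= 2

print(meets_ha_requirement([("B0", "A0", 0), ("B1", "A1", 0)]))  # True
print(meets_ha_requirement([("B0", "A0", 0)]))                   # False
```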
See also
l cluster status
l connectivity validate-be
l health-check
vault go
Initiates a manual vault on every director in a given cluster under emergency conditions.
Contexts
All contexts.
Syntax
vault go
[-c|--cluster] cluster
[--force]
Arguments
Description
Use this command to initiate a manual dump from every director in a given cluster to persistent
local storage under emergency conditions.
Use this command to manually start cache vaulting if an emergency shutdown is required and the
storage administrator cannot wait for automatic vaulting to begin.
Examples
Start a manual vault on cluster-1:
See also
l vault overrideUnvaultQuorum
l vault status
l Dell EMC VPLEX Administration Guide
vault overrideUnvaultQuorum
Allows the cluster to proceed with the recovery of the vaults without all the required directors.
Contexts
All contexts.
Syntax
vault overrideUnvaultQuorum
[-c|--cluster] cluster
[--evaluate-override-before-execution]
[--force]
Arguments
Description
WARNING This command could result in data loss.
Use this command to tell the cluster not to wait for all the required directors to rejoin the cluster
before proceeding with vault recovery.
Use this command with the --evaluate-override-before-execution argument to
evaluate the cluster's vault status and make a decision whether to accept a possible data loss and
continue to bring the cluster up. The evaluation provides information as to whether the cluster has
sufficient vaults to proceed with the vault recovery that will not lead to data loss.
Note: One valid vault can be missing without experiencing data loss.
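The evaluation described above can be modeled minimally. This is an illustrative sketch of the one-missing-vault rule only, not VPLEX source, and the director counts are hypothetical.

```python
# Sketch: vault recovery can proceed without data loss when at most one
# valid vault is missing from the configured directors.
def can_recover_without_loss(configured_directors, valid_vaults_present):
    missing = configured_directors - valid_vaults_present
    return missing <= 1

print(can_recover_without_loss(4, 3))  # one valid vault missing -> True
print(can_recover_without_loss(4, 1))  # three missing -> False
```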
Examples
In the following example, the --evaluate-override-before-execution argument evaluates
the cluster's unvault quorum state in the following circumstances:
l Three directors in a dual engine configuration booted and joined the cluster.
l None of these directors have a valid vault.
l The cluster is waiting for the remaining director to join the cluster before unvault recovery
quorum is established.
In the following example, the command overrides the unvault quorum wait state in the following
circumstances:
l None of the operational directors have valid vaults
l One director is not operational
In the following example, the command evaluates the cluster vault status and overrides the unvault
quorum when:
l Three of four configured directors are operational and have valid vaults
l One director is not operational
See also
l vault go
l vault status
l Dell EMC VPLEX Administration Guide
l VPLEX procedures in the SolVe Desktop
vault status
Displays the current cache vault/unvault status of the cluster.
Contexts
All contexts.
Syntax
vault status
[-c|--cluster] cluster
[--verbose]
Arguments
Optional arguments
[-c|--cluster] cluster Displays vault status for the specified cluster.
[--verbose] Displays additional description and data.
Description
Cache vaulting safeguards dirty data during power outages. Cache vaulting dumps all dirty data to
persistent local storage. Vaulted data is recovered (unvaulted) when power is restored.
This command always displays the cluster’s vault state and the vault state of each of the cluster’s
directors.
When run after a vault has begun and the vault state is Vault Writing or Vault Written, the
following information is displayed:
l Total number of bytes to be vaulted in the cluster
l Estimated time to completion for the vault
When run after the directors have booted and unvaulting has begun and the states are
Unvaulting or Unvault Complete, the following information is displayed:
l Total number of bytes to be unvaulted in the cluster
l Estimated time to completion for the unvault
l Percent of bytes remaining to be unvaulted
l Number of bytes remaining to be unvaulted
If you enter the --verbose argument, the command displays the following information:
n Average vault or unvault rate.
If this command is run after the directors have booted, unvaulted, and are waiting to acquire an
unvault quorum:
l The state is Unvault Quorum Waiting.
l The output displays a list of directors that are preventing the cluster from gaining unvault
quorum. If the --verbose argument is used, the following additional information is displayed:
n Vaulted data is valid or invalid.
n Required number of directors to proceed with the recovery of vault.
n Number of directors missing and preventing the cluster from proceeding with the recovery
of vault.
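The estimated time to completion reported by this command follows from the bytes remaining and the average rate. A minimal sketch, with hypothetical figures:

```python
# Sketch: estimated time to completion = bytes remaining / average
# vault-or-unvault rate.
def eta_seconds(bytes_remaining, avg_bytes_per_second):
    if avg_bytes_per_second <= 0:
        return None  # no rate observed yet
    return bytes_remaining / avg_bytes_per_second

# e.g. 6 GiB left at 128 MiB/s -> 48 seconds
print(eta_seconds(6 * 2**30, 128 * 2**20))
```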
Field Description
Power Loss Detected Power loss has been detected. Waiting for power to be restored.
Unvault Complete All vaulted dirty data has been read from local persistent storage.
Examples
Display the summarized status for a cluster that is not currently vaulting or unvaulting:
/engines/engine-1-1/directors/director-1-1-A:
state: Vault Inactive
See also
l vault go
l vault overrideUnvaultQuorum
verify fibre-channel-switches
Verifies that the Fibre Channel switch on each cluster's internal management network has been
configured correctly.
Contexts
/clusters/cluster-n
Syntax
verify fibre-channel-switches
Description
Verifies that the Fibre Channel switch on each cluster's internal management network has been
configured correctly.
Passwords for the service accounts on the switches are required to run this command.
Example
Verify the internal management network:
version
Displays version information for connected directors.
Contexts
All contexts.
Syntax
version
[-a|--all]
[-n|--directors] context-path[, context-path...]
[--verbose]
Arguments
Optional arguments
[-a|--all] Displays version information for all connected directors.
[-n|--directors] context-path[, context-path...] Displays version information for the specified directors.
[--verbose] Displays version information for individual software components on each director.
Description
This command displays version information for all directors, a specified director, or individual
software components for each director.
Examples
Display management server/SMS version information:
VPlexcli:/> version
What Version Info
-------------------- -------------- ----
Product Version 5.4.0.00.00.10 -
SMSv2 D35.20.0.10.0 -
Mgmt Server Base D35.20.0.1 -
Mgmt Server Software D35.20.0.13 -
Display management server/SMS version and version for the specified director:
Display version information for management server, SMS, and all directors:
VPlexcli:/> version -a
What Version Info
-------------------------------------------- -------------- ----
Product Version 5.4.0.00.00.10 -
SMSv2 D35.20.0.10.0 -
Mgmt Server Base D35.20.0.1 -
Mgmt Server Software D35.20.0.13 -
/engines/engine-2-1/directors/director-2-1-B 6.5.54.0.0 -
/engines/engine-2-1/directors/director-2-1-A 6.5.54.0.0 -
/engines/engine-1-1/directors/director-1-1-B 6.5.54.0.0 -
/engines/engine-1-1/directors/director-1-1-A 6.5.54.0.0 -
Display version information for individual software components on each director. See Software
components table below for a description of the components.
Version: 0005
For director /engines/engine-1-1/directors/director-1-1-A:
What: O/S
Version: D35.20.0.1 (SLES11)
What: Director Software
Version: 6.5.54.0.0
What: ECOM
Version: 6.5.1.0.0-0
What: VPLEX Splitter
Version: 4.1.b_vplex_D35_00_Ottawa_MR1.10-1
What: ZECL
Version: 6.5.52.0.0-0
What: ZPEM
Version: 6.5.52.0.0-0
What: NSFW
Version: 65.1.54.0-0
What: BIOS Rev
Version: 08.50
What: POST Rev
Version: 43.80
What: FW Bundle Rev
Version: 12.60
What: SSD Model: P30056-MTFDBAA056SAL 118032803
Version: 0005
virtual-volume create
Creates a virtual volume on a host device.
Contexts
All contexts.
Syntax
virtual-volume create
[-r|--device] context-path
[-t|--set-tier] tier
[-n | --thin]
[-i | --initialize]
[--confirm-init]
[--verbose]
Arguments
Required arguments
[-r | --device] context-path * Device on which to host the virtual volume.
Optional arguments
[-t|--set-tier] tier Set the storage-tier for the new virtual volume.
[-n|--thin] Specifies whether to create a thin-enabled virtual volume.
[-i|--initialize] Initializes the virtual volume by erasing 10 MB of the initial storage blocks. This prevents the virtual volume from retaining old or stale data. This option must be used together with the --confirm-init option.
[--confirm-init] Confirms the initialization; must be specified with --initialize.
* - argument is positional.
Description
A virtual volume is created on a device or a distributed device, and is presented to a host through a
storage view. Virtual volumes are created on top-level devices only, and always use the full
capacity of the device or distributed device.
The underlying storage of a virtual volume may be distributed over multiple storage volumes, but
appears as a single contiguous volume.
The specified device must not already have a virtual volume and must not have a parent device.
Use the --set-tier argument to set the storage tier for the new virtual volume.
Use the ll command in a specific virtual volume’s context to display the current storage-tier.
Use the set command to modify a virtual volume’s storage-tier.
Examples
In the following example:
l The virtual-volume create command creates a new virtual volume,
l The cd command navigates to the new virtual volume’s context,
l The ll command displays the new virtual volume:
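Before the transcript below, the create command might have been issued as follows (the device context path is an assumption based on the supporting-device field):
VPlexcli:/clusters/cluster-1/virtual-volumes> virtual-volume create --device /clusters/cluster-1/devices/r0_C1_VATS_00001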
VPlexcli:/clusters/cluster-1/virtual-volumes> cd r0_C1_VATS_00001_vol
VPlexcli:/clusters/cluster-1/virtual-volumes/r0_C1_VATS_00001_vol> ll
Name Value
-------------------------- ----------------------------------------
block-count 20971520
block-size 4K
cache-mode synchronous
capacity 80G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state ok
initialization-status success
locality local
operational-status ok
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status unexported
storage-array-family clariion
storage-tier -
supporting-device r0_C1_VATS_00001
system-id r0_C1_VATS_00001_vol
thin-capable false
thin-enabled unavailable
volume-type virtual-volume
vpd-id VPD83T3:6000144000000010200ecb6260b7ac42
VPlexcli:/clusters/cluster-1/virtual-volumes/r0_C1_VATS_00001_vol>
See Also
l virtual-volume destroy
l virtual-volume expand
l virtual-volume provision
l virtual-volume reinitialize
virtual-volume destroy
Destroys existing virtual volumes.
Contexts
All contexts.
Syntax
virtual-volume destroy
[-v|--virtual-volumes] context-path,context-path...
[-f|--force]
Arguments
Required arguments
[-v|--virtual-volumes] context-path, context-path...
List of one or more virtual volumes to destroy. Entries must be separated by commas. The specified virtual volumes must not be exported to hosts.
Optional arguments
[-f|--force]
Forces the destruction of the virtual volumes without asking for confirmation. Allows this command to be run from non-interactive scripts.
Description
Deletes the virtual volume and leaves the underlying structure intact. The data on the volume is no
longer accessible.
Only unexported virtual volumes can be deleted. To delete an exported virtual volume, first remove
the volume from the storage view.
Examples
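For example, to destroy an unexported virtual volume without a confirmation prompt (the volume name is illustrative):
VPlexcli:/> virtual-volume destroy --virtual-volumes /clusters/cluster-1/virtual-volumes/test_r1_vol --force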
See also
l virtual-volume create
l virtual-volume expand
l virtual-volume provision
virtual-volume expand
Non-disruptively increases the capacity of an existing virtual volume.
Contexts
All contexts.
In the clusters/cluster/virtual-volumes/ context and below, the command is expand.
Syntax
virtual-volume expand
[-v|--virtual-volume] context-path
[-e|--extent] extent
[-f|--force]
Arguments
Required arguments
[-v|--virtual-volume] context-path
* The virtual volume to expand.
l For both storage-volume and concatenation methods of expansion, the virtual volume must be expandable, and have a geometry of RAID 1, RAID C, or RAID 0.
l For storage-volume expansions, the virtual volume must be expandable, and have a geometry of RAID 1, RAID C, RAID 0, or DR1.
Optional arguments
[-e|--extent] extent
* The target local device or extent to add to the virtual volume using the concatenation method of expansion. The local device or extent must not have a virtual volume on top of it.
[-f|--force]
The meaning of this argument varies, depending on whether the --extent argument is used (expansion method = concatenation) or not used (expansion method = storage-volume).
l For storage-volume expansion, the --force argument skips the confirmation message.
l For concatenation expansion, the --force argument expands a virtual volume built on a RAID 1 device using a target that is not a RAID 1 or that is not as redundant as the device supporting the virtual volume.
* - argument is positional.
Description
This command expands the specified virtual volume using one of two methods: storage-volume or concatenation.
The ll command output shows whether the volume is expandable, the expandable capacity (if any), and the expansion method available for the volume.
l storage-volume - If the virtual volume has a non-zero expandable-capacity, this command expands the virtual volume by its full expandable-capacity, adding that entire amount to the volume's configured capacity. To use this method, run the command without the --extent argument.
l concatenation - (also known as RAID C expansion) Expands the virtual volume by adding the specified extents or devices. To use this method, run the command with the --extent argument. The concatenation method does not support non-disruptive expansion of DR1 devices.
Note: You cannot expand a virtual volume if the initialization status of the virtual volume is failed or in-progress.
Before expanding a virtual volume, understand the limitations of the function and the prerequisites required for volumes to be expanded. See the Dell EMC VPLEX Administration Guide for more information on how expansion works. For the procedure to expand virtual volumes, see the VPLEX procedures in the SolVe Desktop.
Examples
Expand a volume using the storage-volume method:
l The ll clusters/cluster-1/virtual-volumes command displays virtual volumes, and
whether the volumes are expandable, and the expandable capacity, if any (not all columns are
shown in example).
l The ll clusters/cluster-1/virtual-volumes/virtual-volume command displays the
method (storage-volume) of expansion applicable to the volume.
l The expand command starts the expansion of the specified virtual volume.
l The ll clusters/cluster-1/virtual-volumes command displays the expanded
volume:
VPlexcli:/clusters/cluster-1/virtual-volumes> ll /clusters/cluster-1/
virtual-volumes
/clusters/cluster-1/virtual-volumes:
Name             ...Capacity Locality Supporting Cache       Expandable Expandable ...
                 ...                  Device     Mode                   Capacity   ...
---------------- ...-------- -------- ---------- ----------- ---------- ---------- ...
Raid0_1Ga_11_vol ...5G       local    raid1-dev  synchronous true       4.5G
RaidC_1Gb_11_vol ...5G       local    raid1-dev  synchronous true       0B
Test_volume      ...0.5G     local    Test       synchronous true       4.5G
.
.
.
VPlexcli:/clusters/cluster-1/virtual-volumes> ll /clusters/cluster-1/
virtual-volumes/ Test_volume
Name Value
------------------- --------------
block-count 131072
block-size 4K
cache-mode synchronous
capacity 0.5G
consistency-group -
expandable true
expandable-capacity 4.5G
expansion-method storage-volume
expansion-status -
.
.
.
VPlexcli:/clusters/cluster-1/virtual-volumes> expand -v Test_volume/
Virtual Volume expansion can take some time and once started, cannot be
cancelled. Some operations such as upgrades and data migrations will not
be possible during the expansion. In some cases hosts and their
applications may need to be restarted once the expansion has completed.
Do you wish to proceed ? (Yes/No) yes
The expansion of virtual-volume 'Test_volume' has started.
VPlexcli:/clusters/cluster-1/virtual-volumes> cd Test_volume/
VPlexcli:/clusters/cluster-1/virtual-volumes/Test_volume> ll
Name Value
------------------- --------------
block-count 131072
block-size 4K
cache-mode synchronous
capacity 0.5G
consistency-group -
expandable true
expandable-capacity 4.5G
expansion-method storage-volume
expansion-status in-progress
health-indications []
.
.
.
VPlexcli:/clusters/cluster-1/virtual-volumes> ll /clusters/cluster-1/
virtual-volumes
/clusters/cluster-1/virtual-volumes:
Name             ...Capacity Locality Supporting Cache       Expandable Expandable ...
                 ...                  Device     Mode                   Capacity   ...
---------------- ...-------- -------- ---------- ----------- ---------- ---------- ...
Raid0_1Ga_11_vol ...5G       local    raid1-dev  synchronous true       4.5G
RaidC_1Gb_11_vol ...5G       local    raid1-dev  synchronous true       0B
Test_volume      ...5G       local    Test       synchronous true       0B
.
.
.
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes
/clusters/cluster-1/virtual-volumes:
Name             Operational Health ... ... Expandable
                 Status      State  ... ...
---------------- ----------- ------ ... ... ----------
Raid0_1Ga_11_vol ok ok ... ... true
RaidC_1Gb_11_vol ok ok ... ... true
Raid1_1Gc_11_vol ok ok ... ... true
Test-Device_vol ok ok ... ... true
.
.
.
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/Test-Device_vol
Name Value
------------------ --------------------------------
.
.
.
expandable true
expansion-method concatenation
health-indications []
.
.
.
VPlexcli:/> ll /clusters/cluster-1/storage-elements/extents
/clusters/cluster-1/storage-elements/extents:
Name                             StorageVolume           Capacity Use
-------------------------------- ----------------------- -------- -------
extent_Symm1554Tdev_061D_1 Symm1554Tdev_061D 100G used
extent_Symm1554Tdev_0624_1 Symm1554Tdev_0624 100G used
extent_Symm1554Tdev_0625_1 Symm1554Tdev_0625 100G used
extent_Symm1554_0690_1 Symm1554_0690 8.43G used
extent_Symm1554_0691_1 Symm1554_0691 8.43G used
extent_Symm1554_0692_1 Symm1554_0692 8.43G used
.
.
.
VPlexcli:/> cd /clusters/cluster-1/virtual-volumes/Test-Device_vol
VPlexcli:/clusters/cluster-1/virtual-volumes/Test-Device_vol> expand --virtual-volume Test-Device_vol --extent ext_Symm1254_7BF_1
See also
l batch-migrate pause
l batch-migrate resume
l dm migration pause
l dm migration resume
l virtual-volume create
l virtual-volume destroy
l Dell EMC VPLEX Administration Guide
virtual-volume list-thin
Lists the virtual volumes at the given clusters with additional thin-property filtering options.
Contexts
All contexts.
Syntax
virtual-volume list-thin
-t | --clusters context path
-e | --enabled true|false
-c | --capable true|false
[--verbose]
Arguments
Required arguments
-t | --clusters context path
* The target cluster where virtual volumes are listed.
Optional arguments
-e | --enabled true|false Filters volumes with the matching thin-enabled value. The value
can be true or false. If omitted, the results will match volumes
regardless of whether they are thin-enabled or not.
-c | --capable true|false Filters volumes with the matching thin-capable value. The value
can be true or false. If omitted, the results will match volumes
regardless of whether they are thin-capable or not.
[-h|--help] Displays command line help.
[--verbose] Provides more help during command execution. This may not
have any effect for some commands.
* - argument is positional.
Description
This command lists virtual volumes at the given clusters with additional thin-property filtering
options.
The following table describes the filter combinations, and the results that are listed.
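For example, to list volumes at cluster-1 that are thin-capable but not thin-enabled (the cluster name is illustrative):
VPlexcli:/> virtual-volume list-thin --clusters cluster-1 --capable true --enabled false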
See also
storage-volume list-thin-capable
virtual-volume provision
Provisions new virtual volumes using storage pools.
Contexts
All contexts.
Syntax
virtual-volume provision
[-b|--base-name] base-name
[-p|--storage-pools] storage-pools [,storage-pools...]
[-n|--number-of-volumes] number-of-volumes
[-c|--capacity] capacity
[-j|--job] =job
[-g|--consistency-group] consistency-group
[-v|--storage-views] storage-views [,storage-views...]
[-t|--thin]
[-h|--help]
[--verbose]
Arguments
Required arguments
[-b|--base-name] base-name
* Specifies the base-name of the virtual-volumes to provision. Refer to the Dell EMC Simple Support Matrix for a list of supported arrays.
[-p|--storage-pools] storage-pools [, storage-pools...]
* Specifies the storage-pools to use for provisioning. Storage pools must be given in cluster-name, storage-array-name, storage-pool-name [, cluster-name, storage-array-name, storage-pool-name...] tuple format. A maximum of four tuples can be specified.
Note: A storage-group name is mandatory for VMAX3 arrays.
[-c|--capacity] capacity
* Specifies the capacity of the virtual-volumes in MB, GB, TB, and so on. The minimum size is 150 MB.
Optional arguments
[-j|--job] =job
The name of the job to create.
[-g|--consistency-group] consistency-group
Specifies the context path of the consistency-group to add the virtual-volumes to.
Note: During Integrated Array Services (VIAS) virtual volume provisioning, the virtual volume can be added to new or existing VPLEX consistency groups using the GUI. However, the CLI only supports adding virtual volumes to existing VPLEX consistency groups.
* - argument is positional.
Description
Provisions new virtual volumes using storage pools, and optionally specifies a job name and which VPLEX storage-group to use on the back-end managed array. The RAID geometry is determined automatically by the number and location of the specified pools.
Provisions new virtual-volumes using storage-pool tuples, where an optional fourth storage-group-name value may be provided in the tuple. This is the storage-group of the masking view on the array that exposes volumes to this cluster.
If no storage-group is specified, the system finds all masking views on the back-end managed arrays that expose volumes/LUNs to the specified clusters, and uses the masking view with the lowest LUN count.
Note: When creating a storage group using VIAS, always associate it with a pool. Failure to
associate a storage group to a pool could cause the storage group to not work properly.
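For example, a provisioning request might look like the following (the base-name, pool tuple, and capacity are illustrative):
VPlexcli:/> virtual-volume provision --base-name demo_vol --storage-pools cluster-1,array-1,pool-1 --number-of-volumes 2 --capacity 10GB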
See also
l storage-tool compose
l storage-volume unclaim
virtual-volume re-initialize
Restarts the initialization process on a virtual volume.
Contexts
All contexts.
Syntax
virtual-volume re-initialize
[-v | --virtual-volume] virtual-volume
[--verbose]
Arguments
Required arguments
[-v | --virtual-volume] virtual- * The virtual-volume that you want to reinitialize.
volume
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more output during command execution. This
may not have any effect for some commands.
* - argument is positional.
Description
This command restarts a failed initialization process on a virtual-volume. The command runs only if
the initialization-status field of the virtual volume shows failed.
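For example, to restart a failed initialization (the volume name is illustrative):
VPlexcli:/> virtual-volume re-initialize --virtual-volume test_r1_vol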
See Also
l virtual-volume create
l virtual-volume destroy
l virtual-volume expand
l virtual-volume provision
virtual-volume set-thin-enabled
Sets the thin-enabled property to either true or false for the given virtual volumes.
Contexts
All contexts.
Syntax
virtual-volume set-thin-enabled
-v | --virtual-volumes context path [, context path...]
-t | --thin-enabled arg
[-h|--help]
[--verbose]
Arguments
Required arguments
-t | --thin-enabled arg Specifies the desired value of the thin-enabled
property.
-v | --virtual-volumes context * Specifies the virtual volumes for which the thin-
path [, context path...] enabled property must be set.
Optional arguments
[-h|--help] Displays command line help.
[--verbose] Provides more help during command execution. This
may not have any effect for some commands.
* - argument is positional.
Description
This command sets the thin-enabled property to either true or false for the given virtual volumes.
Virtual volumes can be specified as a parameter, using globbing or wildcards.
The virtual-volume set-thin-enabled command does not fail even if virtual volumes are
not thin-capable. Virtual volumes that are not thin-capable are skipped. For brevity of the user
messages, the regular output of this command only includes:
l the number of volumes that are set as thin-enabled (or not set)
l the number of volumes that are skipped
If you want detailed output showing exactly which volumes are set as thin-enabled or skipped, use
the --verbose option. However, the output can be very long.
Example
Displays all the virtual volumes that are set as thin-enabled, or are skipped.
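A typical invocation might be the following (the volume pattern is illustrative):
VPlexcli:/> virtual-volume set-thin-enabled --virtual-volumes /clusters/cluster-1/virtual-volumes/* --thin-enabled true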
Virtual-volumes that were skipped because they are either already thin-
enabled or not thin-capable:
thick_vol_1, thick_vol_2
VPlexcli:/>
See also
storage-volume list-thin-capable
virtual-volume summary
Displays a summary for all virtual volumes.
Contexts
All contexts.
Syntax
virtual-volume summary
[-c|--clusters] cluster,cluster
Arguments
Optional arguments
[-c|--clusters] cluster List of one or more names of clusters. Display information for only
the specified clusters. Entries must be separated by commas.
Description
Displays a list of any devices with a health-state or operational-status other than ok.
Displays a summary including devices per locality (distributed versus local), cache-mode, and total
capacity for the cluster.
Displays any volumes with an expandable capacity greater than 0, and whether an expansion is in
progress.
If the --clusters argument is not specified and the command is executed at or below a /
clusters/cluster context, information is displayed for the current cluster.
Otherwise, virtual volumes of all clusters are summarized.
Field Description
Summaries
Total Total number of virtual volumes on the cluster, and number of unhealthy virtual volumes.
Examples
In the following example, all devices on cluster-1 are healthy:
In the following example, one distributed virtual volume has expandable capacity at both clusters:
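A summary for specific clusters can be requested as follows (the cluster names are illustrative):
VPlexcli:/> virtual-volume summary --clusters cluster-1,cluster-2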
See also
l ds summary
l export port summary
l export storage-view summary
l extent summary
l local-device summary
vpn restart
Restarts the VPN connection between management servers.
Contexts
All contexts.
Syntax
vpn restart
Description
Restarts the VPN.
Example
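Restart the VPN from any context:
VPlexcli:/> vpn restart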
See also
l vpn status
l vpn start
l vpn stop
vpn start
Starts the VPN connection between the management servers.
Contexts
All contexts.
Syntax
vpn start
Description
Starts the VPN connection between the management servers. In a VPLEX Metro system, run this
command on both the clusters.
Example
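Start the VPN; in a VPLEX Metro system, repeat the command on the other cluster:
VPlexcli:/> vpn start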
See also
l vpn restart
l vpn status
l vpn stop
vpn status
Verifies the VPN connection between management servers.
Contexts
All contexts.
Syntax
vpn status
Description
Verifies whether the VPN connection between the management servers is operating correctly and
checks whether all the local and remote directors can be reached (pinged).
If Cluster Witness is deployed, verifies the VPN connection between the management servers and
the Cluster Witness server.
Examples
Display VPN status (Cluster Witness deployed):
IPSEC is UP
Cluster Witness Server at IP Address 128.221.254.3 is reachable
See also
l vpn restart
l vpn start
l vpn stop
vpn stop
Stops the VPN connection between management servers.
Contexts
All contexts.
Syntax
vpn stop
Description
Stops the VPN connection between management servers.
Examples
VPlexcli:/> vpn stop
See also
l vpn restart
l vpn status
l vpn start
wait
Causes a wait until specified context-tree conditions are met.
Contexts
All contexts.
Syntax
Arguments
Required arguments
[-c | --context-list] [, context-list ...] Context list, separated by commas
Optional arguments
Description
If a context-list is provided without an attribute, the command will wait until the contexts in
the list exist. If wildcard patterns are used, the command will wait until at least one context can be
resolved for every pattern.
If an attribute and value pair are given, the command will wait until the attribute of every context
resolved from context-list has the given value.
The attribute values are compared as strings.
Use the --timeout option to set the timeout in seconds. The default timeout is 20 seconds.
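For example, to wait up to 60 seconds for a context to exist (the context path is illustrative):
VPlexcli:/> wait --context-list /clusters/cluster-2 --timeout 60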
webserver
Start, stop, or restart the Webserver.
Contexts
All contexts.
Syntax
Arguments
Optional arguments
[-h | --help] Displays the usage for this command
[--verbose] Provides additional output during command execution. This may not have
any effect for some commands.
Description
This command starts, stops, or restarts the Webserver.
Note: The restart option has proven unreliable in some environments due to external environmental factors. To restart the Webserver reliably, issue a stop and then a start, and then verify that the Webserver is running.
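Following the recommendation above, a restart is performed by stopping and then starting the Webserver (subcommand form assumed from the description):
VPlexcli:/> webserver stop
VPlexcli:/> webserver start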
T
thin volumes 218
tree 512
U
unalias 513
user:add 514
user:event-server add-user 515
user:event-server change-password 516
user:list 517
user:passwd 518
user:remove 519
user:reset 520