Symantec Storage Foundation 6.1
Administrator's Guide - Linux
May 2014
Legal Notice
Copyright 2014 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, the Checkmark Logo, Veritas, Veritas Storage Foundation,
CommandCentral, NetBackup, Enterprise Vault, and LiveUpdate are trademarks or registered
trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Symantec
Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL
NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION
WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE
INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE
WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations, whether delivered by Symantec as on premises
or hosted services. Any use, modification, reproduction, release, performance, display or
disclosure of the Licensed Software and Documentation by the U.S. Government shall be
solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical Support's
primary role is to respond to specific queries about product features and functionality.
The Technical Support group also creates content for our online Knowledge Base.
The Technical Support group works collaboratively with the other functional areas
within Symantec to answer your questions in a timely fashion. For example, the
Technical Support group works with Product Engineering and Symantec Security
Response to provide alerting services and virus definition updates.
Symantec's support offerings include the following:
A range of support options that give you the flexibility to select the right amount
of service for any size organization
For information about Symantec's support offerings, you can visit our website at
the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
When you contact Technical Support, please have the following information available:
Hardware information
Operating system
Network topology
Problem description
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as questions
about product licensing or serialization, orders, and registration.
Support agreement resources
If you want to contact Symantec regarding an existing support agreement, contact
the support agreement administration team for your region:
Asia-Pacific and Japan: customercare_apac@symantec.com
Europe, Middle-East, and Africa: semea@symantec.com
North America and Latin America: supportsolutions@symantec.com
Documentation
Product guides are available on the media in PDF format. Make sure that you are
using the current version of the documentation. The document version appears on
page 2 of each guide. The latest product documentation is available on the Symantec
website.
https://sort.symantec.com/documents
Your feedback on product documentation is important to us. Send suggestions for
improvements and reports on errors or omissions. Include the title and document
version (located on the second page), and chapter and section titles of the text on
which you are reporting. Send feedback to:
doc_feedback@symantec.com
For information regarding the latest HOWTO articles, documentation updates, or
to ask a question regarding product documentation, visit the Storage and Clustering
Documentation forum on Symantec Connect.
https://www-secure.symantec.com/connect/storage-management/forums/storage-and-clustering-documentation
Section 1
Introducing Storage
Foundation
Chapter 1
Overview of Storage
Foundation
This chapter includes the following topics:
Table 1-1

Component                          Description

Dynamic Multi-Pathing (DMP)

Veritas Volume Manager (VxVM)      Provides a logical layer between your operating system
                                   devices and your applications. VxVM enables you to create
                                   logical devices called volumes on the physical disks and
                                   LUNs. The applications such as file systems or databases
                                   access the volumes as if the volumes were physical devices
                                   but without the physical limitations.
                                   VxVM features enable you to configure, share, manage, and
                                   optimize storage I/O performance online without interrupting
                                   data availability. Additional VxVM features enhance fault
                                   tolerance and fast recovery from disk failure or storage
                                   array failure.

Veritas File System (VxFS)

Symantec Replicator (VR)

I/O fencing
Additionally, VxVM provides features that enhance fault tolerance and fast recovery
from disk failure or storage array failure.
VxVM overcomes restrictions imposed by hardware disk devices and by LUNs by
providing a logical volume management layer. This allows volumes to span multiple
disks and LUNs.
VxVM provides the tools to improve performance and ensure data availability and
integrity. You can also use VxVM to dynamically configure storage while the system
is active.
VxFS records pending changes to the file system structure in a circular intent log.
The intent log recovery feature is not readily apparent to
users or a system administrator except during a system failure. By default, VxFS
file systems log file transactions before they are committed to disk, reducing time
spent recovering file systems after the system is halted unexpectedly.
During system failure recovery, the VxFS fsck utility performs an intent log replay,
which scans the intent log and nullifies or completes file system operations that
were active when the system failed. The file system can then be mounted without
requiring a full structural check of the entire file system. Replaying the intent log
might not completely recover the damaged file system structure if there was a disk
hardware failure; hardware problems might require a complete system check using
the fsck utility provided with VxFS.
The mount command automatically runs the VxFS fsck command to perform an
intent log replay if the mount command detects a dirty log in the file system. This
functionality is only supported on a file system mounted on a Veritas Volume
Manager (VxVM) volume, and is supported on cluster file systems.
See the fsck_vxfs(1M) manual page and mount_vxfs(1M) manual page.
The VxFS intent log is allocated when the file system is first created. The size of
the intent log is based on the size of the file system: the larger the file system, the
larger the intent log. You can resize the intent log at a later time by using the fsadm
command.
See the fsadm_vxfs(1M) manual page.
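For example, assuming a VxFS file system mounted at /mnt1 (a placeholder mount point)
and that the VxFS version of fsadm is first in the command path, a command of the
following general form resizes the intent log; the logsize value is specified in file
system blocks, and the supported options are described in the fsadm_vxfs(1M) manual page:

# fsadm -o logsize=262144 /mnt1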
The maximum default intent log size for disk layout Version 7 or later is 256
megabytes.
Note: Inappropriate sizing of the intent log can have a negative impact on system
performance.
See Intent log size on page 155.
About extents
An extent is a contiguous area of storage in a computer file system, reserved for a
file. When starting to write to a file, a whole extent is allocated. When writing to the
file again, the data continues where the previous write left off. This reduces or
eliminates file fragmentation. An extent is presented as an address-length pair,
which identifies the starting block address and the length of the extent (in file system
or logical blocks). Since Veritas File System (VxFS) is an extent-based file system,
addressing is done through extents (which can consist of multiple blocks) rather
than in single-block segments. Extents can therefore enhance file system throughput.
Extents allow disk I/O to take place in units of multiple blocks if storage is allocated
in contiguous blocks. For sequential I/O, multiple block operations are considerably
faster than block-at-a-time operations; almost all disk drives accept I/O operations
on multiple blocks.
Extent allocation only slightly alters the interpretation of addressed blocks from the
inode structure compared to block-based inodes. A VxFS inode references 10 direct
extents, each of which is a pair consisting of a starting block address and a length in blocks.
Disk space is allocated in 512-byte sectors to form logical blocks. VxFS supports
logical block sizes of 1024, 2048, 4096, and 8192 bytes. The default block size is
1 KB for file system sizes of up to 1 TB, and 8 KB for file system sizes 1 TB or
larger.
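For example, assuming a VxVM volume with the placeholder device path
/dev/vx/rdsk/mydg/vol01, a command of the following general form creates a VxFS
file system with an 8 KB block size; see the mkfs_vxfs(1M) manual page for the
supported options:

# mkfs -t vxfs -o bsize=8192 /dev/vx/rdsk/mydg/vol01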
What is VFR?
Veritas File Replicator (VFR) enables cost-effective periodic replication of data over
IP networks, giving organizations an extremely flexible storage independent data
availability solution for disaster recovery and off-host processing. With flexibility of
scheduling the replication intervals to match the business requirements, Veritas
File Replicator tracks all updates to the file system and replicates these updates at
the end of the configured time interval. VFR leverages data deduplication provided
by Veritas File System (VxFS) to reduce the impact that replication can have on
scarce network resources. VFR is included, by default, with Symantec Virtual Store
6.0 on Linux and is available as an option with Symantec Storage Foundation and
associated products on Linux.
Features of VFR
Veritas File Replicator (VFR) includes the following features:
Supports reversible data transfer. The target of replication may become the
source at runtime, with the former source system becoming a target.
Supports automatic recovery from the last good successfully replicated point in
time image.
See the Symantec Storage Foundation and High Availability Solutions Replication
Administrator's Guide for more information.
Optimizing thin array usage: you can use Storage Foundation thin provisioning
and thin reclamation solutions to set up and maintain thin storage.
Backing up and recovering data: you can use Storage Foundation FlashSnap,
Storage Checkpoints, and NetBackup point-in-time copy methods to back up
and recover your data.
Processing data off-host: you can avoid performance loss to your production
hosts by using Storage Foundation volume snapshots.
Optimizing test and development environments: you can optimize copies of your
production database for test, decision modeling, and development purposes
using Storage Foundation point-in-time copy methods.
Maximizing storage utilization: you can use Storage Foundation Flexible Storage
Sharing for data redundancy, high availability, and disaster recovery, without
physically shared storage.
Migrating your data: you can use Storage Foundation Portable Data Containers
to easily and reliably migrate data from one environment to another.
For a supplemental guide that documents Storage Foundation use case solutions
using example scenarios, see the Symantec Storage Foundation and High
Availability Solutions Solutions Guide.
Chapter 2
Table 2-1

Array type                                    Description

Active/Active (A/A)

Asymmetric Logical Unit Access (ALUA)         DMP supports all variants of ALUA.

Active/Passive (A/P)

Active/Passive in explicit failover mode      The appropriate command must be issued to the
or non-autotrespass mode (A/PF)               array to make the LUNs fail over to the secondary
                                              path.
                                              This array mode supports concurrent I/O and load
                                              balancing by having multiple primary paths into a
                                              controller. This functionality is provided by a
                                              controller with multiple ports, or by the insertion of
                                              a SAN switch between an array and a controller.
                                              Failover to the secondary (passive) path occurs
                                              only if all the active primary paths fail.

Active/Passive with LUN group failover        For Active/Passive arrays with LUN group failover
(A/PG)                                        (A/PG arrays), a group of LUNs that are connected
                                              through a controller is treated as a single failover
                                              entity. Unlike A/P arrays, failover occurs at the
                                              controller level, and not for individual LUNs. The
                                              primary controller and the secondary controller are
                                              each connected to a separate group of LUNs. If a
                                              single LUN in the primary controller's LUN group
                                              fails, all LUNs in that group fail over to the
                                              secondary controller.
                                              This array mode supports concurrent I/O and load
                                              balancing by having multiple primary paths into a
                                              controller. This functionality is provided by a
                                              controller with multiple ports, or by the insertion of
                                              a SAN switch between an array and a controller.
                                              Failover to the secondary (passive) path occurs
                                              only if all the active primary paths fail.
An array policy module (APM) may define array types to DMP in addition to the
standard types for the arrays that it supports.
Symantec Storage Foundation uses DMP metanodes (DMP nodes) to access disk
devices connected to the system. For each disk in a supported array, DMP maps
one node to the set of paths that are connected to the disk. Additionally, DMP
associates the appropriate multi-pathing policy for the disk array with the node.
For disks in an unsupported array, DMP maps a separate node to each path that
is connected to a disk. The raw and block devices for the nodes are created in the
directories /dev/vx/rdmp and /dev/vx/dmp respectively.
Figure 2-1 shows how DMP sets up a node for a disk in a supported disk array.
Figure 2-1        How DMP maps the multiple physical paths to a disk (through host controllers c1 and c2) to a single DMP node
DMP implements a disk device naming scheme that allows you to recognize to
which array a disk belongs.
Figure 2-2 shows an example where two paths, sdf and sdm, exist to a single disk
in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.
Figure 2-2        Example of multi-pathing for a disk enclosure in a SAN environment
See About enclosure-based naming on page 38.
See Changing the disk device naming scheme on page 263.
See Discovering and configuring newly added disk devices on page 185.
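For example, assuming the DMP node name enc0_0 shown in Figure 2-2, the following
command lists the operating system paths (such as sdf and sdm) that DMP has mapped
to that node:

# vxdmpadm getsubpaths dmpnodename=enc0_0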
Device discovery
Device discovery is the term used to describe the process of discovering the disks
that are attached to a host. This feature is important for DMP because it needs to
support a growing number of disk arrays from a number of vendors. In conjunction
with the ability to discover the devices attached to a host, the Device Discovery
service enables you to add support for new disk arrays. The Device Discovery uses
a facility called the Device Discovery Layer (DDL).
The DDL enables you to add support for new disk arrays without the need for a
reboot.
See How to administer the Device Discovery Layer on page 191.
Figure 2-3        Example configuration in which a host controller is connected through a Fibre Channel switch to disk enclosures enc0, enc1, and enc2
Figure 2-4 shows a High Availability (HA) configuration where redundant-loop access
to storage is implemented by connecting independent controllers on the host to
separate switches with independent paths to the enclosures.
Figure 2-4        Example HA configuration using multiple switches to provide redundant loop access
Such a configuration protects against the failure of one of the host controllers (c1
and c2), or of the cable between the host and one of the switches. In this example,
each disk is known by the same name to VxVM for all of the paths over which it
can be accessed. For example, the disk device enc0_0 represents a single disk for
which two different paths are known to the operating system, such as sdf and sdm.
See Changing the disk device naming scheme on page 263.
To take account of fault domains when configuring data redundancy, you can control
how mirrored volumes are laid out across enclosures.
information about the threads. The name restored has been retained for backward
compatibility.
One kernel thread responds to I/O failures on a path by initiating a probe of the host
bus adapter (HBA) that corresponds to the path. Another thread then takes the
appropriate action according to the response from the HBA. The action taken can
be to retry the I/O request on the path, or to fail the path and reschedule the I/O on
an alternate path.
The restore kernel task is woken periodically (by default, every 5 minutes) to check
the health of the paths, and to resume I/O on paths that have been restored. As
some paths may suffer from intermittent failure, I/O is only resumed on a path if the
path has remained healthy for a given period of time (by default, 5 minutes). DMP
can be configured with different policies for checking the paths.
See Configuring DMP path restoration policies on page 247.
The statistics-gathering task records the start and end time of each I/O request,
and the number of I/O failures and retries on each path. DMP can be configured to
use this information to prevent the SCSI driver being flooded by I/O requests. This
feature is known as I/O throttling.
If an I/O request relates to a mirrored volume, VxVM specifies the FAILFAST flag.
In such cases, DMP does not retry failed I/O requests on the path, and instead
marks the disks on that path as having failed.
See Path failover mechanism on page 41.
See I/O throttling on page 42.
I/O throttling
If I/O throttling is enabled, and the number of outstanding I/O requests builds up
on a path that has become less responsive, DMP can be configured to prevent new
I/O requests being sent on the path either when the number of outstanding I/O
requests has reached a given value, or a given time has elapsed since the last
successful I/O request on the path. While throttling is applied to a path, the new I/O
requests on that path are scheduled on other available paths. The throttling is
removed from the path if the HBA reports no error on the path, or if an outstanding
I/O request on the path succeeds.
See Configuring the I/O throttling mechanism on page 244.
Load balancing
By default, Symantec Dynamic Multi-Pathing (DMP) uses the Minimum Queue I/O
policy for load balancing across paths for all array types. Load balancing maximizes
I/O throughput by using the total bandwidth of all available paths. I/O is sent down
the path that has the minimum outstanding I/Os.
For Active/Passive (A/P) disk arrays, I/O is sent down the primary paths. If all of
the primary paths fail, I/O is switched over to the available secondary paths. As the
continuous transfer of ownership of LUNs from one controller to another results in
severe I/O slowdown, load balancing across primary and secondary paths is not
performed for A/P disk arrays unless they support concurrent I/O.
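For example, assuming an enclosure named enc0 (a placeholder name), the following
commands display the current I/O policy for the paths to that enclosure and change
it to round-robin:

# vxdmpadm getattr enclosure enc0 iopolicy
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin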
To remove a FORMER ASM disk from ASM control for use with VxVM
Clean the disk with the dd command to remove all ASM identification information
on it. For example:
dd if=/dev/zero of=/dev/rdsk/<wholedisk|partition> count=1 bs=1024
You can use either of the following commands to display ASM disks:
The vxdisk list command displays the disk type as ASM.
# vxdisk list
DEVICE          TYPE          DISK      GROUP     STATUS
Disk_0s2        auto:LVM      -         -         LVM
Disk_1          auto:ASM      -         -         ASM
EVA4K6K0_0      auto          -         -         online
EVA4K6K0_1      auto          -         -         online
The vxdisk classify command classifies and displays ASM disks as Oracle
ASM.
# vxdisk -d classify disk=c1t0d5
device:        c1t0d5
status:        CLASSIFIED
type:          Oracle ASM
groupid:       -
hostname:
domainid:
centralhost:
Specify the -f option to the vxdisk classify command to perform a full scan
of the OS devices.
Use the vxisasm utility to check if a particular disk is under ASM control.
# /etc/vx/bin/vxisasm 3pardata0_2799
3pardata0_2799    ACTIVE

# /etc/vx/bin/vxisasm 3pardata0_2798
3pardata0_2798    FORMER
Alternatively, use the vxisforeign utility to check if the disk is under control
of any foreign software like LVM or ASM:
# /etc/vx/bin/vxisforeign 3pardata0_2799
3pardata0_2799    ASM    ACTIVE

# /etc/vx/bin/vxisforeign 3pardata0_2798
3pardata0_2798    ASM    FORMER
Chapter 3
This chapter includes the following topics:
Online relayout
Volume resynchronization
Hot-relocation
Volume snapshots
FastResync
Volume sets
Before a disk or LUN can be brought under VxVM control, it must be accessible
through the operating system device interface. VxVM is layered on top of the
operating system interface services, and is dependent upon how the operating
system accesses physical disks.
VxVM is dependent upon the operating system for the following functionality:
device handles
VxVM relies on the following constantly-running daemons and kernel threads for
its operation:
vxconfigd
vxiod
vxrelocd
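For example, the following commands report the state of the vxconfigd daemon and
the number of vxiod kernel threads that are currently running; see the vxdctl(1M)
and vxiod(1M) manual pages:

# vxdctl mode
# vxiod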
Virtual objects
When one or more physical disks are brought under the control of
VxVM, it creates virtual objects called volumes on those physical
disks. Each volume records and retrieves data from one or more
physical disks. Volumes are accessed by file systems, databases,
or other applications in the same way that physical disks are
accessed. Volumes are also composed of other virtual objects
(plexes and subdisks) that are used in changing the volume
configuration. Volumes and their virtual components are called
virtual objects or VxVM objects.
See Virtual objects on page 51.
Physical objects
A physical disk is the basic storage device (media) where the data is ultimately
stored. You can access the data on a physical disk by using a device name to locate
the disk. The physical disk device name varies with the computer system you use.
Not all parameters are used on all systems.
Typical device names are of the form sda or hdb, where sda references the first (a)
SCSI disk, and hdb references the second (b) EIDE disk.
Figure 3-1 shows how a physical disk and device name (devname) are illustrated
in the Veritas Volume Manager (VxVM) documentation.
Figure 3-1        Physical disk example (device name devname)
VxVM writes identification information on physical disks under VxVM control (VM
disks). VxVM disks can be identified even after physical disk disconnection or
system outages. VxVM can then re-form disk groups and logical objects to provide
failure detection and to speed system recovery.
Figure 3-2        Partition example (device devname with partitions devname1 and devname2)
Disk arrays
Performing I/O to disks is a relatively slow process because disks are physical
devices that require time to move the heads to the correct position on the disk
before reading or writing. If all of the read or write operations are done to individual
disks, one at a time, the read-write time can become unmanageable. Performing
these operations on multiple disks can help to reduce this problem.
A disk array is a collection of physical disks that VxVM can represent to the operating
system as one or more virtual disks or volumes. The volumes created by VxVM
look and act to the operating system like physical disks. Applications that interact
with volumes should work in the same way as with physical disks.
Figure 3-3 shows how VxVM represents the disks in a disk array as several volumes
to the operating system.
Figure 3-3        How VxVM presents the disks in a disk array as volumes to the operating system
Data can be spread across several disks within an array, or across disks spanning
multiple arrays, to distribute or balance I/O operations across the disks. Using
parallel I/O across multiple disks in this way improves I/O performance by increasing
data transfer speed and overall throughput for the array.
Virtual objects
Veritas Volume Manager (VxVM) uses multiple virtualization layers to provide distinct
functionality and reduce physical limitations. The connection between physical
objects and VxVM objects is made when you place a physical disk under VxVM
control.
Table 3-1 describes the virtual objects in VxVM: disk groups, VxVM disks, subdisks,
plexes, and volumes.
After installing VxVM on a host system, you must bring the contents of physical
disks under VxVM control by collecting the VxVM disks into disk groups and
allocating the disk group space to create logical volumes.
Bringing the contents of physical disks under VxVM control is accomplished only
if VxVM takes control of the physical disks and the disk is not under control of
another storage manager such as LVM.
For more information on how LVM and VxVM disks co-exist or how to convert LVM
disks to VxVM disks, see the Symantec Storage Foundation and High Availability
Solutions Solutions Guide.
VxVM creates virtual objects and makes logical connections between the objects.
The virtual objects are then used by VxVM to do storage management tasks.
The vxprint command displays detailed information about the VxVM objects that
exist on a system.
See the vxprint(1M) manual page.
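For example, assuming a disk group named mydg (a placeholder name), the following
command displays a hierarchical listing of the volumes, plexes, subdisks, and disks
in that disk group:

# vxprint -g mydg -hrt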
Figure 3-4 shows the connections between VxVM virtual objects and how they
relate to physical disks.
Figure 3-4        Connections between VxVM virtual objects and physical disks (a disk group containing volumes vol01 and vol02, their plexes and subdisks, VM disks disk01 through disk03, and physical disks devname1 through devname3)
The disk group contains three VxVM disks which are used to create two volumes.
Volume vol01 is simple and has a single plex. Volume vol02 is a mirrored volume
with two plexes.
The various types of virtual objects (disk groups, VM disks, subdisks, plexes, and
volumes) are described in the following sections. Other types of objects exist in
Veritas Volume Manager, such as data change objects (DCOs), and volume sets,
to provide extended functionality.
The VxVM configuration daemon (vxconfigd) maintains disk and disk group
configurations, communicates configuration change requests to the VxVM kernel,
and modifies configuration information stored on disk. vxconfigd also initializes
VxVM when the system is booted.
The vxdctl command is the command-line interface to the vxconfigd daemon.
You can use vxdctl to control the operation of the vxconfigd daemon.
In VxVM 4.0 and later releases, disk access records are no longer stored in the
/etc/vx/volboot file. Non-persistent disk access records are created by scanning
the disks at system startup. Persistent disk access records for simple and nopriv
disks are permanently stored in the /etc/vx/darecs file in the root file system.
The vxconfigd daemon reads the contents of this file to locate the disks and the
configuration databases for their disk groups.
The /etc/vx/darecs file is also used to store definitions of foreign devices that
are not autoconfigurable. Such entries may be added by using the vxddladm
addforeign command.
See the vxddladm(1M) manual page.
If your system is configured to use Dynamic Multi-Pathing (DMP), you can also use
vxdctl to:
Reconfigure the DMP database to include disk devices newly attached to, or
removed from the system.
Update the DMP database with changes in path type for active/passive disk
arrays. Use the utilities provided by the disk-array vendor to change the path
type between primary and secondary.
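For example, after adding or removing disk devices, the following command makes
VxVM rescan the devices and rebuild the DMP database:

# vxdctl enable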
Non-layered volumes
In a non-layered volume, a subdisk maps directly to a VxVM disk. This allows the
subdisk to define a contiguous extent of storage space backed by the public region
of a VxVM disk. When active, the VxVM disk is directly associated with an underlying
physical disk. The combination of a volume layout and the physical disks therefore
determines the storage service available from a given virtual device.
Layered volumes
A layered volume is constructed by mapping its subdisks to underlying volumes.
The subdisks in the underlying volumes must map to VxVM disks, and hence to
attached physical storage.
Layered volumes allow for more combinations of logical compositions, some of
which may be desirable for configuring a virtual device. For example, layered
volumes allow for high availability when using striping. Because permitting free use
of layered volumes throughout the command level would have resulted in unwieldy
administration, some ready-made layered volume configurations are designed into
VxVM.
See About layered volumes on page 70.
These ready-made configurations operate with built-in rules to automatically match
desired levels of service within specified constraints. The automatic configuration
is done on a best-effort basis for the current command invocation working against
the current configuration.
To achieve the desired storage service from a set of virtual devices, it may be
necessary to include an appropriate set of VxVM disks into a disk group and to
execute multiple configuration commands.
To the extent that it can, VxVM handles initial configuration and on-line
re-configuration with its set of layouts and administration interface to make this job
easier and more deterministic.
Layout methods
Data in virtual objects is organized to create volumes by using the following layout
methods:
Striping (RAID-0)
See Striping (RAID-0) on page 59.
Mirroring (RAID-1)
See Mirroring (RAID-1) on page 62.
Figure 3-5        Example of concatenation (subdisks disk01-01 and disk01-03 on VM disk disk01)
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are
contiguous on the plex, but actually come from two distinct subdisks on the same
physical disk.
The remaining free space in the subdisk disk01-02 on VxVM disk disk01 can be
put to other uses.
You can use concatenation with multiple subdisks when there is insufficient
contiguous space for the plex on any one disk. This form of concatenation can be
used for load balancing between disks, and for head movement optimization on a
particular disk.
Figure 3-6 shows data spread over two subdisks in a spanned plex.
Figure 3-6        Example of spanning (subdisks disk01-01 and disk02-01 on VM disks disk01 and disk02)
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are
contiguous on the plex, but actually come from two distinct subdisks from two distinct
physical disks.
The remaining free space in the subdisk disk02-02 on VxVM disk disk02 can be
put to other uses.
Warning: Spanning a plex across multiple disks increases the chance that a disk
failure results in failure of the assigned volume. Use mirroring or RAID-5 to reduce
the risk that a single disk failure results in a volume failure.
Striping (RAID-0)
Striping (RAID-0) is useful if you need large amounts of data written to or read from
physical disks, and performance is important. Striping is also helpful in balancing
the I/O load from multi-user applications across multiple disks. By using parallel
data transfer to and from multiple disks, striping significantly improves data-access
performance.
Striping maps data so that the data is interleaved among two or more physical disks.
A striped plex contains two or more subdisks, spread out over two or more physical
disks. Data is allocated alternately and evenly to the subdisks of a striped plex.
The subdisks are grouped into columns, with each physical disk limited to one
column. Each column contains one or more subdisks and can be derived from one
or more physical disks. The number and sizes of subdisks per column can vary.
Additional subdisks can be added to columns, as necessary.
Warning: Striping a volume, or splitting a volume across multiple disks, increases
the chance that a disk failure will result in failure of that volume.
If five volumes are striped across the same five disks, then failure of any one of the
five disks will require that all five volumes be restored from a backup. If each volume
is on a separate disk, only one volume has to be restored. (As an alternative to or
in conjunction with striping, use mirroring or RAID-5 to substantially reduce the
chance that a single disk failure results in failure of a large number of volumes.)
Data is allocated in equal-sized stripe units that are interleaved between the columns.
Each stripe unit is a set of contiguous blocks on a disk. The default stripe unit size
is 64 kilobytes.
Figure 3-7 shows an example with three columns in a striped plex, six stripe units,
and data striped over the three columns.
Figure 3-7        Data striped over three columns in six stripe units
A stripe consists of the set of stripe units at the same positions across all columns.
In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Viewed in sequence, the first stripe consists of stripe units 1 through 3, one from
each column, and the second stripe consists of stripe units 4 through 6.
Striping continues for the length of the columns (if all columns are the same length),
or until the end of the shortest column is reached. Any space remaining at the end
of subdisks in longer columns becomes unused space.
Figure 3-8 shows a striped plex with three equal sized, single-subdisk columns.
Figure 3-8        Example of a striped plex with one subdisk per column
There is one column per physical disk. This example shows three subdisks that
occupy all of the space on the VM disks. It is also possible for each subdisk in a
striped plex to occupy only a portion of the VM disk, which leaves free space for
other disk management tasks.
Figure 3-9 shows a striped plex with three columns containing subdisks of different
sizes.
Figure 3-9        Example of a striped plex with concatenated subdisks per column
Each column contains a different number of subdisks. There is one column per
physical disk. Striped plexes can be created by using a single subdisk from each
of the VM disks being striped across. It is also possible to allocate space from
different regions of the same disk or from another disk (for example, if the size of
the plex is increased). Columns can also contain subdisks from different VM disks.
See Creating a striped volume on page 142.
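For example, a command of the following general form (the disk group mydg, volume
name stripevol, size, and column count are placeholders) creates a striped volume
with three columns and a 64 KB stripe unit size:

# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k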
Mirroring (RAID-1)
Mirroring uses multiple mirrors (plexes) to duplicate the information contained in a
volume. In the event of a physical disk failure, the plex on the failed disk becomes
unavailable, but the system continues to operate using the unaffected mirrors.
Similarly, mirroring two LUNs from two separate controllers lets the system operate
if there is a controller failure.
Although a volume can have a single plex, at least two plexes are required to provide
redundancy of data. Each of these plexes must contain disk space from different
disks to achieve redundancy.
When striping or spanning across a large number of disks, failure of any one of
those disks can make the entire plex unusable. Because the likelihood of one out
of several disks failing is reasonably high, you should consider mirroring to improve
the reliability (and availability) of a striped or spanned volume.
See Creating a mirrored volume on page 140.
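For example, a command of the following general form (mydg and mirvol are
placeholder names) creates a volume with two mirrors on separate disks:

# vxassist -g mydg make mirvol 10g layout=mirror nmirror=2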
Data being written to a mirrored volume is reflected in all copies. If a portion of a mirrored volume fails, the system continues
to use the other copies of the data.
RAID-5 provides data redundancy by using parity. Parity is a calculated value used
to reconstruct data after a failure. While data is being written to a RAID-5 volume,
parity is calculated by doing an exclusive OR (XOR) procedure on the data. The
resulting parity is then written to the volume. The data and calculated parity are
contained in a plex that is striped across multiple disks. If a portion of a RAID-5
volume fails, the data that was on that portion of the failed volume can be recreated
from the remaining data and parity information. It is also possible to mix
concatenation and striping in the layout.
Figure 3-13 shows parity locations in a RAID-5 array configuration.
Figure 3-13        Parity locations in a RAID-5 array configuration
Every stripe has a column containing a parity stripe unit and columns containing
data. The parity is spread over all of the disks in the array, reducing the write time
for large independent writes because the writes do not have to wait until a single
parity disk can accept the data.
RAID-5 volumes can additionally perform logging to minimize recovery time. RAID-5
volumes use RAID-5 logs to keep a copy of the data and parity currently being
written. RAID-5 logging is optional and can be created along with RAID-5 volumes
or added later.
See Veritas Volume Manager RAID-5 arrays on page 67.
Note: Veritas Volume Manager (VxVM) supports RAID-5 for private disk groups,
but not for shareable disk groups in a Cluster Volume Manager (CVM) environment.
In addition, VxVM does not support the mirroring of RAID-5 volumes that are
configured using VxVM software. RAID-5 LUNs that are configured in hardware may be mirrored.
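For example, commands of the following general form (mydg and r5vol are placeholder
names) create a four-column RAID-5 volume and add a further RAID-5 log to it:

# vxassist -g mydg make r5vol 20g layout=raid5 ncol=4
# vxassist -g mydg addlog r5vol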
Figure 3-14 shows the row and column arrangement of a traditional RAID-5 array.
Figure 3-14        Traditional RAID-5 array (stripes applied across rows and columns of disks)
This traditional array structure supports growth by adding more rows per column.
Striping is accomplished by applying the first stripe across the disks in Row 0, then
the second stripe across the disks in Row 1, then the third stripe across the Row
0 disks, and so on. This type of array requires all disks, columns, and rows to be of
equal size.
Figure 3-15        Veritas Volume Manager RAID-5 array (stripes across columns of subdisks)
Left-symmetric layout
There are several layouts for data and parity that can be used in the setup of a
RAID-5 array. The implementation of RAID-5 in VxVM uses a left-symmetric layout.
This provides optimal performance for both random I/O operations and large
sequential I/O operations. However, the layout selection is not as critical for
performance as are the number of columns and the stripe unit size.
Left-symmetric layout stripes both data and parity across columns, placing the parity
in a different column for every stripe of data. The first parity stripe unit is located in
the rightmost column of the first stripe. Each successive parity stripe unit is located
in the next stripe, shifted left one column from the previous parity stripe unit location.
If there are more stripes than columns, the parity stripe unit placement begins in
the rightmost column again.
Figure 3-16 shows a left-symmetric parity layout with five disks (one per column).
Figure 3-16        Left-symmetric layout
For each stripe, data is organized starting to the right of the parity stripe unit. In the
figure, data organization for the first stripe begins at P0 and continues to stripe units
0-3. Data organization for the second stripe begins at P1, then continues to stripe
unit 4, and on to stripe units 5-7. Data organization proceeds in this manner for the
remaining stripes.
Each parity stripe unit contains the result of an exclusive OR (XOR) operation
performed on the data in the data stripe units within the same stripe. If one column's
data is inaccessible due to hardware or software failure, the data for each stripe
can be restored by XORing the contents of the remaining columns' data stripe units
against their respective parity stripe units.
For example, if a disk corresponding to the whole or part of the far left column fails,
the volume is placed in a degraded mode. While in degraded mode, the data from
the failed column can be recreated by XORing stripe units 1-3 against parity stripe
unit P0 to recreate stripe unit 0, then XORing stripe units 4, 6, and 7 against parity
stripe unit P1 to recreate stripe unit 5, and so on.
Failure of more than one column in a RAID-5 plex detaches the volume. The volume
is no longer allowed to satisfy read or write requests. Once the failed columns have
been recovered, it may be necessary to recover user data from backups.
RAID-5 logging
Logging is used to prevent corruption of data during recovery by immediately
recording changes to data and parity to a log area on a persistent device such as
a volume on disk or in non-volatile RAM. The new data and parity are then written
to the disks.
Without logging, it is possible for data not involved in any active writes to be lost or
silently corrupted if both a disk in a RAID-5 volume and the system fail. If this
double-failure occurs, there is no way of knowing if the data being written to the
data portions of the disks or the parity being written to the parity portions have
actually been written. Therefore, the recovery of the corrupted disk may be corrupted
itself.
Figure 3-17 shows a RAID-5 volume configured across three disks (A, B, and C).
Figure 3-17        Incomplete write to a RAID-5 volume (completed data write to disk A, corrupted data on disk B, incomplete parity write to disk C)
In this volume, recovery of disk B's corrupted data depends on disk A's data and
disk C's parity both being complete. However, only the data write to disk A is
complete. The parity write to disk C is incomplete, which would cause the data on
disk B to be reconstructed incorrectly.
This failure can be avoided by logging all data and parity writes before committing
them to the array. In this way, the log can be replayed, causing the data and parity
updates to be completed before the reconstruction of the failed drive takes place.
Logs are associated with a RAID-5 volume by being attached as log plexes. More
than one log plex can exist for each RAID-5 volume, in which case the log areas
are mirrored.
Figure 3-18 shows a typical striped-mirror layered volume where each column is
represented by a subdisk that is built from an underlying mirrored volume.
Figure 3-18        Example of a striped-mirror layered volume
The volume and striped plex in the Managed by user area allow you to perform
normal tasks in VxVM. User tasks can be performed only on the top-level volume
of a layered volume.
Underlying volumes in the Managed by VxVM area are used exclusively by VxVM
and are not designed for user manipulation. You cannot detach a layered volume
or perform any other operation on the underlying volumes by manipulating the
internal structure. You can perform all necessary operations in the Managed by
user area that includes the top-level volume and striped plex (for example, resizing
the volume, changing the column width, or adding a column).
System administrators can manipulate the layered volume structure for
troubleshooting or other operations (for example, to place data on specific disks).
Layered volumes are used by VxVM to perform the following tasks and operations:
Creating striped-mirrors
Creating concatenated-mirrors
Online Relayout
Creating Snapshots
Online relayout
Online relayout allows you to convert between storage layouts in VxVM, with
uninterrupted data access. Typically, you would do this to change the redundancy
or performance characteristics of a volume. VxVM adds redundancy to storage
either by duplicating the data (mirroring) or by adding parity (RAID-5). Performance
characteristics of storage in VxVM can be changed by changing the striping
parameters, which are the number of columns and the stripe width.
See Performing online relayout on page 606.
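For example, a command of the following general form (mydg and vol01 are placeholder
names) relays out a volume as a three-column striped volume while the volume
remains online:

# vxassist -g mydg relayout vol01 layout=stripe ncol=3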
The layout transformation process converts a given volume to the destination layout
by using minimal temporary space that is
available in the disk group.
The transformation is done by moving one portion of data at a time in the source
layout to the destination layout. Data is copied from the source volume to the
temporary area, and data is removed from the source volume storage area in
portions. The source volume storage area is then transformed to the new layout,
and the data saved in the temporary area is written back to the new layout. This
operation is repeated until all the storage and data in the source volume has been
transformed to the new layout.
The default size of the temporary area used during the relayout depends on the
size of the volume and the type of relayout. For volumes larger than 50MB, the
amount of temporary space that is required is usually 10% of the size of the volume,
from a minimum of 50MB up to a maximum of 1GB. For volumes smaller than 50MB,
the temporary space required is the same as the size of the volume.
The following error message displays the number of blocks required if there is
insufficient free space available in the disk group for the temporary area:
tmpsize too small to perform this relayout (nblks minimum required)
You can override the default size used for the temporary area by using the tmpsize
attribute to vxassist.
See the vxassist(1M) manual page.
As well as the temporary area, space is required for a temporary intermediate
volume when increasing the column length of a striped volume. The amount of
space required is the difference between the column lengths of the target and source
volumes. For example, 20GB of temporary additional space is required to relayout
a 150GB striped volume with 5 columns of length 30GB as 3 columns of length
50GB. In some cases, the amount of temporary space that is required is relatively
large. For example, a relayout of a 150GB striped volume with 5 columns as a
concatenated volume (with effectively one column) requires 120GB of space for
the intermediate volume.
Additional permanent disk space may be required for the destination volumes,
depending on the type of relayout that you are performing. This may happen, for
example, if you change the number of columns in a striped volume.
Figure 3-19 shows how decreasing the number of columns can require disks to be
added to a volume.
Figure 3-19        Example of decreasing the number of columns in a volume
Note that the size of the volume remains the same but an extra disk is needed to
extend one of the columns.
The following are examples of operations that you can perform using online relayout:
Figure 3-20        Example of relayout of a RAID-5 volume to a striped volume
Note that removing parity decreases the overall storage space that the volume
requires.
Figure 3-21        Example of relayout of a concatenated volume to a RAID-5 volume
Note that adding parity increases the overall storage space that the volume requires.
Figure 3-22        Example of increasing the number of columns in a volume (from two columns to three columns)
Note that the length of the columns is reduced to conserve the size of the volume.
The usual restrictions apply for the minimum number of physical disks that are
required to create the destination layout. For example, mirrored volumes require
at least as many disks as mirrors, striped and RAID-5 volumes require at least
as many disks as columns, and striped-mirror volumes require at least as many
disks as columns multiplied by mirrors.
Online relayout cannot transform sparse plexes, nor can it make any plex sparse.
(A sparse plex is a plex that is not the same size as the volume, or that has
regions that are not mapped to any subdisk.)
Transformation characteristics
Transformation of data from one layout to another involves rearrangement of data
in the existing layout to the new layout. During the transformation, online relayout
retains data redundancy by mirroring any temporary space used. Read and write
access to data is not interrupted during the transformation.
Data is not corrupted if the system fails during a transformation. The transformation
continues after the system is restored and both read and write access are
maintained.
You can reverse the layout transformation process at any time, but the data may
not be returned to the exact previous storage location. Before you reverse a
transformation that is in process, you must stop it.
You can determine the transformation direction by using the vxrelayout status
volume command.
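For example, assuming a disk group named mydg and a volume named vol01 (placeholder
names), the following commands report the status and direction of a relayout, and
reverse a relayout that has been stopped:

# vxrelayout -g mydg status vol01
# vxrelayout -g mydg reverse vol01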
These transformations are protected against I/O failures if there is sufficient
redundancy and space to move the data.
Volume resynchronization
When storing data redundantly and using mirrored or RAID-5 volumes, VxVM
ensures that all copies of the data match exactly. However, under certain conditions
(usually due to complete system failures), some redundant data on a volume can
become inconsistent or unsynchronized. The mirrored data is not exactly the same
as the original data. Except for normal configuration changes (such as detaching
and reattaching a plex), this can only occur when a system crashes while data is
being written to a volume.
Data is written to the mirrors of a volume in parallel, as is the data and parity in a
RAID-5 volume. If a system crash occurs before all the individual writes complete,
it is possible for some writes to complete while others do not. This can result in the
data becoming unsynchronized. For mirrored volumes, it can cause two reads from
the same region of the volume to return different results, if different mirrors are used
to satisfy the read request. In the case of RAID-5 volumes, it can lead to parity
corruption and incorrect data reconstruction.
VxVM ensures that all mirrors contain exactly the same data and that the data and
parity in RAID-5 volumes agree. This process is called volume resynchronization.
For volumes that are part of the disk group that is automatically imported at boot
time (usually aliased as the reserved system-wide disk group, bootdg),
resynchronization takes place when the system reboots.
Not all volumes require resynchronization after a system failure. Volumes that were
never written or that were quiescent (that is, had no active I/O) when the system
failure occurred could not have had outstanding writes and do not require
resynchronization.
Dirty flags
VxVM records when a volume is first written to and marks it as dirty. When a volume
is closed by all processes or stopped cleanly by the administrator, and all writes
have been completed, VxVM removes the dirty flag for the volume. Only volumes
that are marked dirty require resynchronization.
Resynchronization process
The process of resynchronization depends on the type of volume. For mirrored
volumes, resynchronization is done by placing the volume in recovery mode (also
called read-writeback recovery mode). Resynchronization of data in the volume is
done in the background. This allows the volume to be available for use while
recovery is taking place. RAID-5 volumes that contain RAID-5 logs can replay
those logs. If no logs are available, the volume is placed in reconstruct-recovery
mode and all parity is regenerated.
Hot-relocation
Hot-relocation is a feature that allows a system to react automatically to I/O failures
on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy
and access to those objects. VxVM detects I/O failures on objects and relocates
the affected subdisks. The subdisks are relocated to disks designated as spare
disks or to free space within the disk group. VxVM then reconstructs the objects
that existed before the failure and makes them accessible again.
When a partial disk failure occurs (that is, a failure affecting only some subdisks on
a disk), redundant data on the failed portion of the disk is relocated. Existing volumes
on the unaffected portions of the disk remain accessible.
See How hot-relocation works on page 544.
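For example, assuming a disk group named mydg that contains a disk with the disk
media name mydg01 (placeholder names), the following command designates that disk
as a hot-relocation spare:

# vxedit -g mydg set spare=on mydg01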
If an instant snap DCO volume is associated with a volume, a portion of the DCO
volume can be used to store the DRL log. There is no need to create a separate
DRL log for a volume which has an instant snap DCO volume.
Sequential DRL
Some volumes, such as those that are used for database replay logs, are written
sequentially and do not benefit from delayed cleaning of the DRL bits. For these
volumes, sequential DRL can be used to limit the number of dirty regions. This
allows for faster recovery. However, if applied to volumes that are written to
randomly, sequential DRL can be a performance bottleneck as it limits the number
of parallel writes that can be carried out.
The maximum number of dirty regions allowed for sequential DRL is controlled by
a tunable as detailed in the description of voldrl_max_seq_dirty.
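As a sketch, assuming a disk group named mydg and a volume name logvol (placeholder
names), and that the drlseq log type is available in your release (see the
vxassist(1M) manual page), a command of the following general form creates a
mirrored volume that uses sequential DRL:

# vxassist -g mydg make logvol 10g layout=mirror nmirror=2 logtype=drlseq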
Note: To use SmartSync with volumes that contain file systems, see the discussion
of the Oracle Resilvering feature of Veritas File System (VxFS).
The following section describes how to configure VxVM raw volumes and SmartSync.
The database uses the following types of volumes:
Data volumes are the volumes used by the database (control files and tablespace
files).
Redo log volumes contain redo logs of the database.
SmartSync works with these two types of volumes differently, so they must be
configured as described in the following sections.
Volume snapshots
Veritas Volume Manager provides the capability for taking an image of a volume
at a given point in time. Such an image is referred to as a volume snapshot. Such
snapshots should not be confused with file system snapshots, which are point-in-time
images of a Veritas File System.
Figure 3-24 shows how a snapshot volume represents a copy of an original volume
at a given point in time.
Figure 3-24        Volume snapshot as a point-in-time image of a volume (times T1 through T4)
Even though the contents of the original volume can change, the snapshot volume
preserves the contents of the original volume as they existed at an earlier time.
The snapshot volume provides a stable and independent base for making backups
of the contents of the original volume, or for other applications such as decision
support. In the figure, the contents of the snapshot volume are eventually
resynchronized with the original volume at a later point in time.
Another possibility is to use the snapshot volume to restore the contents of the
original volume. This may be useful if the contents of the original volume have
become corrupted in some way.
Warning: If you write to the snapshot volume, it may no longer be suitable for use
in restoring the contents of the original volume.
One type of volume snapshot in VxVM is the third-mirror break-off type. This name
comes from its implementation where a snapshot plex (or third mirror) is added to
a mirrored volume. The contents of the snapshot plex are then synchronized from
the original plexes of the volume. When this synchronization is complete, the
snapshot plex can be detached as a snapshot volume for use in backup or decision
support applications. At a later time, the snapshot plex can be reattached to the
original volume, requiring a full resynchronization of the snapshot plex's contents.
The FastResync feature was introduced to track writes to the original volume. This
tracking means that only a partial, and therefore much faster, resynchronization is
required on reattaching the snapshot plex. In later releases, the snapshot model
was enhanced to allow snapshot volumes to contain more than a single plex,
reattachment of a subset of a snapshot volume's plexes, and persistence of
FastResync across system reboots or cluster restarts.
Release 4.0 of VxVM introduced full-sized instant snapshots and space-optimized
instant snapshots, which offer advantages over traditional third-mirror snapshots
such as immediate availability and easier configuration and administration. You
can also use the third-mirror break-off usage model with full-sized snapshots, where
this is necessary for write-intensive applications.
For information about how and when to use volume snapshots, see the Symantec
Storage Foundation and High Availability Solutions Solutions Guide.
See the vxassist(1M) manual page.
See the vxsnap(1M) manual page.
Table 3-2    Comparison of snapshot features

The table compares snapshot features for the full-sized instant (vxsnap),
space-optimized instant (vxsnap), and break-off (vxassist or vxsnap) snapshot
types, including whether synchronization of the snapshot contents can be
controlled.
Full-sized instant snapshots are easier to configure and offer more flexibility of use
than do traditional third-mirror break-off snapshots. For preference, new volumes
should be configured to use snapshots that have been created using the vxsnap
command rather than using the vxassist command. Legacy volumes can also be
reconfigured to use vxsnap snapshots, but this requires rewriting of administration
scripts that assume the vxassist snapshot model.
FastResync
Note: Only certain Storage Foundation and High Availability Solutions products
have a license to use this feature.
Persistent FastResync
[Mirrored volume with two data plexes, and a DCO volume with two DCO plexes]
Associated with the volume are a DCO object and a DCO volume with two plexes.
Create an instant snapshot by using the vxsnap make command, or create a
traditional third-mirror snapshot by using the vxassist snapstart command.
Figure 3-26 shows how a snapshot plex is set up in the volume, and how a disabled
DCO plex is associated with it.
Figure 3-26    [Volume with a snapshot data plex added: three data plexes, and
a DCO volume containing two DCO plexes plus a disabled DCO plex associated
with the snapshot plex]
Multiple snapshot plexes and associated DCO plexes may be created in the volume
by re-running the vxassist snapstart command for traditional snapshots, or the
vxsnap make command for space-optimized snapshots. You can create up to a
total of 32 plexes (data and log) in a volume.
A traditional snapshot volume is created from a snapshot plex by running the
vxassist snapshot operation on the volume. For instant snapshots, however, the
vxsnap make command makes an instant snapshot volume immediately available
for use. There is no need to run an additional command.
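The following sketch contrasts the two models. The disk group, volume, and snapshot names are illustrative, and the instant snapshot example assumes that the volume has already been prepared with vxsnap prepare:

# vxassist -g mydg snapstart myvol
# vxassist -g mydg snapshot myvol snapvol

# vxsnap -g mydg make source=myvol/newvol=snapvol/nmirror=1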
Figure 3-27 shows how the creation of the snapshot volume also sets up a DCO
object and a DCO volume for the snapshot volume.
Figure 3-27    [Mirrored volume with two data plexes, a snap object, and a DCO
volume with two DCO log plexes; the snapshot volume has one data plex, a
snap object, and a DCO volume with one DCO log plex]
The DCO volume contains the single DCO plex that was associated with the
snapshot plex. If two snapshot plexes were taken to form the snapshot volume, the
DCO volume would contain two plexes. For space-optimized instant snapshots,
the DCO object and DCO volume are associated with a snapshot volume that is
created on a cache object and not on a VxVM disk.
Associated with both the original volume and the snapshot volume are snap objects.
The snap object for the original volume points to the snapshot volume, and the
snap object for the snapshot volume points to the original volume. This allows VxVM
to track the relationship between volumes and their snapshots even if they are
moved into different disk groups.
The snap objects in the original volume and snapshot volume are automatically
deleted in the following circumstances:
For traditional snapshots, the vxassist snapback operation is run to return all
of the plexes of the snapshot volume to the original volume.
If the volumes are in different disk groups, the command must be run separately
on each volume.
For full-sized instant snapshots, the vxsnap reattach operation is run to return
all of the plexes of the snapshot volume to the original volume.
For full-sized instant snapshots, the vxsnap dis or vxsnap split operations
are run on a volume to break the association between the original volume and
the snapshot volume. If the volumes are in different disk groups, the command
must be run separately on each volume.
Note: The vxsnap reattach, dis and split operations are not supported for
space-optimized instant snapshots.
See the vxassist(1M) manual page.
See the vxsnap(1M) manual page.
Version 0 DCO volume layout
This version of the DCO volume layout only supports legacy snapshots
(vxassist snapshots). The DCO object manages information about the
FastResync maps. These maps track writes to the original volume and
to each of up to 32 snapshot volumes since the last snapshot
operation. Each plex of the DCO volume on disk holds 33 maps, each
of which is 4 blocks in size by default.
VxVM software continues to support the version 0 (zero) layout for
legacy volumes.
where:
acmsize = (volume_size / (region_size*4))
per-volume_map_size = (volume_size/region_size*8)
drlmapsize = 1M, by default
For a 100GB volume, the size of the DCO volume with the default regionsize of
64KB is approximately 36MB.
Create the DCOs for instant snapshots by using the vxsnap prepare command or
by specifying the options logtype=dco dcoversion=20 while creating a volume
with the vxassist make command.
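For example, either of the following sets up an instant snap DCO. The object names and sizes are illustrative, and the region size is shown at its 64KB default:

# vxsnap -g mydg prepare myvol regionsize=64k ndcomirs=2
# vxassist -g mydg make newvol 10g logtype=dco dcoversion=20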
For an instant snap DCO volume, the size of the map is increased and the size
of the region that is tracked by each bit in the map stays the same.
For a version 0 DCO volume, the size of the map remains the same and the
region size is increased.
In either case, the part of the map that corresponds to the grown area of the volume
is marked as dirty so that this area is resynchronized. The snapback operation
fails if it attempts to create an incomplete snapshot plex. In such cases, you must
grow the replica volume, or the original volume, before invoking any of the
commands vxsnap reattach, vxsnap restore, or vxassist snapback. Growing
the two volumes separately can lead to a snapshot that shares physical disks with
another mirror in the volume. To prevent this, grow the volume after the snapback
command is complete.
FastResync limitations
The following limitations apply to FastResync:
Persistent FastResync is supported for RAID-5 volumes, but this prevents the
use of the relayout or resize operations on the volume while a DCO is associated
with it.
When a subdisk is relocated, the entire plex is marked dirty and a full
resynchronization becomes necessary.
Any operation that changes the layout of a replica volume can mark the
FastResync change map for that snapshot dirty and require a full
resynchronization during snapback. Operations that cause this include subdisk
split, subdisk move, and online relayout of the replica. It is safe to perform these
operations after the snapshot is completed.
See the vxassist (1M) manual page.
See the vxplex (1M) manual page.
See the vxvol (1M) manual page.
Volume sets
Volume sets are an enhancement to Veritas Volume Manager (VxVM) that allow
several volumes to be represented by a single logical object. All I/O from and to
the underlying volumes is directed by way of the I/O interfaces of the volume set.
Veritas File System (VxFS) uses volume sets to manage multi-volume file systems
and the SmartTier feature. This feature allows VxFS to make best use of the different
performance and availability characteristics of the underlying volumes. For example,
file system metadata can be stored on volumes with higher redundancy, and user
data on volumes with better performance.
See Creating a volume set on page 452.
Functionality                                   Description

Imports the hardware copies as a clone disk     If you choose to import the hardware copies of the disks of a
group or as a new standard disk group.          VxVM disk group, VxVM identifies the disks as clone disks. You
                                                can choose whether to maintain the clone disk status or create
                                                a new standard disk group.

Detects the LUN class of the array.
does not have a UDID, or when VxVM initializes the disk. The exact make-up of
the UDID depends on the array storage library (ASL). Future versions of VxVM may
use different formats for new arrays.
When VxVM discovers a disk with a UDID, VxVM compares the current UDID value
(the value determined from the hardware attributes) to the UDID that is already
stored on the disk. If the UDID values do not match between the UDID value
determined by the DDL and the on-disk UDID, VxVM sets the udid_mismatch flag
for the disk.
The udid_mismatch flag generally indicates that the disk is a hardware copy of a
VxVM disk. The hardware copy has a copy of the VxVM private region of the original
disk, including the UDID. The UDID already stored in the VxVM private region
matches the attributes of the original hardware disk, but does not match the value
on the hardware disk that is the copy.
With the UDID matching feature, VxVM can prevent an inconsistent set of disks
from being presented to the host. This functionality enables you to
import a disk group composed of LUN snapshots on the same host as the original
LUNs. When you import the disks identified with the udid_mismatch flag, VxVM
sets the clone_disk flag on the disk. With care, multiple hardware images of the
original LUN can be simultaneously managed and imported on the same host as
the original LUN.
See Importing a disk group containing hardware cloned disks on page 637.
If a system only sees the copy (or clone) devices, you can remove the clone_disk
flags. Only remove the clone_disk flags if you are sure there is no risk. For example,
you must make sure that there are not two physical volumes that are copies of the
same base physical volume at different times.
If the udid_mismatch flag is set incorrectly on a disk that is not a clone disk, you
can remove the udid_mismatch flag and treat the disk as a standard disk.
See the Storage Foundation and High Availability Solutions Troubleshooting Guide.
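For example, assuming a hypothetical device name, you might refresh the UDID that is stored on a disk that is wrongly flagged, or clear the clone flag on a copy that is the only device the system sees:

# vxdisk updateudid emc0_27
# vxdisk set emc0_27 clone=off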
Chapter

Table 4-1    Veritas File System features

Feature                          Description

Cross-platform data sharing

Data deduplication

Defragmentation

blkclear

Enhanced performance mode

Extent attributes

Extent-based allocation

File Change Log                  The VxFS File Change Log (FCL) tracks changes to files and
                                 directories in a file system. The File Change Log can be used
                                 by applications such as backup products, webcrawlers, search
                                 and indexing engines, and replication software that typically
                                 scan an entire file system searching for modifications since a
                                 previous scan. FCL functionality is a separately licensed feature.
                                 See About Veritas File System File Change Log on page 709.

File compression

File replication

FileSnaps

Improved synchronous writes

maxlink support

Partitioned directories

Quotas

Reverse path name lookup         The reverse path name lookup feature obtains the full path
                                 name of a file or directory from the inode number of that file or
                                 directory. The reverse path name lookup feature can be useful
                                 for a variety of applications, such as for clients of the VxFS File
                                 Change Log feature, in backup and restore utilities, and for
                                 replication products. Typically, these applications store
                                 information by inode numbers because a path name for a file
                                 or directory can be very long, thus the need for an easy method
                                 of obtaining a path name.
                                 See About reverse path name lookup on page 718.

SmartTier

Support for large files and      VxFS supports files larger than two gigabytes and large file
large file systems               systems up to 256 terabytes.

Thin Reclamation
Caching advisories
See Cache advisories on page 294.
Partitioned directories
See the vxtunefs(1M) and fsadm_vxfs(1M) manual pages.
copied to the page cache. The actual allocations to the file occur when the scheduler
thread picks the file for allocation. If the file is truncated or removed, allocations are
not required.
Delayed allocation is turned on by default for extending writes. Delayed allocation
is not dependent on the file system disk layout version. This feature does not require
any mount options. You can turn off and turn on this feature by using the vxtunefs
command. You can display the delayed allocation range in the file by using the
fsmap command.
See the vxtunefs(1M) and fsmap(1M) manual pages.
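For example, assuming that the dalloc_enable tunable controls delayed allocation and that the file system is mounted at /mnt1, the feature could be turned off and on, and the allocation state of a file displayed, as follows:

# vxtunefs -o dalloc_enable=0 /mnt1
# vxtunefs -o dalloc_enable=1 /mnt1
# fsmap /mnt1/datafile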
For instances where the file data must be written to the disk immediately, delayed
allocation is disabled on the file. Examples of such instances are
direct I/O, concurrent I/O, FDD/ODM access, and synchronous I/O. Delayed
allocation is not supported on memory-mapped files, BSD quotas, and shared mount
points in a Cluster File System (CFS). When BSD quotas are enabled on a file
system, delayed allocation is turned off automatically for that file system.
About defragmentation
Free resources are initially aligned and allocated to files in an order that provides
optimal performance. On an active file system, the original order of free resources
is lost over time as files are created, removed, and resized. The file system is spread
farther along the disk, leaving unused gaps or fragments between areas that are
in use. This process is known as fragmentation and leads to degraded performance
because the file system has fewer options when assigning a free extent to a file (a
group of contiguous data blocks).
VxFS provides the online administration utility fsadm to resolve the problem of
fragmentation.
The fsadm utility defragments a mounted file system by performing the following
actions:
Removing unused space from directories
Making all small files contiguous
Consolidating free blocks for file system use
This utility can run on demand and should be scheduled regularly as a cron job.
See the fsadm_vxfs(1M) manual page.
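For example, a root crontab entry could run extent and directory reorganization each week; the first line shows the interactive command and the second a sample crontab entry. The mount point and schedule are illustrative:

# fsadm -t vxfs -e -d /mnt1

0 2 * * 0 /opt/VRTS/bin/fsadm -t vxfs -e -d /mnt1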
Because these functions are provided using VxFS-specific IOCTL system calls,
most existing UNIX system applications do not use them. For portability reasons,
these applications must check which file system type they are using before using
these functions.
Section
Provisioning storage
Chapter
Set up the LUN. See the documentation for your storage array for information
about how to create, mask, and bind the LUN.
Initialize the LUNs for Veritas Volume Manager (VxVM), using one of the
following commands.
The recommended method is to use the vxdisksetup command.
# vxdisksetup -i 3PARDATA0_1
# vxdisk init 3PARDATA0_1
If you do not have a disk group for your LUN, create the disk group:
# vxdg init dg1 dev1=3PARDATA0_1
If you already have a disk group for your LUN, add the LUN to the disk
group:
# vxdg -g dg1 adddisk 3PARDATA0_1
Grow the volume and the file system to the desired size. For example:
# vxresize -b -F vxfs -g dg1 vol1 200g
Grow the existing LUN. See the documentation for your storage array for
information about how to create, mask, and bind the LUN.
Make Veritas Volume Manager (VxVM) aware of the new LUN size.
# vxdisk -g dg1 resize 3PARDATA0_1
Grow the volume and the file system to the desired size:
# vxresize -b -F vxfs -g dg1 vol1 200g
Chapter
Advanced allocation methods for configuring storage
This chapter includes the following topics:
Site-based allocation
Additionally, when you modify existing volumes using the vxassist command, the
vxassist command automatically modifies underlying or associated objects. The
vxassist command uses default values for many volume attributes, unless you
provide specific values to the command line. You can customize the default behavior
of the vxassist command by customizing the default values.
See Setting default values for vxassist on page 117.
The vxassist command creates volumes in a default disk group according to the
default rules. To use a different disk group, specify the -g diskgroup option to the
vxassist command.
See Rules for determining the default disk group on page 585.
If you want to assign particular characteristics for a certain volume, you can specify
additional attributes on the vxassist command line. These can be storage
specifications to select certain types of disks for allocation, or other attributes such
as the stripe unit width, number of columns in a RAID-5 or stripe volume, number
of mirrors, number of logs, and log type.
For details of available vxassist keywords and attributes, refer to the vxassist(1M)
manual page.
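For example, a single command can combine several such attributes. The names and sizes here are illustrative:

# vxassist -g mydg make datavol 20g layout=raid5 ncol=4 nlog=2 stripeunit=32k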
You can use allocation attributes to specify the types of allocation behavior shown
in Table 6-1.

Table 6-1    Allocation behavior

Media types
Ordered allocation
Site-based allocation
The vxassist utility also provides various constructs to help define and manage
volume allocations, with efficiency and flexibility.
#   By default:
#     create unmirrored, unstriped volumes
#     allow allocations to span drives
#     with RAID-5 create a log, with mirroring don't create a log
#     align allocations on cylinder boundaries
layout=nomirror,nostripe,span,nocontig,raid5log,noregionlog,diskalign

#   use the fsgen usage type, except when creating RAID-5 volumes
usetype=fsgen

#   allow only root access to a volume
mode=u=rw,g=,o=
user=root
group=root

#   by default, create 1 log copy for both mirroring and RAID-5 volumes
nregionlog=1
nraid5log=1

#   use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k

#   use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k
Rules streamline your typing and reduce errors. You can define relatively complex
allocation rules once in a single location and reuse them.
Rules let you standardize behaviors in your environment, including across a set
of servers.
For example, you can create allocation rules so that a set of servers can standardize
their storage tiering. Suppose you had the following requirements:
Tier 1
Tier 2
Tier 0
You can create rules for each volume allocation requirement and name the rules
tier1, tier2, and tier0.
You can also define rules so that each time you create a volume for a particular
purpose, the volume is created with the same attributes. For example, to create
the volume for a production database, you can create a rule called productiondb.
To create standardized volumes for home directories, you can create a rule called
homedir. To standardize your high performance index volumes, you can create a
rule called dbindex.
Use C language style quoting for the strings that may include embedded spaces,
new lines, or tabs. For example, use quotes around the text for the description
attribute.
Within the rule file, a volume allocation rule has the following format:
volume rule rulename vxassist_attributes
This syntax defines a rule named rulename which is a short-hand for the listed
vxassist attributes. Rules can reference other rules using an attribute of
rule=rulename[,rulename,...], which adds all the attributes from that rule into
the rule currently being defined. The attributes you specify in a rule definition override
any conflicting attributes that are in a rule that you specify by reference. You can
add a description to a rule with the attribute description=description_text.
The following is a basic rule file. The first rule in the file, base, defines the logtype
and persist attributes. The remaining rules in the file tier0, tier1, and tier2
reference this rule and also define their own tier-specific attributes. Referencing a
rule lets you define attributes in one place and reuse them in other rules.
# Create tier 1 volumes mirrored between disk arrays, tier 0 on SSD,
# and tier 2 as unmirrored. Always use FMR DCO objects.
volume rule base { logtype=dco persist=yes }
volume rule tier0 { rule=base mediatype:ssd tier=tier0 }
volume rule tier1 { rule=base mirror=enclosure tier=tier1 }
volume rule tier2 { rule=base tier=tier2 }
The following rule file contains a more complex definition that runs across several
lines.
volume rule appXdb_storage {
description="Create storage for the database of Application X"
rule=base
siteconsistent=yes
mirror=enclosure
}
In the following example, when you create the volume vol1 in disk group dg3, you
can specify the tier1 rule on the command line. In addition to the attributes you
enter on the command line, vol1 is given the attributes that you defined in tier1.
vxassist -g dg3 make vol1 200m rule=tier1
The following vxprint command displays the attributes of disk group dg3. The
output includes the new volume, vol1.
# vxprint -g dg3
TY NAME                ASSOC       KSTATE   LENGTH  PLOFFS STATE   TUTIL0 PUTIL0
dg dg3                 dg3         -        -       -      -       -      -
v  vol1                fsgen       ENABLED  409600  -      ACTIVE  -      -
pl vol1-01             vol1        ENABLED  409600  -      ACTIVE  -      -
sd ibm_ds8x000_0266-01 vol1-01     ENABLED  409600  0      -       -      -
pl vol1-02             vol1        ENABLED  409600  -      ACTIVE  -      -
sd ibm_ds8x000_0267-01 vol1-02     ENABLED  409600  0      -       -      -
dc vol1_dco            vol1        -        -       -      -       -      -
v  vol1_dcl            gen         ENABLED  144     -      ACTIVE  -      -
pl vol1_dcl-01         vol1_dcl    ENABLED  144     -      ACTIVE  -      -
sd ibm_ds8x000_0266-02 vol1_dcl-01 ENABLED  144     0      -       -      -
pl vol1_dcl-02         vol1_dcl    ENABLED  144     -      ACTIVE  -      -
sd ibm_ds8x000_0267-02 vol1_dcl-02 ENABLED  144     0      -       -      -
The following vxassist command confirms that vol1 is in the tier tier1. The
application of rule tier1 was successful.
# vxassist -g dg3 listtag
TY NAME   DISKGROUP   TAG
=========================================================
v  vol1   dg3         vxfs.placement_class.tier1
move
relayout
mirror
add a log

NAME          VALUE
vxmediatype   ssd
vxmediatype   ssd
The following command creates a volume, vol1, in the disk group dg3. rule1 is
specified on the command line, so those attributes are also applied to vol1.
The following command shows that the volume vol1 is created off the SSD device
ibm_ds8x000_0266 as specified in rule1.
# vxprint -g dg3
TY NAME                ASSOC    KSTATE   LENGTH  PLOFFS STATE   TUTIL0 PUTIL0
dg dg3                 dg3      -        -       -      -       -      -
v  vol1                fsgen    ENABLED  204800  -      ACTIVE  -      -
pl vol1-01             vol1     ENABLED  204800  -      ACTIVE  -      -
sd ibm_ds8x000_0266-01 vol1-01  ENABLED  204800  0      -       -      -
The following command displays the attributes that are defined in rule1.
# vxassist -g dg3 help showattrs rule=rule1
alloc=mediatype:ssd
persist=extended
If no persistent attributes were defined, the following command would grow vol1 on
Hard Disk Drive (HDD) devices. However, mediatype:ssd was defined as a persistent
attribute at the beginning of this section, so the command honors this original intent
and grows the volume on SSD devices.
# vxassist -g dg3 growby vol1 1g
The following vxprint command confirms that the volume was grown on SSD
devices.
# vxprint -g dg3
TY NAME                ASSOC    KSTATE   LENGTH   PLOFFS  STATE   TUTIL0 PUTIL0
dg dg3                 dg3      -        -        -       -       -      -
v  vol1                fsgen    ENABLED  2301952  -       ACTIVE  -      -
pl vol1-01             vol1     ENABLED  2301952  -       ACTIVE  -      -
sd ibm_ds8x000_0266-01 vol1-01  ENABLED  2027264  0       -       -      -
sd ibm_ds8x000_0268-01 vol1-01  ENABLED  274688   2027264 -       -      -
When the above rule file is used, you can specify the alias atyp for allocation. For
example, the following constraint specification allocates storage from A/A arrays
for the volume creation.
# vxassist -g dgname make volname volsize use=atyp:A/A
Note: The site class always has the highest precedence, and its order cannot be
overridden.
Define the customized precedence order in a rule file. The higher the order number,
the higher is the class precedence.
The following shows the default precedence order, for the class names supported
with mirror and stripe separation or confinement constraints.
site          order=1000
vendor        order=900
arrayproduct  order=800
array         order=700
arrayport     order=600
hostport      order=400
The acceptable range for the precedence order is between 0 and 1000.
For example, the array class has a higher priority than the hostport class by default.
To make the hostport class have a higher priority, assign the hostport class a higher
order number. To define the order for the classes, include the following statement
in a rule file:
class define array order=400
class define hostport order=700
When the above rule is used, the following command mirrors across hostport class
rather than the array class.
# vxassist -g dgname make volname volsize mirror=array,hostport
You can use the custom disk classes like other storage-specification disk classes,
to specify vxassist allocation constraints. Define the custom disk classes in a rule
file.
Example
With the following definition in the rule file, the user-defined property poolname
is associated to the referenced disks. All devices that have the array vendor property
defined as HITACHI or IBM, are marked as poolname finance. All devices that
have the array vendor property defined as DGC or EMC, are marked as poolname
admin.
disk properties vendor:HITACHI {
poolname:finance
}
disk properties vendor:IBM {
poolname:finance
}
disk properties vendor:DGC {
poolname:admin
}
disk properties vendor:EMC {
poolname:admin
}
You can now use the user-defined disk class poolname for allocation. For example,
the following constraint specification allocates disks from the poolname admin for
the volume creation.
# vxassist -g dgname make volname volsize poolname:admin
All of the specifications in the constraint must be satisfied, or the allocation fails.
A require constraint behaves as an intersection set. For example, allocate disks
from a particular array vendor AND with a particular array type.
For disk group version 180 or above, the use and require types of constraints are
persistent for the volume by default. The default preservation of these clauses
enables further allocation operations, such as grow operations, without breaking
the specified intents.
You can specify multiple storage specifications, separated by commas, in a use or
require clause on the vxassist command line. You can also specify multiple use
or require clauses on the vxassist command line.
See Interaction of multiple require and use constraints on page 128.
Use the vxassist intent management operations (setrule, changerule, clearrule,
listrule) to manage persistent require and use constraints.
See Management of the use and require type of persistent attributes on page 134.
require
logrequire
datarequire
use
loguse
datause
Multiple use constraints of the same scope are unionized, so that at least one
of the storage specifications is satisfied. That is, multiple use clauses; multiple
datause clauses; or multiple loguse clauses.
Multiple require constraints of the same scope are intersected, so that all the
storage specifications are satisfied. That is, multiple require clauses; multiple
datarequire clauses; or multiple logrequire clauses.
Require and use constraints of the same scope are mutually intersected. That
is, require clauses and use clauses; datarequire clauses and datause clauses;
or logrequire clauses and loguse clauses. At least one of the use storage
specifications must be satisfied and all of the require storage specifications are
satisfied. For example, if a datause clause and a datarequire clause are used
together, the allocation for the data must meet at least one of the datause
specifications and all of the datarequire specifications.
The vxassist command does not support a mix of general scope constraints
with data-specific or log-specific constraints. For example, a require clause
cannot be used along with the logrequire clause or a datarequire clause.
However, all possible constraint specifications can be achieved with the
supported combinations.
Table 6-2 summarizes these rules for the interaction of each type of constraint if
multiple constraints are specified.

Table 6-2    Interaction of multiple require and use constraints

Scope      Combinations
Data       datause - datause; datause - logrequire; datarequire - loguse;
           datarequire - logrequire
Log        loguse - loguse; logrequire - loguse; loguse - datause
General    use - use; use - require; require - require

Constraints in the data scope and constraints in the log scope apply to separate
allocations, so combinations that mix the two scopes (for example, datause with
logrequire) are applied independently.
The following examples use the disk group testdg, which contains the following
disks:

TY NAME            ASSOC           KSTATE LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg testdg          testdg          -      -       -      -      -      -
dm ams_wms0_359    ams_wms0_359    -      2027264 -      -      -      -
dm ams_wms0_360    ams_wms0_360    -      2027264 -      -      -      -
dm ams_wms0_361    ams_wms0_361    -      2027264 -      -      -      -
dm ams_wms0_362    ams_wms0_362    -      2027264 -      -      -      -
dm emc_clariion0_0 emc_clariion0_0 -      4120320 -      -      -      -
dm emc_clariion0_1 emc_clariion0_1 -      4120320 -      -      -      -
dm emc_clariion0_2 emc_clariion0_2 -      4120320 -      -      -      -
dm emc_clariion0_3 emc_clariion0_3 -      4120320 -      -      -      -
To allocate both the data and the log on the disks that are attached to the particular
HBA and that have the array type A/A:
# vxassist -g testdg make v1 1G logtype=dco dcoversion=20 \
require=hostportid:06-08-02,arraytype:A/A
The following output shows the results of the above command. The command
allocated disk space for the data and the log on emc_clariion0 array disks, which
satisfy all the storage specifications in the require constraint:
# vxprint -g testdg
TY NAME               ASSOC           KSTATE  LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg testdg             testdg          -       -       -      -      -      -
dm ams_wms0_359       ams_wms0_359    -       2027264 -      -      -      -
dm ams_wms0_360       ams_wms0_360    -       2027264 -      -      -      -
dm ams_wms0_361       ams_wms0_361    -       2027264 -      -      -      -
dm ams_wms0_362       ams_wms0_362    -       2027264 -      -      -      -
dm emc_clariion0_0    emc_clariion0_0 -       4120320 -      -      -      -
dm emc_clariion0_1    emc_clariion0_1 -       4120320 -      -      -      -
dm emc_clariion0_2    emc_clariion0_2 -       4120320 -      -      -      -
dm emc_clariion0_3    emc_clariion0_3 -       4120320 -      -      -      -
v  v1                 fsgen           ENABLED 2097152 -      ACTIVE -      -
pl v1-01              v1              ENABLED 2097152 -      ACTIVE -      -
sd emc_clariion0_0-01 v1-01           ENABLED 2097152 0      -      -      -
dc v1_dco             v1              -       -       -      -      -      -
v  v1_dcl             gen             ENABLED 67840   -      ACTIVE -      -
pl v1_dcl-01          v1_dcl          ENABLED 67840   -      ACTIVE -      -
sd emc_clariion0_0-02 v1_dcl-01       ENABLED 67840   0      -      -      -
For the next example, the disk group contains the following disks:

TY NAME            ASSOC           KSTATE LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg testdg          testdg          -      -       -      -      -      -
dm ams_wms0_359    ams_wms0_359    -      2027264 -      -      -      -
dm ams_wms0_360    ams_wms0_360    -      2027264 -      -      -      -
dm ams_wms0_361    ams_wms0_361    -      2027264 -      -      -      -
dm ams_wms0_362    ams_wms0_362    -      2027264 -      -      -      -
dm emc_clariion0_0 emc_clariion0_0 -      4120320 -      -      -      -
dm hitachi_vsp0_3  hitachi_vsp0_3  -      4120320 -      -      -      -
To allocate both the data and the log on the disks that belong to the array ams_wms0
or the array emc_clariion0:
# vxassist -g testdg make v1 3G logtype=dco dcoversion=20 \
use=array:ams_wms0,array:emc_clariion0
The following output shows the results of the above command. The command
allocated disk space for the data and the log on disks that satisfy the arrays specified
in the use constraint.
# vxprint -g testdg
TY NAME               ASSOC           KSTATE  LENGTH  PLOFFS  STATE  TUTIL0 PUTIL0
dg testdg             testdg          -       -       -       -      -      -
dm ams_wms0_359       ams_wms0_359    -       2027264 -       -      -      -
dm ams_wms0_360       ams_wms0_360    -       2027264 -       -      -      -
dm ams_wms0_361       ams_wms0_361    -       2027264 -       -      -      -
dm ams_wms0_362       ams_wms0_362    -       2027264 -       -      -      -
dm emc_clariion0_0    emc_clariion0_0 -       4120320 -       -      -      -
dm hitachi_vsp0_3     hitachi_vsp0_3  -       4120320 -       -      -      -
v  v1                 fsgen           ENABLED 6291456 -       ACTIVE -      -
pl v1-01              v1              ENABLED 6291456 -       ACTIVE -      -
sd ams_wms0_359-01    v1-01           ENABLED 2027264 0       -      -      -
sd ams_wms0_360-01    v1-01           ENABLED 143872  2027264 -      -      -
sd emc_clariion0_0-01 v1-01           ENABLED 4120320 2171136 -      -      -
dc v1_dco             v1              -       -       -       -      -      -
v  v1_dcl             gen             ENABLED 67840   -       ACTIVE -      -
pl v1_dcl-01          v1_dcl          ENABLED 67840   -       ACTIVE -      -
sd ams_wms0_360-02    v1_dcl-01       ENABLED 67840   0       -      -      -
The following output shows the results of an allocation that specifies a datause
constraint for the data and a logrequire constraint for the log. The command
allocated disk space for the data and the log independently. The data space is
allocated on emc_clariion0 disks that satisfy the datause constraint. The log space
is allocated on ams_wms0 disks that are A/A-A arraytype and that satisfy the
logrequire constraint:
# vxprint -g testdg
TY NAME               ASSOC           KSTATE  LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg testdg             testdg          -       -       -      -      -      -
dm ams_wms0_359       ams_wms0_359    -       2027264 -      -      -      -
dm ams_wms0_360       ams_wms0_360    -       2027264 -      -      -      -
dm ams_wms0_361       ams_wms0_361    -       2027264 -      -      -      -
dm ams_wms0_362       ams_wms0_362    -       2027264 -      -      -      -
dm emc_clariion0_0    emc_clariion0_0 -       4120320 -      -      -      -
dm emc_clariion0_1    emc_clariion0_1 -       4120320 -      -      -      -
dm emc_clariion0_2    emc_clariion0_2 -       4120320 -      -      -      -
dm emc_clariion0_3    emc_clariion0_3 -       4120320 -      -      -      -
dm hitachi_vsp0_3     hitachi_vsp0_3  -       4120320 -      -      -      -
v  v1                 fsgen           ENABLED 2097152 -      ACTIVE -      -
pl v1-01              v1              ENABLED 2097152 -      ACTIVE -      -
sd emc_clariion0_0-01 v1-01           ENABLED 2097152 0      -      -      -
dc v1_dco             v1              -       -       -      -      -      -
v  v1_dcl             gen             ENABLED 67840   -      ACTIVE -      -
pl v1_dcl-01          v1_dcl          ENABLED 67840   -      ACTIVE -      -
sd ams_wms0_359-01    v1_dcl-01       ENABLED 67840   0      -      -      -
For the following example, the disk group contains the following disks:

TY NAME            ASSOC           KSTATE LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg testdg          testdg          -      -       -      -      -      -
dm ams_wms0_359    ams_wms0_359    -      2027264 -      -      -      -
dm ams_wms0_360    ams_wms0_360    -      2027264 -      -      -      -
dm ams_wms0_361    ams_wms0_361    -      2027264 -      -      -      -
dm ams_wms0_362    ams_wms0_362    -      2027264 -      -      -      -
dm emc_clariion0_0 emc_clariion0_0 -      4120320 -      -      -      -
dm emc_clariion0_1 emc_clariion0_1 -      4120320 -      -      -      -
dm emc_clariion0_2 emc_clariion0_2 -      4120320 -      -      -      -
dm emc_clariion0_3 emc_clariion0_3 -      4120320 -      -      -      -
dm hitachi_vsp0_3  hitachi_vsp0_3  -      4120320 -      -      -      -
To allocate data and log space on disks from emc_clariion0 or ams_wms0 array,
and disks that are multi-pathed:
The following output shows the results of the allocation. The data and log space is
on ams_wms0 disks, which satisfy the use as well as the require constraints:
# vxprint -g testdg
TY NAME               ASSOC           KSTATE  LENGTH  PLOFFS  STATE  TUTIL0 PUTIL0
dg testdg             testdg          -       -       -       -      -      -
dm ams_wms0_359       ams_wms0_359    -       2027264 -       -      -      -
dm ams_wms0_360       ams_wms0_360    -       2027264 -       -      -      -
dm ams_wms0_361       ams_wms0_361    -       2027264 -       -      -      -
dm ams_wms0_362       ams_wms0_362    -       2027264 -       -      -      -
dm emc_clariion0_0    emc_clariion0_0 -       4120320 -       -      -      -
dm emc_clariion0_1    emc_clariion0_1 -       4120320 -       -      -      -
dm emc_clariion0_2    emc_clariion0_2 -       4120320 -       -      -      -
dm emc_clariion0_3    emc_clariion0_3 -       4120320 -       -      -      -
dm hitachi_vsp0_3     hitachi_vsp0_3  -       4120320 -       -      -      -
v  v1                 fsgen           ENABLED 2097152 -       ACTIVE -      -
pl v1-01              v1              ENABLED 2097152 -       ACTIVE -      -
sd ams_wms0_359-01    v1-01           ENABLED 2027264 0       -      -      -
sd ams_wms0_360-01    v1-01           ENABLED 69888   2027264 -      -      -
dc v1_dco             v1              -       -       -       -      -      -
v  v1_dcl             gen             ENABLED 67840   -       ACTIVE -      -
pl v1_dcl-01          v1_dcl          ENABLED 67840   -       ACTIVE -      -
sd ams_wms0_360-02    v1_dcl-01       ENABLED 67840   0       -      -      -
setrule     Replaces any existing saved intents with the specified intents for the
            specified volume.
changerule  Appends the specified intents to the existing saved intents for the
            specified volume.
clearrule   Removes any saved intents for the specified volume.
listrule    Lists any saved intents for the specified volume. If no volume name is
            specified, the command shows the intents for all of the volumes.
The intent management operations only apply to the use or require type of persistent
constraints. The other type of persistent constraints are managed with the persist
attribute.
See Using persistent attributes on page 122.
To display the intents that are currently associated to a volume
To display the intents that are currently associated to a volume, use the
following command:
# vxassist [options] listrule [volume]
For example, to display the existing saved intents for the volume v1:
# vxassist -g testdg listrule v1
volume rule v1 {
require=array:ams_wms0
}
In this example, the volume v1 has an existing saved intent that requires the
array to be ams_wms0. For example, to display the existing saved intents for
the volume v1:
# vxassist -g testdg listrule v1
volume rule v1 {
require=array:ams_wms0
}
For example, to replace the array with the ds4100-0 array, specify the new
intent with the following command:
# vxassist -g testdg setrule v1 require=array:ds4100-0
In this example, the volume v1 has an existing saved intent that requires the
array to be ds4100-0. For example, to display the existing saved intents for
the volume v1:
# vxassist -g testdg listrule v1
volume rule v1 {
use=array:ds4100-0
}
For example, to add the ams_wms0 array in the use constraint, specify the
new intent with the following command:
# vxassist -g testdg changerule v1 use=array:ams_wms0
For example, to display the existing saved intents for the volume v1:
# vxassist -g testdg listrule v1
volume rule v1 {
require=multipathed:yes
use=array:emc_clariion0,array:ams_wms0
}
mirrored volumes
See Creating a mirrored volume on page 140.
striped volumes
See Creating a striped volume on page 142.
RAID-5 volumes
See Creating a RAID-5 volume on page 144.
Table 6-3

Layout type           Description

Concatenated

Striped

Mirrored

RAID-5                A volume that uses striping to spread data and parity evenly across
                      multiple disks in an array. Each stripe contains a parity stripe unit
                      and data stripe units. Parity can be used to reconstruct data if one
                      of the disks fails. In comparison to the performance of striped
                      volumes, write throughput of RAID-5 volumes decreases since
                      parity information needs to be updated each time data is modified.
                      However, in comparison to mirroring, the use of parity to implement
                      data redundancy reduces the amount of space required.
                      See RAID-5 (striping with parity) on page 65.

Mirrored-stripe

Layered Volume

Striped-mirror        A striped-mirror volume is created by configuring several
                      mirrored volumes as the columns of a striped volume. This
                      layout offers the same benefits as a non-layered mirrored-stripe
                      volume. In addition, it provides faster recovery as the failure of
                      a single disk does not force an entire striped plex offline.
                      See Mirroring plus striping (striped-mirror, RAID-1+0, or
                      RAID-10) on page 64.

Concatenated-mirror   A concatenated-mirror volume is created by concatenating
                      several mirrored volumes. This provides faster recovery as the
                      failure of a single disk does not force the entire mirror offline.
vxassist selects the layout based on the size of the volume. For smaller volumes,
vxassist uses the simpler mirrored concatenated (mirror-concat) layout. For larger
volumes, vxassist uses the layered concatenated mirror (concat-mirror) layout.
Specify the -b option if you want to make the volume immediately available
for use.
For example, to create the mirrored volume, volmir, in the disk group, mydg,
use the following command:
# vxassist -b -g mydg make volmir 5g layout=mirror
The following example shows how to create a volume with 3 mirrors instead
of the default of 2 mirrors:
# vxassist -b -g mydg make volmir 5g layout=mirror nmirror=3
Specify the -b option if you want to make the volume immediately available
for use.
Alternatively, first create a concatenated volume, and then mirror it.
Specify the -b option if you want to make the volume immediately available
for use.
Specify the -b option if you want to make the volume immediately available for use.
For example, to create the 10-gigabyte striped volume volzebra, in the disk group,
mydg, use the following command:
# vxassist -b -g mydg make volzebra 10g layout=stripe
This creates a striped volume with the default stripe unit size (64 kilobytes) and the
default number of stripes (2).
You can specify the disks on which the volumes are to be created by including the
disk names on the command line. For example, to create a 30-gigabyte striped
volume on three specific disks, mydg03, mydg04, and mydg05, use the following
command:
# vxassist -b -g mydg make stripevol 30g layout=stripe \
mydg03 mydg04 mydg05
To change the number of columns or the stripe width, use the ncolumn and
stripeunit modifiers with vxassist. For example, the following command creates
a striped volume with 5 columns and a 32-kilobyte stripe size:
# vxassist -b -g mydg make stripevol 30g layout=stripe \
stripeunit=32k ncol=5
Specify the -b option if you want to make the volume immediately available for use.
Alternatively, first create a striped volume, and then mirror it. In this case, the
additional data plexes may be either striped or concatenated.
See Adding a mirror to a volume on page 612.
Specify the -b option if you want to make the volume immediately available for use.
By default, Veritas Volume Manager (VxVM) attempts to create the underlying
volumes by mirroring subdisks rather than columns if the size of each column is
greater than the value for the attribute stripe-mirror-col-split-trigger-pt
that is defined in the vxassist defaults file.
If there are multiple subdisks per column, you can choose to mirror each subdisk
individually instead of each column. To mirror at the subdisk level, specify the layout
as stripe-mirror-sd rather than stripe-mirror.
Specify the -b option if you want to make the volume immediately available for use.
For example, to create the RAID-5 volume volraid together with 2 RAID-5 logs in
the disk group, mydg, use the following command:
# vxassist -b -g mydg make volraid 10g layout=raid5 nlog=2
This creates a RAID-5 volume with the default stripe unit size on the default number
of disks. It also creates two RAID-5 logs rather than the default of one log.
If you require RAID-5 logs, you must use the logdisk attribute to specify the disks
to be used for the log plexes.
RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated
with a RAID-5 volume has a complete copy of the logging information for the volume.
To support concurrent access to the RAID-5 array, the log should be several times
the stripe size of the RAID-5 plex.
It is suggested that you configure a minimum of two RAID-5 log plexes for each
RAID-5 volume. These log plexes should be located on different disks. Having two
RAID-5 log plexes for each RAID-5 volume protects against the loss of logging
information due to the failure of a single disk.
If you use ordered allocation when creating a RAID-5 volume on specified storage,
you must use the logdisk attribute to specify on which disks the RAID-5 log plexes
should be created. Use the following form of the vxassist command to specify the
disks from which space for the logs is to be allocated:
# vxassist [-b] [-g diskgroup] -o ordered make volume length \
layout=raid5 [ncol=number_columns] [nlog=number] \
[loglen=log_length] logdisk=disk[,disk,...] \
storage_attributes
For example, the following command creates a 3-column RAID-5 volume with the
default stripe unit size on disks mydg04, mydg05 and mydg06. It also creates two
RAID-5 logs on disks mydg07 and mydg08.
# vxassist -b -g mydg -o ordered make volraid 10g layout=raid5 \
ncol=3 nlog=2 logdisk=mydg07,mydg08 mydg04 mydg05 mydg06
The number of logs must equal the number of disks that is specified to logdisk.
See Specifying ordered allocation of storage to volumes on page 147.
See the vxassist(1M) manual page.
You can add more logs to a RAID-5 volume at a later time.
To add a RAID-5 log to an existing volume, use the following command:
# vxassist [-b] [-g diskgroup] addlog volume [loglen=length]
If you specify the -b option, adding the new log is a background task.
When you add the first log to a volume, you can specify the log length. Any logs
that you add subsequently are configured with the same length as the existing log.
For example, to create a log for the RAID-5 volume volraid, in the disk group mydg,
use the following command:
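A minimal sketch of that command, following the addlog syntax shown above:

# vxassist -g mydg addlog volraid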
Specify the -b option if you want to make the volume immediately available for use.
For example, to create the volume volspec with length 5 gigabytes on disks mydg03
and mydg04, use the following command:
# vxassist -b -g mydg make volspec 5g mydg03 mydg04
The vxassist command allows you to specify storage attributes. These give you
control over the devices, including disks and controllers, which vxassist uses to
configure a volume.
For example, you can specifically exclude the disk mydg05.
Note: The ! character is a special character in some shells. The following examples
show how to escape it in a bash shell.
# vxassist -b -g mydg make volspec 5g \!mydg05
The following example excludes all disks that are on controller c2:
# vxassist -b -g mydg make volspec 5g \!ctlr:c2
If you want a volume to be created using only disks from a specific disk group, use
the -g option to vxassist, for example:
# vxassist -g bigone -b make volmega 20g bigone10 bigone11
Any storage attributes that you specify for use must belong to the disk group.
Otherwise, vxassist will not use them to create a volume.
You can also use storage attributes to control how vxassist uses available storage,
for example, when calculating the maximum size of a volume, when growing a
volume or when removing mirrors or logs from a volume. The following example
excludes disks mydg07 and mydg08 when calculating the maximum size of a RAID-5
volume that vxassist can create using the disks in the disk group mydg:
# vxassist -b -g mydg maxsize layout=raid5 nlog=2 \!mydg07 \!mydg08
It is also possible to control how volumes are laid out on the specified storage.
See Specifying ordered allocation of storage to volumes on page 147.
vxassist also lets you select disks based on disk tags. Ordered allocation requires
that you specify the exact number of disks that are required to create a volume.
The order in which you specify the disks to vxassist is also significant.
If you specify the -o ordered option to vxassist when creating a volume, any
storage that you also specify is allocated in the following order:
Concatenate disks
Form columns
Form mirrors
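A sketch of such a command follows; the volume name and size are illustrative, while the layout, number of columns, and disk order match the description below:

# vxassist -b -g mydg -o ordered make mirstrvol 10g \
	layout=mirror-stripe ncol=3 mydg01 mydg02 mydg03 mydg04 mydg05 mydg06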
This command places columns 1, 2, and 3 of the first mirror on disks mydg01, mydg02,
and mydg03 respectively, and columns 1, 2, and 3 of the second mirror on disks
mydg04, mydg05, and mydg06 respectively.
Figure 6-1 shows an example of using ordered allocation to create a mirrored-stripe
volume.
Figure 6-1    [Mirrored-stripe volume: one striped plex with columns 1 to 3 on
mydg01-01, mydg02-01, and mydg03-01, mirrored by a second striped plex with
columns 1 to 3 on mydg04-01, mydg05-01, and mydg06-01]
For layered volumes, vxassist applies the same rules to allocate storage as for
non-layered volumes. For example, the following command creates a striped-mirror
volume with 2 columns:
# vxassist -b -g mydg -o ordered make strmirvol 10g \
layout=stripe-mirror ncol=2 mydg01 mydg02 mydg03 mydg04
This command mirrors column 1 across disks mydg01 and mydg03, and column 2
across disks mydg02 and mydg04.
Figure 6-2 shows an example of using ordered allocation to create a striped-mirror
volume.
Figure 6-2    [Striped-mirror volume: column 1 mirrored across mydg01-01 and
mydg03-01, and column 2 mirrored across mydg02-01 and mydg04-01, with the
mirrors forming a striped plex]
Additionally, you can use the col_switch attribute to specify how to concatenate
space on the disks into columns. For example, the following command creates a
mirrored-stripe volume with 2 columns:
# vxassist -b -g mydg -o ordered make strmir2vol 10g \
layout=mirror-stripe ncol=2 col_switch=3g,2g \
mydg01 mydg02 mydg03 mydg04 mydg05 mydg06 mydg07 mydg08
This command allocates 3 gigabytes from mydg01 and 2 gigabytes from mydg02 to
column 1, and 3 gigabytes from mydg03 and 2 gigabytes from mydg04 to column 2.
The mirrors of these columns are then similarly formed from disks mydg05 through
mydg08.
Figure 6-3 shows an example of using concatenated disk space to create a
mirrored-stripe volume.
Figure 6-3    [Mirrored-stripe volume: column 1 concatenated from mydg01-01
and mydg02-01 and column 2 from mydg03-01 and mydg04-01 in one striped
plex, mirrored by a second striped plex with columns formed from mydg05-01
through mydg08-01]
Other storage specification classes for controllers, enclosures, targets and trays
can be used with ordered allocation. For example, the following command creates
a 3-column mirrored-stripe volume between specified controllers:
# vxassist -b -g mydg -o ordered make mirstr2vol 80g \
layout=mirror-stripe ncol=3 \
ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6
This command allocates space for column 1 from disks on controller c1, for column
2 from disks on controller c2, and so on.
Figure 6-4 shows an example of using storage allocation to create a mirrored-stripe
volume across controllers.
Figure 6-4    Example of storage allocation used to create a mirrored-stripe
volume across controllers
[Columns 1 to 3 of one striped plex are allocated from disks on controllers c1,
c2, and c3; columns 1 to 3 of the mirroring striped plex are allocated from disks
on controllers c4, c5, and c6]
There are other ways in which you can control how vxassist lays out mirrored
volumes across controllers.
Site-based allocation
In a Remote Mirror configuration (also known as a campus cluster or stretch cluster),
the hosts and storage of a cluster are divided between two or more sites. These
sites are typically connected through a redundant high-capacity network that provides
access to storage and private link communication between the cluster nodes.
Configure the disk group in a Remote Mirror site to be site-consistent. When you
create volumes in such a disk group, the volumes are mirrored across all sites by
default.
prefer
Reads first from a plex that has been named as the preferred
plex.
select
siteread
split
Divides the read requests and distributes them across all the
available plexes.
For example, to set the read policy for the volume vol01 in disk group mydg to
round-robin, use the following command:
# vxvol -g mydg rdpol round vol01
For example, to set the policy for vol01 to read preferentially from the plex vol01-02,
use the following command:
# vxvol -g mydg rdpol prefer vol01 vol01-02
Chapter
Note: Creating a VxFS file system on a Logical Volume Manager (LVM) or Multiple
Device (MD) driver volume is not supported in this release. You also must convert
an underlying LVM volume to a VxVM volume before converting an ext2 or ext3 file system
to a VxFS file system. See the vxvmconvert(1M) manual page.
See the mkfs(1M) and mkfs_vxfs(1M) manual pages.
When you create a file system with the mkfs command, you can select the following
characteristics:
-m
Displays the command line that was used to create the file
system. The file system must already exist. This option enables
you to determine the parameters used to construct the file
system.
generic_options
-o specific_options
-o N
Displays the geometry of the file system and does not write
to the device.
-o largefiles
special
size
The following example creates a VxFS file system of 12288 sectors in size on a
VxVM volume.
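A sketch of that command, with an illustrative volume path:

# mkfs -t vxfs /dev/vx/rdsk/diskgroup/volume 12288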
With larger intent log sizes, recovery time is proportionately longer and the file
system may consume more system resources (such as memory) during normal
operation.
There are several system performance benchmark suites for which VxFS performs
better with larger log sizes. As with block sizes, the best way to pick the log size is
to try representative system loads against various sizes and pick the fastest.
Use the vxfsconvert command to convert an ext2 or ext3 file system to VxFS:
vxfsconvert [-l logsize] [-s size] [-efnNvyY] special
-e
-f
-l logsize
-n|N
-s size
Directs vxfsconvert to use free disk space past the current end of the
file system to store VxFS metadata.
-v
-y|Y
special
Specifies the name of the character (raw) device that contains the file
system to convert.
The following example converts an ext2 or ext3 file system to a VxFS file system
with an intent log size of 16384 blocks.
To convert an ext2 or ext3 file system to a VxFS file system
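A sketch of such a conversion, with an illustrative device path:

# vxfsconvert -l 16384 /dev/vx/rdsk/diskgroup/vol1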
Caching behavior can be altered with the mincache option, and the behavior of
O_SYNC and D_SYNC writes can be altered with the convosync option.
See the fcntl(2) manual page.
The delaylog and tmplog modes can significantly improve performance. The
improvement over log mode is typically about 15 to 20 percent with delaylog; with
tmplog, the improvement is even higher. Performance improvement varies,
depending on the operations being performed and the workload. Read/write intensive
loads should show less improvement, while file system structure intensive loads,
such as mkdir, create, and rename, may show over 100 percent improvement.
The best way to select a mode is to test representative system loads against the
logging modes and compare the performance results.
Most of the modes can be used in combination. For example, a desktop machine
might use both the blkclear and mincache=closesync modes.
The mount command automatically runs the VxFS fsck command to clean up the
intent log if the mount command detects a dirty log in the file system. This
functionality is only supported on file systems mounted on a Veritas Volume Manager
(VxVM) volume.
See the mount_vxfs(1M) manual page.
To mount a file system
vxfs
generic_options
specific_options
Mounts a file system in shared mode. Available only with the VxFS
cluster file system feature.
special
mount_point
-r
mincache=closesync
mincache=direct
mincache=dsync
mincache=unbuffered
mincache=tmpcache
To improve performance, most file systems do not synchronously update data and
inode changes to disk. If the system crashes, files that have been updated within
the past minute are in danger of losing data. With the mincache=closesync mode,
if the system crashes or is switched off, only open files can lose data. A
mincache=closesync mode file system could be approximately 15 percent slower
than a standard mode VxFS file system, depending on the workload.
The following describes where to use the mincache modes:
convosync=closesync
convosync=delay
convosync=direct
convosync=dsync
convosync=unbuffered
disable policy
nodisable policy
mdisable policy
disable policy
If disable is selected, VxFS disables the file system after detecting any I/O error.
You must then unmount the file system and correct the condition causing the I/O
error. After the problem is repaired, run fsck and mount the file system again. In
most cases, a replay fsck is sufficient to repair the file system. A full fsck is required only in
cases of structural damage to the file system's metadata. Select disable in
environments where the underlying storage is redundant, such as RAID-5 or mirrored
disks.
nodisable policy
If nodisable is selected, when VxFS detects an I/O error, it sets the appropriate
error flags to contain the error, but continues running. Note that the degraded
condition indicates possible data or metadata corruption, not the overall performance
of the file system.
For file data read and write errors, VxFS sets the VX_DATAIOERR flag in the
super-block. For metadata read errors, VxFS sets the VX_FULLFSCK flag in the
super-block. For metadata write errors, VxFS sets the VX_FULLFSCK and
VX_METAIOERR flags in the super-block and may mark associated metadata as bad
on disk. VxFS then prints the appropriate error messages to the console.
You should stop the file system as soon as possible and repair the condition causing
the I/O error. After the problem is repaired, run fsck and mount the file system
again. Select nodisable if you want to implement the policy that most closely
resembles the error handling policy of the previous VxFS release.
mdisable policy
If mdisable (metadata disable) is selected, the file system is disabled if a metadata
read or write fails. However, the file system continues to operate if the failure is
confined to data extents. mdisable is the default ioerror mount option for cluster
mounts.
Specifying largefiles sets the largefiles flag. This enables the file system to
hold files that are two gigabytes or larger. This is the default option.
To clear the flag and prevent large files from being created:
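A sketch of clearing the flag on a mounted file system, assuming the fsadm -o nolargefiles option and an illustrative mount point:

# /opt/VRTS/bin/fsadm -t vxfs -o nolargefiles /mnt1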
To determine the current status of the largefiles flag, type either of the following
commands:
# mkfs -t vxfs -m special_device
# /opt/VRTS/bin/fsadm mount_point | special_device
This guarantees that when a file is closed, its data is synchronized to disk and
cannot be lost. Thus, after an application has exited and its files are closed, no data
is lost even if the system is immediately turned off.
To mount a temporary file system or to restore from backup:
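A sketch of such a mount, combining the temporary-use options described in this section; the device and mount point are illustrative:

# mount -t vxfs -o tmplog,mincache=tmpcache,convosync=delay \
	/dev/vx/dsk/diskgroup/volume /mnt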
This combination might be used for a temporary file system where performance is
more important than absolute data integrity. Any O_SYNC writes are performed as
delayed writes and delayed extending writes are not handled. This could result in
a file that contains corrupted data if the system crashes. Any file written 30 seconds
or so before a crash may contain corrupted data or be missing if this mount
combination is in effect. However, such a file system does significantly less disk
writes than a log file system, and should have significantly better performance,
depending on the application.
To mount a file system for synchronous writes:
# mount -t vxfs -o log,convosync=dsync \
/dev/vx/dsk/diskgroup/volume /mnt
layout version. A file system using the Version 7 or later disk layout can be up to
256 terabytes in size. The size to which a Version 7 or later disk layout file system
can be increased depends on the file system block size.
See the fsadm_vxfs(1M) and fdisk(8) manual pages.
vxfs
newsize
The size to which the file system will increase. The default unit is
sectors, but you can specify k or K for kilobytes, m or M for megabytes,
or g or G for gigabytes.
mount_point
-r rawdev
The following example extends a file system mounted at /mnt1 to 22528 sectors.
Example of extending a file system to 22528 sectors
The following example extends a file system mounted at /mnt1 to 500 gigabytes.
Example of extending a file system to 500 gigabytes
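Sketches of both operations, assuming fsadm's -b newsize option and the /mnt1 mount point used above:

# fsadm -t vxfs -b 22528 /mnt1
# fsadm -t vxfs -b 500g /mnt1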
Use the fsadm command to decrease the size of a VxFS file system:
fsadm [-t vxfs] [-b newsize] [-r rawdev] mount_point
vxfs          The file system type.
newsize       The size to which the file system will shrink. The default unit is
              sectors, but you can specify k or K for kilobytes, m or M for
              megabytes, or g or G for gigabytes.
mount_point   The file system's mount point.
-r rawdev     Specifies the path name of the raw device if there is no entry in
              /etc/fstab and fsadm cannot determine the raw device.
The following example shrinks a VxFS file system mounted at /mnt1 to 20480
sectors.
Example of shrinking a file system to 20480 sectors
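A representative invocation (a sketch; it assumes that the -b option of fsadm specifies
the new size):
# fsadm -t vxfs -b 20480 /mnt1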
The following example shrinks a file system mounted at /mnt1 to 450 gigabytes.
Example of shrinking a file system to 450 gigabytes
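A representative invocation (same assumption about the -b option of fsadm):
# fsadm -t vxfs -b 450g /mnt1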
Use the fsadm command to defragment a VxFS file system. The following options apply:
vxfs          The file system type.
-d            Reorganizes directories.
-D            Reports on directory fragmentation.
-e            Minimizes file system fragmentation by reorganizing extents.
-E            Reports on extent fragmentation.
-H            Displays the sizes in human-friendly units.
mount_point   The file system's mount point.
-r rawdev     Specifies the path name of the raw device if there is no entry in
              /etc/fstab and fsadm cannot determine the raw device.
Use the fsadm command to perform free space defragmentation of a VxFS file
system:
fsadm [-t vxfs] [-C] mount_point
vxfs
-C
mount_point
The following example minimizes the free space fragmentation of the file system
mounted at /mnt1.
Example of running free space defragmentation
Minimize the free space fragmentation of the VxFS file system mounted at /mnt1:
# fsadm -t vxfs -C /mnt1
Use the mount command to view the status of mounted file systems:
mount
This shows the file system type and mount options for all mounted file systems.
The following example displays information on mounted file systems by invoking
the mount command without options.
To display information on mounted file systems
special       The character (raw) device.
-v            Specifies verbose mode.
The following example uses the fstyp command to determine the file system type
of the /dev/vx/dsk/fsvol/vol1 device.
Use the fstyp command to determine the file system type of the device
/dev/vx/dsk/fsvol/vol1:
# fstyp -v /dev/vx/dsk/fsvol/vol1
The output indicates that the file system type is vxfs, and displays file system
information similar to the following:
vxfs
magic a501fcf5 version 7 ctime Tue Jun 23 18:29:39 2004
logstart 17 logend 1040
bsize 1024 size 1048576 dsize 1047255 ninode 0 nau 8
defiextsize 64 ilbsize 0 immedlen 96 ndaddr 10
aufirst 1049 emap 2 imap 0 iextop 0 istart 0
bstart 34 femap 1051 fimap 0 fiextop 0 fistart 0 fbstart 1083
nindir 2048 aulen 131106 auimlen 0 auemlen 32
auilen 0 aupad 0 aublocks 131072 maxtier 17
inopb 4 inopau 0 ndiripau 0 iaddrlen 8
bshift 10
inoshift 2 bmask fffffc00 boffmask 3ff checksum d7938aa1
oltext1 9 oltext2 1041 oltsize 8 checksum2 52a
free 382614 ifree 0
efree 676 413 426 466 612 462 226 112 85 35 14 3 6 5 4 4 0 0
Monitoring fragmentation
Fragmentation reduces performance and availability. Symantec recommends regular
use of the fragmentation reporting and reorganization facilities of the fsadm
command.
The easiest way to ensure that fragmentation does not become a problem is to
schedule regular defragmentation runs using the cron command.
Defragmentation scheduling should range from weekly (for frequently used file
systems) to monthly (for infrequently used file systems). Extent fragmentation should
be monitored with the fsadm command.
To determine the degree of fragmentation, use the following factors.
An unfragmented file system has the following characteristics:
Less than 1 percent of free space in extents of less than 8 blocks in length
Less than 5 percent of free space in extents of less than 64 blocks in length
More than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
A badly fragmented file system has one or more of the following characteristics:
Greater than 5 percent of free space in extents of less than 8 blocks in length
More than 50 percent of free space in extents of less than 64 blocks in length
Less than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
The optimal period for scheduling of extent reorganization runs can be determined
by choosing a reasonable interval, scheduling fsadm runs at the initial interval, and
running the extent fragmentation report feature of fsadm before and after the
reorganization.
The "before" result is the degree of fragmentation prior to the reorganization. If the
degree of fragmentation is approaching the figures for bad fragmentation, reduce
the interval between fsadm runs. If the degree of fragmentation is low, increase the
interval between fsadm runs.
The "after" result is an indication of how well the reorganizer has performed. The
degree of fragmentation should be close to the characteristics of an unfragmented
file system. If not, it may be a good idea to resize the file system; full file systems
tend to fragment and are difficult to defragment. It is also possible that the
reorganization is not being performed at a time during which the file system in
question is relatively idle.
Directory reorganization is not nearly as critical as extent reorganization, but regular
directory reorganization improves performance. It is advisable to schedule directory
reorganization for file systems when the extent reorganization is scheduled. The
following is a sample script that is run periodically at 3:00 A.M. from cron for a
number of file systems:
outfile=/var/spool/fsadm/out.`/bin/date +'%m%d'`
for i in /home /home2 /project /db
do
/bin/echo "Reorganizing $i"
/usr/bin/time /opt/VRTS/bin/fsadm -t vxfs -e -E -s $i
/usr/bin/time /opt/VRTS/bin/fsadm -t vxfs -s -d -D $i
done > $outfile 2>&1
Chapter
Extent attributes
This chapter includes the following topics:
Extent attributes
About extent attributes
The file size will be changed to incorporate the allocated space immediately
Some of the extent attributes are persistent and become part of the on-disk
information about the file, while other attributes are temporary and are lost after the
file is closed or the system is rebooted. The persistent attributes are similar to the
file's permissions and are written in the inode for the file. When a file is copied,
moved, or archived, only the persistent attributes of the source file are preserved
in the new file.
See Other extent attribute controls on page 179.
In general, the user will only set extent attributes for reservation. Many of the
attributes are designed for applications that are tuned to a particular pattern of I/O
or disk alignment.
See About Veritas File System I/O on page 290.
because the unused space fragments free space by breaking large extents into
smaller pieces. By erring on the side of minimizing fragmentation for the file system,
files may become so non-contiguous that their I/O characteristics would degrade.
Fixed extent sizes are particularly appropriate in the following situations:
If a file is large and sparse and its write size is fixed, a fixed extent size that is
a multiple of the write size can minimize space wasted by blocks that do not
contain user data as a result of misalignment of write and extent sizes. The
default extent size for a sparse file is 8K.
If a file is large and contiguous, a large fixed extent size can minimize the number
of extents in the file.
Custom applications may also use fixed extent sizes for specific reasons, such as
the need to align extents to cylinder or striping boundaries on disk.
How the fixed extent size works with shared extents
Veritas File System (VxFS) allows the user to set the fixed extent size option on a
file that controls the minimum allocation size of the file. If a file has shared extents
that must be unshared, the allocation that is done as a part of the unshare operation
ignores the fixed extent size option that is set on the file. The allocation size during
the unshare operation depends on the size of the write operation on the shared
region.
When the space reserved for a file will actually become part of the file
See Including an extent attribute reservation in the file on page 180.
increases the size of a file, this type of reservation does not perform zeroing of the
blocks included in the file and limits this facility to users with appropriate privileges.
The data that appears in the file may have been previously contained in another
file. For users who do not have the appropriate privileges, there is a variant request
that prevents such users from viewing uninitialized data.
force
ignore
The following example creates a file named file1 and preallocates 2 GB of disk
space for the file.
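A representative command (a sketch; it assumes that setext's -r option takes the
reservation in file system blocks and that the chgsize flag makes the reservation part
of the file immediately):
# setext -t vxfs -r 2097152 -f chgsize file1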
Bsize 1024   Reserve 2097152   Extent Size ...
The file file1 has a block size of 1024 bytes, 36 blocks reserved, a fixed extent
size of 3 blocks, and all extents aligned to 3 block boundaries. The file size
cannot be increased after the current reservation is exhausted. Reservations
and fixed extent sizes are allocated in units of the file system block size.
The file system receiving a copied, moved, or restored file from an archive is
not a VxFS file system. Since other file system types do not support the extent
attributes of the VxFS file system, the attributes of the source file are lost during
the migration.
The file system receiving a copied, moved, or restored file is a VxFS type but
does not have enough free space to satisfy the extent attributes. For example,
consider a 50 KB file and a reservation of 1 MB. If the target file system has 500
KB free, it could easily hold the file but fail to satisfy the reservation.
The file system receiving a copied, moved, or restored file from an archive is a
VxFS type but the different block sizes of the source and target file system make
extent attributes impossible to maintain. For example, consider a source file
system of block size 1024, a target file system of block size 4096, and a file that
has a fixed extent size of 3 blocks (3072 bytes). This fixed extent size adapts
to the source file system but cannot translate onto the target file system.
The same source and target file systems in the preceding example with a file
carrying a fixed extent size of 4 could preserve the attribute; a 4 block (4096
byte) extent on the source file system would translate into a 1 block extent on
the target.
On a system with mixed block sizes, a copy, move, or restoration operation may
or may not succeed in preserving attributes. It is recommended that the same
block size be used for all file systems on a given system.
Section
Administering multi-pathing
with DMP
Chapter
Administering Dynamic
Multi-Pathing
This chapter includes the following topics:
About enabling and disabling I/O for controllers and storage processors
# vxdctl -f enable
# vxdisk -f scandisks
However, a complete scan is initiated if the system configuration has been modified
by changes to:
The following command scans for the devices sdm and sdn:
# vxdisk scandisks device=sdm,sdn
Alternatively, you can specify a ! prefix character to indicate that you want to scan
for all devices except those that are listed.
Note: The ! character is a special character in some shells. The following examples
show how to escape it in a bash shell.
# vxdisk scandisks \!device=sdm,sdn
You can also scan for devices that are connected (or not connected) to a list of
logical or physical controllers. For example, this command discovers and configures
all devices except those that are connected to the specified logical controllers:
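A representative invocation (the controller names c1 and c2 are placeholders):
# vxdisk scandisks \!ctlr=c1,c2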
The next command discovers only those devices that are connected to the specified
physical controller:
# vxdisk scandisks pctlr=c1+c2
disk) devices used by DMP. If the JBOD definition includes a cabinet number, DDL
uses the cabinet number to group the LUNs into enclosures.
See Adding unsupported disk arrays to the DISKS category on page 199.
DMP can provide basic multi-pathing to arrays that comply with the Asymmetric
Logical Unit Access (ALUA) standard, even if there is no ASL or JBOD definition.
DDL claims the LUNs as part of the aluadisk enclosure. The array type is shown
as ALUA. Adding a JBOD definition also enables you to group the LUNs into
enclosures.
Disk categories
Disk arrays that have been certified for use with Dynamic Multi-Pathing (DMP) are
supported by an array support library (ASL), and are categorized by the vendor ID
string that is returned by the disks (for example, HITACHI).
Disks in JBODs that are capable of being multi-pathed by DMP, are placed in the
DISKS category. Disks in unsupported arrays can also be placed in the DISKS
category.
See Adding unsupported disk arrays to the DISKS category on page 199.
Disks in JBODs that do not fall into any supported category, and which are not
capable of being multi-pathed by DMP are placed in the OTHER_DISKS category.
VxVM device list. For other Linux flavors, reboot the system to make Linux recognize
the new disks, and then use the vxdctl enable command to include the new disks
in the VxVM device list.
If you need to remove the latest VRTSaslapm rpm, you can revert to the previously
installed version. For the detailed procedure, refer to the Symantec Storage
Foundation and High Availability Solutions Troubleshooting Guide.
PowerPath     DMP     Array configuration mode
Installed.            Active/Active
                      Active/Passive (A/P), Active/Passive in Explicit
                      Failover mode (A/PF), and ALUA
If any EMCpower disks are configured as foreign disks, use the vxddladm
rmforeign command to remove the foreign definitions, as shown in this example:
# vxddladm rmforeign blockpath=/dev/emcpowera10 \
charpath=/dev/emcpowera10
To allow DMP to receive correct inquiry data, the Common Serial Number (C-bit)
Symmetrix Director parameter must be set to enabled.
List the hierarchy of all the devices discovered by DDL including iSCSI devices.
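A representative command for producing this listing:
# vxddladm list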
HBA information
Field        Description
Driver       Driver controlling the HBA.
Firmware     Firmware version.
Discovery    The discovery method employed for the targets.
State        Whether the device is Online or Offline.
Address      The hardware address.
Use the following command to list all of the HBAs, including iSCSI devices,
configured on the system:
# vxddladm list hbas
Target information
Field     Description
Alias     The alias name, if available.
HBA-ID    The parent HBA.
State     Whether the device is Online or Offline.
Address   The hardware address.
You can filter based on a HBA or port, using the following command:
# vxddladm list targets [hba=hba_name|port=port_name]
For example, to obtain the targets configured from the specified HBA:
# vxddladm list targets hba=c2
TARGET-ID    ALIAS   HBA-ID   STATE    ADDRESS
--------------------------------------------------------------
c2_p0_t0     -       c2       Online   50:0A:09:80:85:84:9D:84
Device information
Field        Description
Device       The device name.
Target-ID    The parent target.
State        Whether the device is Online or Offline.
DDL status   Whether the device is claimed by DDL. If claimed, the output
             also displays the ASL name.
To list the devices configured from a Host Bus Adapter and target
To obtain the devices configured from a particular HBA and target, use the
following command:
# vxddladm list devices target=target_name
Table 9-5
Parameter                   Default value   Minimum value   Maximum value
DataPDUInOrder              yes             no              yes
DataSequenceInOrder         yes             no              yes
DefaultTime2Retain          20              0               3600
DefaultTime2Wait            2               0               3600
ErrorRecoveryLevel          0               0               2
FirstBurstLength            65535           512             16777215
InitialR2T                  yes             no              yes
ImmediateData               yes             no              yes
MaxBurstLength              262144          512             16777215
MaxConnections              1               1               65535
MaxOutStandingR2T           1               1               65535
MaxRecvDataSegmentLength    8182            512             16777215
To get the iSCSI operational parameters on the initiator for a specific iSCSI target
You can use this command to obtain all the iSCSI operational parameters.
# vxddladm getiscsi target=c2_p2_t0
To set the iSCSI operational parameters on the initiator for a specific iSCSI target
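A representative invocation (a sketch; the parameter name and value shown are
placeholders):
# vxddladm setiscsi target=c2_p2_t0 MaxRecvDataSegmentLength=65536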
This command displays the vendor ID (VID), product IDs (PIDs) for the arrays,
array types (for example, A/A or A/P), and array names. The following is sample
output.
# vxddladm listsupport libname=libvxfujitsu.so
ATTR_NAME      ATTR_VALUE
=================================================
LIBNAME        libvxfujitsu.so
VID            vendor
PID            GR710, GR720, GR730
               GR740, GR820, GR840
ARRAY_TYPE     A/A, A/P
ARRAY_NAME     FJ_GR710, FJ_GR720, FJ_GR730
               FJ_GR740, FJ_GR820, FJ_GR840
Before excluding the PowerPath array support library (ASL), you must remove
the devices from PowerPath control.
Verify that the devices on the system are not managed by PowerPath. The
following command displays the devices that are not managed by PowerPath.
# powermt display unmanaged
If any devices on the system do not display, remove the devices from
PowerPath control with the following command:
# powermt unmanage dev=pp_device_name
To exclude support for a disk array library, specify the array library to the
following command.
# vxddladm excludearray libname=libname
You can also exclude support for disk arrays from a particular vendor, as shown
in this example:
# vxddladm excludearray vid=ACME pid=X1
If you have excluded support for all arrays that depend on a particular disk
array library, you can use the includearray keyword to remove the entry from
the exclude list.
# vxddladm includearray libname=libname
This command adds the array library to the database so that the library can
once again be used in device discovery. If vxconfigd is running, you can use
the vxdisk scandisks command to discover the arrays and add their details
to the database.
Use the following command to identify the vendor ID and product ID of the
disks in the array:
# /etc/vx/diag.d/vxscsiinq device_name
where device_name is the device name of one of the disks in the array. Note
the values of the vendor ID (VID) and product ID (PID) in the output from this
command. For Fujitsu disks, also note the number of characters in the serial
number that is displayed.
The following example output shows that the vendor ID is SEAGATE and the
product ID is ST318404LSUN18G.
Vendor id (VID)    : SEAGATE
Product id (PID)   : ST318404LSUN18G
Revision           : 8507
Serial Number      : 0025T0LA3H
Stop all applications, such as databases, from accessing VxVM volumes that
are configured on the array, and unmount all file systems and Storage
Checkpoints that are configured on the array.
where vendorid and productid are the VID and PID values that you found from
the previous step. For example, vendorid might be FUJITSU, IBM, or SEAGATE.
For Fujitsu devices, you must also specify the number of characters in the
serial number as the length argument (for example, 10). If the array is of type
A/A-A, A/P, or A/PF, you must also specify the policy=ap attribute.
Continuing the previous example, the command to define an array of disks of
this type as a JBOD would be:
# vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G
Use the vxdctl enable command to bring the array under VxVM control.
# vxdctl enable
To verify that the array is now supported, enter the following command:
# vxddladm listjbod
The following is sample output from this command for the example array:
VID       PID        SerialNum              CabinetNum             Policy
          (Cmd/PageCode/off/len) (Cmd/PageCode/off/len)
==============================================================
SEAGATE   ALL PIDs   18/-1/36/12            18/-1/10/11            Disk
SUN       SESS01     18/-1/36/12            18/-1/12/11            Disk
The enclosure name and type for the array are both shown as being set to
Disk. You can use the vxdisk list command to display the disks in the array:
# vxdisk list
DEVICE    TYPE         DISK   GROUP   STATUS
Disk_0    auto:none    -      -       online invalid
Disk_1    auto:none    -      -       online invalid
...
To verify that the DMP paths are recognized, use the vxdmpadm getdmpnode
command as shown in the following sample output for the example array:
# vxdmpadm getdmpnode enclosure=Disk
NAME      STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
=====================================================
Disk_0    ENABLED   Disk         2       2      0      Disk
Disk_1    ENABLED   Disk         2       2      0      Disk
...
The output in this example shows that there are two paths to the disks in the
array.
For more information, enter the command vxddladm help addjbod.
See the vxddladm(1M) manual page.
See the vxdmpadm(1M) manual page.
Use the vxddladm command with the rmjbod keyword. The following example
illustrates the command for removing disks that have the vendor id of SEAGATE:
# vxddladm rmjbod vid=SEAGATE
Foreign devices
The Device Discovery Layer (DDL) may not be able to discover some devices that
are controlled by third-party drivers, such as those that provide multi-pathing or
RAM disk capabilities. For these devices it may be preferable to use the multi-pathing
capability that is provided by the third-party drivers for some arrays rather than
using Dynamic Multi-Pathing (DMP). Such foreign devices can be made available
as simple disks to Veritas Volume Manager (VxVM) by using the vxddladm
addforeign command. This also has the effect of bypassing DMP for handling I/O.
The following example shows how to add entries for block and character devices
in the specified directories:
# vxddladm addforeign blockdir=/dev/foo/dsk chardir=/dev/foo/rdsk
By default, this command suppresses any entries for matching devices in the
OS-maintained device tree that are found by the autodiscovery mechanism. You
can override this behavior by using the -f and -n options as described on the
vxddladm(1M) manual page.
After adding entries for the foreign devices, use either the vxdisk scandisks or
the vxdctl enable command to discover the devices as simple disks. These disks
then behave in the same way as autoconfigured disks.
The foreign device feature was introduced in VxVM 4.0 to support non-standard
devices such as RAM disks, some solid state disks, and pseudo-devices such as
EMC PowerPath.
Foreign device support has the following limitations:
Enclosure information is not available to VxVM. This can reduce the availability
of any disk groups that are created using such devices.
The I/O fencing and Cluster File System features are not supported for foreign
devices.
If a suitable ASL is available and installed for an array, these limitations are removed.
See About third-party driver coexistence on page 189.
Select the operation you want to perform from the following options:
Option 1
Suppresses all paths through the specified controller from the view of
VxVM.
Option 2
Option 3
Suppresses disks from the view of VxVM that match a specified Vendor
ID and Product ID combination.
The root disk cannot be suppressed.
The operation fails if the VID:PID of an external disk is the same VID:PID
as the root disk and the root disk is encapsulated under VxVM.
Option 4
Deprecated
Suppresses all but one path to a disk. Only one path is made visible to
VxVM.
This operation is deprecated, since it can lead to unsupported
configurations.
Option 5
Deprecated
Option 6
Deprecated
Option 8
Select the operation you want to perform from the following options:
Option 1
Unsuppresses all paths through the specified controller from the view
of VxVM.
Option 2
Option 3
Option 4
Deprecated
Allows multi-pathing of all disks that have paths through the specified
controller.
This operation is deprecated.
Option 6
Deprecated
Option 7
Deprecated
Use the vxdisk path command to display the relationships between the device
paths, disk access names, disk media names, and disk groups on a system
as shown here:
# vxdisk path
SUBPATH   DANAME   DMNAME    GROUP   STATE
sda       sda      mydg01    mydg    ENABLED
sdi       sdi      mydg01    mydg    ENABLED
sdb       sdb      mydg02    mydg    ENABLED
sdj       sdj      mydg02    mydg    ENABLED
.
.
.
This shows that two paths exist to each of the two disks, mydg01 and mydg02,
and also indicates that each disk is in the ENABLED state.
For example, to view multi-pathing information for the device sdl, use the
following command:
# vxdisk list sdl
The output from the vxdisk list command displays the multi-pathing
information, as shown in the following example:
Device:       sdl
devicetag:    sdl
type:         sliced
hostid:       sys1
.
.
.
Multipathing information:
numpaths:     2
sdl           state=enabled    type=secondary
sdp           state=disabled   type=primary
The numpaths line shows that there are 2 paths to the device. The next two
lines in the "Multipathing information" section of the output show that one path
is active (state=enabled) and that the other path has failed (state=disabled).
The type field is shown for disks on Active/Passive type disk arrays such as
the EMC CLARiiON, Hitachi HDS 9200 and 9500, Sun StorEdge 6xxx, and
Sun StorEdge T3 array. This field indicates the primary and secondary paths
to the disk.
The type field is not displayed for disks on Active/Active type disk arrays such
as the EMC Symmetrix, Hitachi HDS 99xx and Sun StorEdge 99xx Series, and
IBM ESS Series. Such arrays have no concept of primary and secondary paths.
List all paths under a DMP device node, HBA controller, enclosure, or array
port.
See Displaying paths controlled by a DMP node, controller, enclosure, or array
port on page 214.
Display information about array ports that are connected to the storage
processors of enclosures.
See Displaying information about array ports on page 219.
Display or set the I/O policy that is used for the paths to an enclosure.
See Specifying the I/O policy on page 233.
Enable or disable I/O for a path, HBA controller or array port on the system.
See Disabling I/O for paths, controllers, array ports, or DMP nodes on page 239.
Rename an enclosure.
See Renaming an enclosure on page 242.
The physical path is specified by argument to the nodename attribute, which must
be a valid path listed in the device directory.
The device directory is the /dev directory.
The command displays output similar to the following example output.
# vxdmpadm getdmpnode nodename=sdbc
NAME              STATE     ENCLR-TYPE     PATHS  ENBL  DSBL  ENCLR-NAME
====================================================================
emc_clariion0_89  ENABLED   EMC_CLARiiON   6      6     0     emc_clariion0
Use the -v option to display the LUN serial number and the array volume ID.
# vxdmpadm -v getdmpnode nodename=sdbc
NAME             STATE   ENCLR-TYPE   PATHS ENBL DSBL ENCLR-NAME    SERIAL-NO  ARRAY_VOL_ID
=====================================================================================
emc_clariion0_89 ENABLED EMC_CLARiiON 6     6    0    emc_clariion0 600601601  893
Use the enclosure attribute with getdmpnode to obtain a list of all DMP nodes for
the specified enclosure.
# vxdmpadm getdmpnode enclosure=enc0
NAME   STATE     ENCLR-TYPE   PATHS  ENBL  DSBL  ENCLR-NAME
==========================================================
sdm    ENABLED   ACME         2      2     0     enc0
sdn    ENABLED   ACME         2      2     0     enc0
sdo    ENABLED   ACME         2      2     0     enc0
sdp    ENABLED   ACME         2      2     0     enc0
Use the dmpnodename attribute with getdmpnode to display the DMP information for
a given DMP node.
# vxdmpadm getdmpnode dmpnodename=emc_clariion0_158
NAME               STATE     ENCLR-TYPE     PATHS  ENBL  DSBL  ENCLR-NAME
==================================================================
emc_clariion0_158  ENABLED   EMC_CLARiiON   1      1     0     emc_clariion0
Use the enclosure attribute with list dmpnode to obtain a list of all DMP nodes
for the specified enclosure.
# vxdmpadm list dmpnode enclosure=enclosurename
For example, the following command displays the consolidated information for all
of the DMP nodes in the enc0 enclosure.
# vxdmpadm list dmpnode enclosure=enc0
Use the dmpnodename attribute with list dmpnode to display the DMP information
for a given DMP node. The DMP node can be specified by name or by specifying
a path name. The detailed information for the specified DMP node includes path
information for each subpath of the listed DMP node.
The path state differentiates between a path that is disabled due to a failure and a
path that has been manually disabled for administrative purposes. A path that has
been manually disabled using the vxdmpadm disable command is listed as
disabled(m).
# vxdmpadm list dmpnode dmpnodename=dmpnodename
For example, the following command displays the consolidated information for the
DMP node emc_clariion0_158.
# vxdmpadm list dmpnode dmpnodename=emc_clariion0_158
dmpdev          = emc_clariion0_158
state           = enabled
enclosure       = emc_clariion0
cab-sno         = CK200070400359
asl             = libvxCLARiiON.so
vid             = DGC
pid             = DISK
array-name      = EMC_CLARiiON
array-type      = CLR-A/PF
iopolicy        = MinimumQ
avid            = 158
lun-sno         = 600601601A141B001D4A32F92B49DE11
udid            = DGC%5FDISK%5FCK200070400359%5F600601601A141B001D4A32F92B49DE11
dev-attr        = lun
###path         = name state type transport ctlr hwpath aportID aportWWN attr
path            = sdck enabled(a) primary FC c2 c2 A5 50:06:01:61:41:e0:3b:33 -
path            = sdde enabled(a) primary FC c2 c2 A4 50:06:01:60:41:e0:3b:33 -
path            = sdcu enabled secondary FC c2 c2 B4 50:06:01:68:41:e0:3b:33 -
path            = sdbm enabled secondary FC c3 c3 B4 50:06:01:68:41:e0:3b:33 -
path            = sdbw enabled(a) primary FC c3 c3 A4 50:06:01:60:41:e0:3b:33 -
path            = sdbc enabled(a) primary FC c3 c3 A5 50:06:01:61:41:e0:3b:33 -
For example:
# vxdmpadm getlungroup dmpnodename=sdq
NAME   STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
===============================================================
sdo    ENABLED   ACME         2       2      0      enc1
sdp    ENABLED   ACME         2       2      0      enc1
sdq    ENABLED   ACME         2       2      0      enc1
sdr    ENABLED   ACME         2       2      0      enc1
NAME   STATE[A]     PATH-TYPE[M]   DMPNODENAME          ENCLR-NAME      CTLR   ATTRS
=============================================================================
sdaf   ENABLED(A)   PRIMARY        ams_wms0_130         ams_wms0        c2     -
sdc    ENABLED      SECONDARY      ams_wms0_130         ams_wms0        c3     -
sdb    ENABLED(A)   -              disk_24              disk            c0     -
sda    ENABLED(A)   -              disk_25              disk            c0     -
sdav   ENABLED(A)   PRIMARY        emc_clariion0_1017   emc_clariion0   c3     -
sdbf   ENABLED      SECONDARY      emc_clariion0_1017   emc_clariion0   c3     -
For A/A arrays, all enabled paths that are available for I/O are shown as ENABLED(A).
For A/P arrays in which the I/O policy is set to singleactive, only one path is
shown as ENABLED(A). The other paths are enabled but not available for I/O. If the
I/O policy is not set to singleactive, DMP can use a group of paths (all primary
or all secondary) for I/O, which are shown as ENABLED(A).
See Specifying the I/O policy on page 233.
Paths that are in the DISABLED state are not available for I/O operations.
A path that was manually disabled by the system administrator displays as
DISABLED(M). A path that failed displays as DISABLED.
You can use getsubpaths to obtain information about all the paths that are
connected to a particular HBA controller:
# vxdmpadm getsubpaths ctlr=c2
NAME   STATE[-]     PATH-TYPE[-]   DMPNODENAME   ENCLR-TYPE   ENCLR-NAME   ATTRS
===================================================================
sdk    ENABLED(A)   PRIMARY        sdk           ACME         enc0         -
sdl    ENABLED(A)   PRIMARY        sdl           ACME         enc0         -
sdm    DISABLED     SECONDARY      sdm           ACME         enc0         -
sdn    ENABLED      SECONDARY      sdn           ACME         enc0         -
You can also use getsubpaths to obtain information about all the paths that are
connected to a port on an array. The array port can be specified by the name of
the enclosure and the array port ID, or by the WWN identifier of the array port:
# vxdmpadm getsubpaths enclosure=enclosure portid=portid
# vxdmpadm getsubpaths pwwn=pwwn
For example, to list subpaths through an array port through the enclosure and the
array port ID:
# vxdmpadm getsubpaths enclosure=emc_clariion0 portid=A5
NAME   STATE[A]     PATH-TYPE[M]   DMPNODENAME          ENCLR-NAME      CTLR   ATTRS
================================================================================
sdav   ENABLED(A)   PRIMARY        emc_clariion0_1017   emc_clariion0   c3     -
sdcd   ENABLED(A)   PRIMARY        emc_clariion0_1017   emc_clariion0   c2     -
sdau   ENABLED(A)   PRIMARY        emc_clariion0_1018   emc_clariion0   c3     -
sdcc   ENABLED(A)   PRIMARY        emc_clariion0_1018   emc_clariion0   c2     -
For example, to list subpaths through an array port through the WWN:
# vxdmpadm getsubpaths pwwn=50:06:01:61:41:e0:3b:33
NAME   STATE[A]     PATH-TYPE[M]   CTLR-NAME   ENCLR-TYPE     ENCLR-NAME      ATTRS
================================================================================
sdav   ENABLED(A)   PRIMARY        c3          EMC_CLARiiON   emc_clariion0   -
sdcd   ENABLED(A)   PRIMARY        c2          EMC_CLARiiON   emc_clariion0   -
sdau   ENABLED(A)   PRIMARY        c3          EMC_CLARiiON   emc_clariion0   -
sdcc   ENABLED(A)   PRIMARY        c2          EMC_CLARiiON   emc_clariion0   -
You can use getsubpaths to obtain information about all the subpaths of an
enclosure.
# vxdmpadm getsubpaths enclosure=enclosure_name [ctlr=ctlrname]
sdau   ENABLED(A)   PRIMARY     emc_clariion0_1018   emc_clariion0   c3
sdbe   ENABLED      SECONDARY   emc_clariion0_1018   emc_clariion0   c3
This output shows that the controller c1 is connected to disks that are not in any
recognized DMP category as the enclosure type is OTHER.
The other controllers are connected to disks that are in recognized DMP categories.
All the controllers are in the ENABLED state, which indicates that they are available
for I/O operations.
The state DISABLED is used to indicate that controllers are unavailable for I/O
operations. The unavailability can be due to a hardware failure or due to I/O
operations being disabled on that controller by using the vxdmpadm disable
command.
or
# vxdmpadm listctlr type=ACME
CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME   PATH_COUNT
===============================================================
c2          ACME         ENABLED   enc0         10
c3          ACME         ENABLED   enc0         24
The vxdmpadm getctlr command displays HBA vendor details and the Controller
ID. For iSCSI devices, the Controller ID is the IQN or IEEE-format based name.
For FC devices, the Controller ID is the WWN. Because the WWN is obtained from
ESD, this field is blank if ESD is not running. ESD is a daemon process used to
notify DDL about occurrence of events. The WWN shown as Controller ID maps
to the WWN of the HBA port associated with the host controller.
# vxdmpadm getctlr c5
LNAME   PNAME   VENDOR   CTLR-ID
===================================================
c5      c5      qlogic   20:07:00:a0:b8:17:e1:37
To display the attributes for all enclosures in a system, use the following DMP
command:
# vxdmpadm listenclosure all
ENCLR_NAME        ENCLR_TYPE       ENCLR_SNO       STATUS      ARRAY_TYPE  LUN_COUNT  FIRMWARE
====================================================================================
Disk              Disk             DISKS           CONNECTED   Disk        6          -
emc0              EMC              000292601383    CONNECTED   A/A         1          5875
hitachi_usp-vm0   Hitachi_USP-VM   25847           CONNECTED   A/A         1          6008
emc_clariion0     EMC_CLARiiON     CK20007040035   CONNECTED   CLR-A/PF    2          0324
The following form of the command displays information about all of the array ports
within the specified enclosure:
# vxdmpadm getportids enclosure=enclr_name
The following example shows information about the array port that is accessible
through DMP node sdg:
# vxdmpadm getportids dmpnodename=sdg
NAME   ENCLR-NAME   ARRAY-PORT-ID   pWWN
==============================================================
sdg    HDS9500V0    1A              20:00:00:E0:8B:06:5F:19
For example, consider the following disks in an EMC Symmetrix array controlled
by PowerPath, which are known to DMP:
# vxdisk list
DEVICE        TYPE          DISK     GROUP   STATUS
emcpower10    auto:sliced   disk1    ppdg    online
emcpower11    auto:sliced   disk2    ppdg    online
emcpower12    auto:sliced   disk3    ppdg    online
emcpower13    auto:sliced   disk4    ppdg    online
emcpower14    auto:sliced   disk5    ppdg    online
emcpower15    auto:sliced   disk6    ppdg    online
emcpower16    auto:sliced   disk7    ppdg    online
emcpower17    auto:sliced   disk8    ppdg    online
emcpower18    auto:sliced   disk9    ppdg    online
emcpower19    auto:sliced   disk10   ppdg    online
The following command displays the paths that DMP has discovered, and which
correspond to the PowerPath-controlled node, emcpower10:
# vxdmpadm getsubpaths tpdnodename=emcpower10
NAME   TPDNODENAME    PATH-TYPE[-]   DMP-NODENAME   ENCLR-TYPE   ENCLR-NAME
===================================================================
sdq    emcpower10s2   -              emcpower10     PP_EMC       pp_emc0
sdr    emcpower10s2   -              emcpower10     PP_EMC       pp_emc0
Conversely, the next command displays information about the PowerPath node
that corresponds to the path, sdq, discovered by DMP:
# vxdmpadm gettpdnode nodename=sdq
NAME           STATE     PATHS   ENCLR-TYPE   ENCLR-NAME
===================================================================
emcpower10s2   ENABLED   2       PP_EMC       pp_emc0
Table 9-6
Category
Description
Storage-based Snapshot/Clone
Storage-based replication
Transport
Each LUN can have one or more of these extended attributes. DDL discovers the
extended attributes during device discovery from the Array Support Library (ASL).
If Veritas Operations Manager (VOM) is present, DDL can also obtain extended
attributes from the VOM Management Server for hosts that are configured as
managed hosts.
The vxdisk -p list command displays DDL extended attributes. For example,
the following command shows attributes of std, fc, and RAID_5 for this LUN:
# vxdisk -p list
DISK           : tagmastore-usp0_0e18
DISKID         : 1253585985.692.rx2600h11
VID            : HITACHI
UDID           : HITACHI%5FOPEN-V%5F02742%5F0E18
REVISION       : 5001
PID            : OPEN-V
PHYS_CTLR_NAME : 0/4/1/1.0x50060e8005274246
LUN_SNO_ORDER  : 411
LUN_SERIAL_NO  : 0E18
LIBNAME        : libvxhdsusp.sl
HARDWARE_MIRROR: no
DMP_DEVICE     : tagmastore-usp0_0e18
DDL_THIN_DISK  : thick
DDL_DEVICE_ATTR: std fc RAID_5
CAB_SERIAL_NO  : 02742
ATYPE          : A/A
ARRAY_VOLUME_ID: 0E18
ARRAY_PORT_PWWN: 50:06:0e:80:05:27:42:46
ANAME          : TagmaStore-USP
TRANSPORT      : FC
The vxdisk -x attribute -p list command displays the one-line listing for the
property list and the attributes. The following example shows two Hitachi LUNs that
support Thin Reclamation through the attribute hdprclm:
# vxdisk -x DDL_DEVICE_ATTR -p list
DEVICE                 DDL_DEVICE_ATTR
tagmastore-usp0_0a7a   std fc RAID_5
tagmastore-usp0_065a   hdprclm fc
tagmastore-usp0_065b   hdprclm fc
You can specify multiple -x options in the same command to display multiple entries.
For example:
# vxdisk -x DDL_DEVICE_ATTR -x VID -p list
DEVICE                 DDL_DEVICE_ATTR   VID
tagmastore-usp0_0a7a   std fc RAID_5     HITACHI
tagmastore-usp0_0a7b   std fc RAID_5     HITACHI
tagmastore-usp0_0a78   std fc RAID_5     HITACHI
tagmastore-usp0_0a79   std fc RAID_5     HITACHI
tagmastore-usp0_065a   hdprclm fc        HITACHI
tagmastore-usp0_065b   hdprclm fc        HITACHI
tagmastore-usp0_065c   hdprclm fc        HITACHI
tagmastore-usp0_065d   hdprclm fc        HITACHI

TYPE   DISK   GROUP   STATUS   OS_NATIVE_NAME   ATTR
auto   -      -       online   c10t0d2          std fc RAID_5
auto   -      -       online   c10t0d3          std fc RAID_5
auto   -      -       online   c10t0d0          std fc RAID_5
auto   -      -       online   c13t2d7          hdprclm fc
auto   -      -       online   c13t3d0          hdprclm fc
auto   -      -       online   c13t3d1          hdprclm fc
For a list of ASLs that support Extended Attributes, and descriptions of these
attributes, refer to the hardware compatibility list (HCL) at the following URL:
http://www.symantec.com/docs/TECH211575
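To enable the gathering of I/O statistics, a representative form of the command (a
sketch; the optional memory attribute and the filter values are described below) is:
# vxdmpadm iostat start [memory=size] {filter}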
where:
all                                     All devices.
product=VID:PID                         All devices with the specified VID:PID.
ctlr=ctlrname                           All devices through the given controller.
dmpnodename=diskname                    All paths under the specified DMP node.
dmpnodename=diskname path=\!pathname    All paths under the DMP node except the one
                                        specified.
The memory attribute limits the maximum amount of memory that is used to record
I/O statistics for each CPU. The default limit is 32k (32 kilobytes) per CPU.
To reset the I/O counters to zero, use this command:
# vxdmpadm iostat reset
To display the accumulated statistics at regular intervals, use the following command:
# vxdmpadm iostat show {filter} [interval=seconds [count=N]]
The above command displays I/O statistics for the devices specified by the filter.
The filter is one of the following:
all
ctlr=ctlr-name
dmpnodename=dmp-node
pathname=path-name
pwwn=array-port-wwn[ctlr=ctlr-name]
Use the interval and count attributes to specify the interval in seconds between
displaying the I/O statistics, and the number of lines to be displayed. The actual
interval may be smaller than the value specified if insufficient memory is available
to record the statistics.
DMP also provides a groupby option to display cumulative I/O statistics, aggregated
by the specified criteria.
See Displaying cumulative I/O statistics on page 225.
To disable the gathering of statistics, enter this command:
# vxdmpadm iostat stop
To compare I/O load across HBAs, enclosures, or array ports, use the groupby
clause with the specified attribute.
To analyze I/O load across a given I/O channel (HBA to array port link), use
filter by HBA and PWWN or enclosure and array port.
To analyze I/O load distribution across links to an HBA, use filter by HBA and
groupby array port.
Use the following format of the iostat command to analyze the I/O loads:
# vxdmpadm [-u unit] iostat show [groupby=criteria] {filter}
[interval=seconds [count=N]]
The above command displays I/O statistics for the devices specified by the filter.
The filter is one of the following:
all
ctlr=ctlr-name
dmpnodename=dmp-node
pathname=path-name
pwwn=array-port-wwn[ctlr=ctlr-name]
The groupby criteria is one of the following:
arrayport
ctlr
dmpnode
enclosure
meaning that the small values are displayed in small units and the larger values
are displayed in bigger units, keeping significant digits constant. You can specify
the units in which the statistics data is displayed. The -u option accepts the following
options:
h or H       Displays the statistics in human-readable units.
bytes | b    Displays the statistics in bytes.
us           Displays average I/O time in microseconds.
To group by controller:
# vxdmpadm [-u unit] iostat show groupby=ctlr [ all | ctlr=ctlr ]
For example:
# vxdmpadm iostat show groupby=ctlr ctlr=c5
CTLRNAME   OPERATIONS        BLOCKS          AVG TIME(ms)
           READS   WRITES    READS  WRITES   READS   WRITES
c5         224     14        54     7        4.20    11.10
To group by arrayport:
# vxdmpadm [-u unit] iostat show groupby=arrayport [ all \
| pwwn=array_pwwn | enclosure=enclr portid=array-port-id ]
For example:
# vxdmpadm -u m iostat show groupby=arrayport \
enclosure=HDS9500-ALUA0 portid=1A
PORTNAME   OPERATIONS        BYTES           AVG TIME(ms)
           READS   WRITES    READS  WRITES   READS   WRITES
1A         743     1538      11m    24m      17.13   8.61

To group by enclosure:
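A representative form of the command (a sketch paralleling the arrayport form above):
# vxdmpadm [-u unit] iostat show groupby=enclosure [ all | enclosure=enclr_name ]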
For example:
# vxdmpadm -u h iostat show groupby=enclosure enclosure=EMC_CLARiiON0
ENCLOSURENAME   OPERATIONS        BLOCKS            AVG TIME(ms)
                READS   WRITES    READS    WRITES   READS   WRITES
EMC_CLARiiON0   743     1538      11392k   24176k   17.13   8.61
You can also filter out entities for which all data entries are zero. This option is
especially useful in a cluster environment that contains many failover devices. You
can display only the statistics for the active paths.
To filter all zero entries from the output of the iostat show command:
# vxdmpadm [-u unit] -z iostat show [all|ctlr=ctlr_name |
dmpnodename=dmp_device_name | enclosure=enclr_name [portid=portid] |
pathname=path_name|pwwn=port_WWN][interval=seconds [count=N]]
For example:
# vxdmpadm -z iostat show dmpnodename=emc_clariion0_893
cpu usage = 9852us    per cpu memory = ...
            OPERATIONS
PATHNAME    READS   WRITES
sdbc        32      0
sdbw        27      0
sdck        8       0
sdde        11      0
If you specify the -q option, vxdmpadm displays the I/Os queued in DMP for a specified
DMP node that were sent to underlying layers. If a path or controller is specified,
the -q option displays I/Os that were sent to the given path or controller and not
yet returned to DMP.
See the vxdmpadm(1m) manual page for more information about the vxdmpadm
iostat command.
To display queued I/O counts on a DMP node:
# vxdmpadm -q iostat show [filter] [interval=n [count=m]]
For example:
# vxdmpadm -q iostat show dmpnodename=emc_clariion0_352
cpu usage = 338us
DMPNODENAME
emc_clariion0_352
To display the count of I/Os that returned with errors on a DMP node, path, or
controller:
# vxdmpadm -e iostat show [filter] [interval=n [count=m]]
For example, to show the I/O counts that returned errors on a path:
# vxdmpadm -e iostat show pathname=sdo
cpu usage = 637us
PATHNAME
sdo
The next command displays the current statistics including the accumulated total
numbers of read and write operations, and the kilobytes read and written, on all
paths.
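A representative invocation (a sketch; the -u k option displays the statistics in
kilobytes):
# vxdmpadm -u k iostat show all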
The output lists, for each path, the accumulated OPERATIONS (reads and writes), the
KBYTES read and written, and the AVG TIME(ms). In this sample, six paths each show 87
reads, 44544k read, no writes, and average times of 0.00 ms; the counters for the
remaining paths are zero.
The following command changes the amount of memory that vxdmpadm can use to
accumulate the statistics:
# vxdmpadm iostat start memory=4096
The displayed statistics can be filtered by path name, DMP node name, and
enclosure name (note that the per-CPU memory has changed following the previous
command):
# vxdmpadm -u k iostat show pathname=sdk
The output shows one line of statistics for the path sdk; similar filtered listings
show a line for the path sdf when filtering by DMP node name or by enclosure name.
You can also specify the number of times to display the statistics and the time
interval. Here the incremental statistics for a path are displayed twice with a 2-second
interval:
# vxdmpadm iostat show pathname=sdk interval=2 count=2
The output shows two lines of statistics for the path sdk, one per interval, with
average times of 0.00 ms.
nomanual
Restores the original primary or secondary attributes of a path.
nopreferred
Restores the normal priority of a path.
preferred
[priority=N]
Specifies a path as preferred, and optionally assigns a priority number to it. The
priority number must be an integer that is greater than or equal to one. Higher
priority numbers indicate that a path is able to carry a greater I/O load.
primary
Defines a path as being the primary path for a JBOD disk array. The
following example specifies a primary path for a JBOD disk array:
# vxdmpadm setattr path sdm pathtype=primary
secondary
Defines a path as being the secondary path for a JBOD disk array. The
following example specifies a secondary path for a JBOD disk array:
# vxdmpadm setattr path sdn pathtype=secondary
standby
Marks a standby (failover) path that it is not used for normal I/O
scheduling. This path is used if there are no active paths available for
I/O. The next example specifies a standby path for an A/P-C disk array:
# vxdmpadm setattr path sde pathtype=standby
For example, to list the devices with fewer than 3 enabled paths, use the following
command:
# vxdmpadm getdmpnode enclosure=EMC_CLARiiON0 redundancy=3
NAME                STATE     ENCLR-TYPE     PATHS   ENBL   DSBL   ENCLR-NAME
=====================================================================
emc_clariion0_162   ENABLED   EMC_CLARiiON   3       2      1      emc_clariion0
emc_clariion0_182   ENABLED   EMC_CLARiiON   2       2      0      emc_clariion0
emc_clariion0_184   ENABLED   EMC_CLARiiON   3       2      1      emc_clariion0
emc_clariion0_186   ENABLED   EMC_CLARiiON   2       2      0      emc_clariion0
To display the minimum redundancy level for a particular device, use the vxdmpadm
getattr command, as follows:
# vxdmpadm getattr enclosure|arrayname|arraytype \
component-name redundancy
For example, to show the minimum redundancy level for the enclosure
HDS9500-ALUA0:
# vxdmpadm getattr enclosure HDS9500-ALUA0 redundancy
ENCLR_NAME      DEFAULT   CURRENT
=============================================
HDS9500-ALUA0   0         4
Use the vxdmpadm setattr command with the redundancy attribute as follows:
# vxdmpadm setattr enclosure|arrayname|arraytype component-name
redundancy=value
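For example, the following sketch (the enclosure name and value are illustrative) sets
the minimum redundancy level to 2 for the enclosure HDS9500-ALUA0:
# vxdmpadm setattr enclosure HDS9500-ALUA0 redundancy=2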
The next example displays the setting of partitionsize for the enclosure enc0,
on which the balanced I/O policy with a partition size of 2MB has been set:
# vxdmpadm getattr enclosure enc0 partitionsize
ENCLR_NAME   DEFAULT   CURRENT
---------------------------------------
enc0         512       4096
Warning: I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are
persistent across reboots of the system.
Do not edit this file yourself.
Table 9-7 describes the I/O policies that may be set.
Table 9-7
Policy
Description
adaptive
This policy attempts to maximize overall I/O throughput from/to the disks by dynamically
scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time.
For example, I/O from/to a database may exhibit both long transfers (table scans) and
short transfers (random look ups). The policy is also useful for a SAN environment where
different paths may have different number of hops. No further configuration is possible
as this policy is automatically managed by DMP.
In this example, the adaptive I/O policy is set for the enclosure enc1:
# vxdmpadm setattr enclosure enc1 \
iopolicy=adaptive
adaptiveminq
Similar to the adaptive policy, except that I/O is scheduled according to the length of
the I/O queue on each path. The path with the shortest queue is assigned the highest
priority.
balanced
[partitionsize=size]
This policy is designed to optimize the use of caching in disk drives and RAID controllers.
The size of the cache typically ranges from 120KB to 500KB or more, depending on the
characteristics of the particular hardware. During normal operation, the disks (or LUNs)
are logically divided into a number of regions (or partitions), and I/O from/to a given region
is sent on only one of the active paths. Should that path fail, the workload is automatically
redistributed across the remaining paths.
You can use the partitionsize attribute to specify the size for the partition. The partition
size in blocks is adjustable in powers of 2 from 2 up to 2^31. A value that is not a power
of 2 is silently rounded down to the nearest acceptable value.
The default value for the partition size is 512 blocks (256k). Specifying a partition
size of 0 is equivalent to specifying the default partition size of 512 blocks (256k).
The default value can be changed by adjusting the value of the
dmp_pathswitch_blks_shift tunable parameter.
See DMP tunable parameters on page 723.
Note: The benefit of this policy is lost if the value is set larger than the cache size.
For example, the suggested partition size for an Hitachi HDS 9960 A/A array is from
32,768 to 131,072 blocks (16MB to 64MB) for an I/O activity pattern that consists mostly
of sequential reads or writes.
The next example sets the balanced I/O policy with a partition size of 4096 blocks (2MB)
on the enclosure enc0:
# vxdmpadm setattr enclosure enc0 \
iopolicy=balanced partitionsize=4096
minimumq
This policy sends I/O on paths that have the minimum number of outstanding I/O requests
in the queue for a LUN. No further configuration is possible as DMP automatically
determines the path with the shortest queue.
The following example sets the I/O policy to minimumq for a JBOD:
# vxdmpadm setattr enclosure Disk \
iopolicy=minimumq
This is the default I/O policy for all arrays.
priority
This policy is useful when the paths in a SAN have unequal performance, and you want
to enforce load balancing manually. You can assign priorities to each path based on your
knowledge of the configuration and performance characteristics of the available paths,
and of other aspects of your system.
See Setting the attributes of the paths to an enclosure on page 230.
In this example, the I/O policy is set to priority for all SENA arrays:
# vxdmpadm setattr arrayname SENA \
iopolicy=priority
round-robin
This policy shares I/O equally between the paths in a round-robin sequence. For example,
if there are three paths, the first I/O request would use one path, the second would use
a different path, the third would be sent down the remaining path, the fourth would go
down the first path, and so on. No further configuration is possible as this policy is
automatically managed by DMP.
The next example sets the I/O policy to round-robin for all Active/Active arrays:
# vxdmpadm setattr arraytype A/A \
iopolicy=round-robin
singleactive
This policy routes I/O down the single active path. This policy can be configured for A/P
arrays with one active path per controller, where the other paths are used in case of
failover. If configured for A/A arrays, there is no load balancing across the paths, and
the alternate paths are only used to provide high availability (HA). If the current active
path fails, I/O is switched to an alternate active path. No further configuration is possible
as the single active path is selected by DMP.
The following example sets the I/O policy to singleactive for JBOD disks:
# vxdmpadm setattr arrayname Disk \
iopolicy=singleactive
increase the total I/O throughput. However, this feature should only be enabled if
recommended by the array vendor. It has no effect for array types other than A/A-A
or ALUA.
For example, the following command sets the balanced I/O policy with a partition
size of 4096 blocks (2MB) on the enclosure enc0, and allows scheduling of I/O
requests on the secondary paths:
# vxdmpadm setattr enclosure enc0 iopolicy=balanced \
partitionsize=4096 use_all_paths=yes
The use_all_paths attribute only applies to A/A-A arrays and ALUA arrays. For
other arrays, the above command displays the message:
Attribute is not applicable for this array.
sdm    state=enabled    type=primary
sdn    state=enabled    type=primary
sdo    state=enabled    type=primary
sdp    state=enabled    type=primary
sdq    state=enabled    type=primary
In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and
contains a simple concatenated volume myvol1.
The first step is to enable the gathering of DMP statistics:
# vxdmpadm iostat start
Next, use the dd command to apply an input workload from the volume:
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &
By running the vxdmpadm iostat command to display the DMP statistics for the
device, it can be seen that all I/O is being directed to one path, sdq:
# vxdmpadm iostat show dmpnodename=sdq interval=5 count=2
.
.
.
cpu usage = 11294us per cpu memory = 32768b
              OPERATIONS          KBYTES           AVG TIME(ms)
PATHNAME      READS    WRITES     READS   WRITES   READS    WRITES
sdj           0        0          0       0        0.00     0.00
sdk           0        0          0       0        0.00     0.00
sdl           0        0          0       0        0.00     0.00
sdm           0        0          0       0        0.00     0.00
sdn           0        0          0       0        0.00     0.00
sdo           0        0          0       0        0.00     0.00
sdp           0        0          0       0        0.00     0.00
sdq           10986    0          5493    0        0.41     0.00
The vxdmpadm command is used to display the I/O policy for the enclosure that
contains the device:
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME   DEFAULT    CURRENT
============================================
ENC0         MinimumQ   Single-Active
This shows that the policy for the enclosure is set to singleactive, which explains
why all the I/O is taking place on one path.
To balance the I/O load across the multiple primary paths, the policy is set to
round-robin as shown here:
# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME   DEFAULT    CURRENT
============================================
ENC0         MinimumQ   Round-Robin
With the workload still running, the effect of changing the I/O policy to balance the
load across the primary paths can now be seen.
# vxdmpadm iostat show dmpnodename=sdq interval=5 count=2
.
.
.
cpu usage = 14403us per cpu memory = 32768b
              OPERATIONS         KBYTES           AVG TIME(ms)
PATHNAME      READS   WRITES     READS   WRITES   READS   WRITES
sdj           2041    0          1021    0        0.39    0.00
sdk           1894    0          947     0        0.39    0.00
sdl           2008    0          1004    0        0.39    0.00
sdm           2054    0          1027    0        0.40    0.00
sdn           2171    0          1086    0        0.39    0.00
sdo           2095    0          1048    0        0.39    0.00
sdp           2073    0          1036    0        0.39    0.00
sdq           2042    0          1021    0        0.39    0.00
The enclosure can be returned to the single active I/O policy by entering the following
command:
# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive
If the specified paths have pending I/Os, the vxdmpadm disable command waits
until the I/Os are completed before disabling the paths.
Note: From release 5.0 of Veritas Volume Manager (VxVM), this operation is
supported for controllers that are used to access disk arrays on which
cluster-shareable disk groups are configured.
DMP does not support the operation to disable I/O for the controllers that use
Third-Party Drivers (TPD) for multi-pathing.
To disable I/O for one or more paths, use the following command:
# vxdmpadm [-c|-f] disable path=path_name1[,path_name2,path_nameN]
To disable I/O for the paths connected to one or more HBA controllers, use the
following command:
# vxdmpadm [-c|-f] disable ctlr=ctlr_name1[,ctlr_name2,ctlr_nameN]
To disable I/O for the paths connected to an array port, use one of the following
commands:
# vxdmpadm [-c|-f] disable enclosure=enclr_name portid=array_port_ID
# vxdmpadm [-c|-f] disable pwwn=array_port_WWN
where the array port is specified either by the enclosure name and the array port
ID, or by the array port's worldwide name (WWN) identifier.
The following examples show how to disable I/O on an array port:
# vxdmpadm disable enclosure=HDS9500V0 portid=1A
# vxdmpadm disable pwwn=20:00:00:E0:8B:06:5F:19
To disable I/O for a particular path, specify both the controller and the portID, which
represent the two ends of the fabric:
# vxdmpadm [-c|-f] disable ctlr=ctlr_name enclosure=enclr_name \
portid=array_port_ID
To disable I/O for a particular DMP node, specify the DMP node name.
# vxdmpadm [-c|-f] disable dmpnodename=dmpnode
You can use the -c option to check if there is only a single active path to the disk.
The last path disable operation fails with the -f option, irrespective of whether the
device is in use or not.
To enable I/O for the paths connected to one or more HBA controllers, use the
following command:
# vxdmpadm enable ctlr=ctlr_name1[,ctlr_name2,ctlr_nameN]
To enable I/O for the paths connected to an array port, use one of the following
commands:
# vxdmpadm enable enclosure=enclr_name portid=array_port_ID
# vxdmpadm enable pwwn=array_port_WWN
where the array port is specified either by the enclosure name and the array port
ID, or by the array port's worldwide name (WWN) identifier.
The following are examples of using the command to enable I/O on an array port:
# vxdmpadm enable enclosure=HDS9500V0 portid=1A
# vxdmpadm enable pwwn=20:00:00:E0:8B:06:5F:19
To enable I/O for a particular path, specify both the controller and the portID, which
represent the two ends of the fabric:
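A representative form of the command (a sketch paralleling the disable form shown
earlier):
# vxdmpadm enable ctlr=ctlr_name enclosure=enclr_name \
portid=array_port_ID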
To enable I/O for a particular DMP node, specify the DMP node name.
# vxdmpadm enable dmpnodename=dmpnode
Renaming an enclosure
The vxdmpadm setattr command can be used to assign a meaningful name to an
existing enclosure, for example:
# vxdmpadm setattr enclosure emc0 name=GRP1
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type} \
recoveryoption=fixedretry retrycount=n
The default value of iotimeout is 300 seconds. For some applications such as
Oracle, it may be desirable to set iotimeout to a larger value. The iotimeout value
for DMP should be greater than the I/O service time of the underlying operating
system layers.
Note: The fixedretry and timebound settings are mutually exclusive.
The following example configures time-bound recovery for the enclosure enc0, and
sets the value of iotimeout to 360 seconds:
# vxdmpadm setattr enclosure enc0 recoveryoption=timebound \
iotimeout=360
The next example sets a fixed-retry limit of 10 for the paths to all Active/Active
arrays:
# vxdmpadm setattr arraytype A/A recoveryoption=fixedretry \
retrycount=10
The above command also has the effect of configuring I/O throttling with the default
settings.
See Configuring the I/O throttling mechanism on page 244.
Note: The response to I/O failure settings is persistent across reboots of the system.
The following example shows how to disable I/O throttling for the paths to the
enclosure enc0:
# vxdmpadm setattr enclosure enc0 recoveryoption=nothrottle
The vxdmpadm setattr command can be used to enable I/O throttling on the paths
to a specified enclosure, disk array name, or type of array:
# vxdmpadm setattr \
{enclosure enc-name|arrayname name|arraytype type}\
recoveryoption=throttle [iotimeout=seconds]
If the iotimeout attribute is specified, its argument specifies the time in seconds
that DMP waits for an outstanding I/O request to succeed before invoking I/O
throttling on the path. The default value of iotimeout is 10 seconds. Setting
iotimeout to a larger value potentially causes more I/O requests to become queued
up in the SCSI driver before I/O throttling is invoked.
The following example sets the value of iotimeout to 60 seconds for the enclosure
enc0:
# vxdmpadm setattr enclosure enc0 recoveryoption=throttle \
iotimeout=60
Path probing will be optimized by probing a subset of paths connected to the same
HBA and array port. The size of the subset of paths can be controlled by the
dmp_probe_threshold tunable. The default value is set to 5.
# vxdmpadm settune dmp_probe_threshold=N
To turn on the feature, set the dmp_sfg_threshold value to the required number
of path failures that triggers SFG.
# vxdmpadm settune dmp_sfg_threshold=N
To see the Subpaths Failover Groups ID, use the following command:
# vxdmpadm getportids {ctlr=ctlr_name | dmpnodename=dmp_device_name \
| enclosure=enclr_name | path=path_name}
The following example shows the vxdmpadm getattr command being used to
display the recoveryoption option values that are set on an enclosure.
# vxdmpadm getattr enclosure HDS9500-ALUA0 recoveryoption
ENCLR-NAME       RECOVERY-OPTION    DEFAULT[VAL]      CURRENT[VAL]
===============================================================
HDS9500-ALUA0    Throttle           Nothrottle[0]     Timebound[60]
HDS9500-ALUA0    Error-Retry        Fixed-Retry[5]    Timebound[20]
The command output shows the default and current policy options and their values.
Table 9-8 summarizes the possible recovery option settings for retrying I/O after
an error.
Table 9-8
Recovery option               Possible settings            Description
recoveryoption=fixedretry     Fixed-Retry (retrycount)     DMP retries a failed I/O request the
                                                           specified number of times.
recoveryoption=timebound      Timebound (iotimeout)        DMP retries a failed I/O request for the
                                                           specified time, in seconds.
Table 9-9 summarizes the possible recovery option settings for throttling I/O.
Table 9-9
Recovery option               Possible settings            Description
recoveryoption=nothrottle     None                         I/O throttling is not used.
recoveryoption=throttle       Timebound (iotimeout)        DMP throttles the path if an outstanding
                                                           I/O request does not return within the
                                                           specified time, in seconds.
check_all
The path restoration thread analyzes all paths in the system and revives the
paths that are back online, as well as disabling the paths that are inaccessible.
The command to configure this policy is:
# vxdmpadm settune dmp_restore_policy=check_all
check_alternate
The path restoration thread checks that at least one alternate path is healthy.
It generates a notification if this condition is not met. This policy avoids inquiry
commands on all healthy paths, and is less costly than check_all in cases
where a large number of paths are available. This policy is the same as
check_all if there are only two paths per DMP node. The command to configure
this policy is:
# vxdmpadm settune dmp_restore_policy=check_alternate
check_disabled
This is the default path restoration policy. The path restoration thread checks
the condition of paths that were previously disabled due to hardware failures,
and revives them if they are back online. The command to configure this policy
is:
# vxdmpadm settune dmp_restore_policy=check_disabled
check_periodic
The path restoration thread performs check_all once in a given number of cycles, and
check_disabled in the remainder of the cycles. The default number of cycles between
running the check_all policy is 10.
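The command to configure this policy follows the same pattern as the other restore policies:
# vxdmpadm settune dmp_restore_policy=check_periodic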
The dmp_restore_interval tunable parameter specifies how often the path
restoration thread examines the paths. For example, the following command sets
the polling interval to 400 seconds:
# vxdmpadm settune dmp_restore_interval=400
The settings are immediately applied and are persistent across reboots. Use the
vxdmpadm gettune command to view the current settings.
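For example, the following commands display the current restore policy and polling interval (a sketch using the tunables discussed in this section):
# vxdmpadm gettune dmp_restore_policy
# vxdmpadm gettune dmp_restore_interval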
See DMP tunable parameters on page 723.
If the vxdmpadm start restore command is given without specifying a policy or
interval, the path restoration thread is started with the persistent policy and interval
settings previously set by the administrator with the vxdmpadm settune command.
If the administrator has not set a policy or interval, the system defaults are used.
The system default restore policy is check_disabled. The system default interval
is 300 seconds.
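For example, the following command starts the path restoration thread with an explicit policy and polling interval (a sketch; both attributes are optional):
# vxdmpadm start restore policy=check_all interval=400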
Warning: Decreasing the interval below the system default can adversely affect
system performance.
Warning: Automatic path failback stops if the path restoration thread is stopped.
Select an I/O path when multiple paths to a disk within the array are available.
DMP supplies default procedures for these functions when an array is registered.
An APM may modify some or all of the existing procedures that DMP provides, or
that another version of the APM provides.
You can use the following command to display all the APMs that are configured for
a system:
# vxdmpadm listapm all
The output from this command includes the file name of each module, the supported
array type, the APM name, the APM version, and whether the module is currently
loaded and in use.
To see detailed information for an individual module, specify the module name as
the argument to the command:
# vxdmpadm listapm module_name
The optional configuration attributes and their values are specific to the APM for
an array. Consult the documentation from the array vendor for details.
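As a sketch, adding and configuring an APM typically takes the following form, where the module name and the attribute names and values depend on the APM:
# vxdmpadm -a cfgapm module_name [attr1=value1 ...]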
Note: By default, DMP uses the most recent APM that is available. Specify the -u
option instead of the -a option if you want to force DMP to use an earlier version
of the APM. The current version of an APM is replaced only if it is not in use.
Specify the -r option to remove an APM that is not currently loaded:
# vxdmpadm -r cfgapm module_name
Chapter 10
Dynamic Reconfiguration of devices
This chapter includes the following topics:
If the device is part of a disk group, move the disk out of the disk group.
# vxdg -g dgname rmdisk daname
# vxdisk rm da-name
For example:
# vxdisk rm eva4k6k0_0
For LUNs using Linux LVM over DMP devices, remove the device from the
LVM volume group.
# vgreduce vgname devicepath
Type list or press Return to display a list of LUNs that are available for removal.
A LUN is available for removal if it is not in a disk group and the state is online,
nolabel, online invalid, or online thinrclm.
The following shows an example output:
Select disk devices to remove: [<pattern-list>,all,list]: list
LUN(s) available for removal:
eva4k6k0_0
eva4k6k0_1
eva4k6k0_2
eva4k6k0_3
eva4k6k0_4
emc0_017e
Return to the Dynamic Reconfiguration tool and select y to continue the removal
process.
DMP completes the removal of the device from VxVM usage. Output similar
to the following displays:
Luns Removed
------------
emc0_017e
DMP updates the operating system device tree and the VxVM device tree.
When the prompt displays, add the LUNs from the array.
If the device is part of a disk group, move the disk out of the disk group.
# vxdg -g dgname rmdisk daname
For example:
# vxdisk rm eva4k6k0_0
For LUNs using Linux LVM over DMP devices, remove the device from the
LVM volume group.
# vgreduce vgname devicepath
Return to the Dynamic Reconfiguration tool and select y to continue the removal.
After the removal completes successfully, the Dynamic Reconfiguration tool
prompts you to add a LUN.
When the prompt displays, add the LUNs from the array.
Resizing should only be performed on LUNs that preserve data. Consult the array
documentation to verify that data preservation is supported and has been qualified.
The operation also requires that only storage at the end of the LUN is affected.
Data at the beginning of the LUN must not be altered. No attempt is made to verify
the validity of pre-existing data on the LUN. The operation should be performed on
the host where the disk group is imported (or on the master node for a cluster-shared
disk group).
VxVM does not support resizing of LUNs that are not part of a disk group. It is not
possible to resize LUNs that are in the boot disk group (aliased as bootdg), in a
deported disk group, or that are offline, uninitialized, being reinitialized, or in an
error state.
VxVM does not support resizing of a disk with the VxVM cdsdisk layout to a size
greater than 1 TB if the disk group version is less than 160. VxVM added support
for cdsdisk disks greater than 1 TB starting in disk group version 160.
When a disk is resized from the array side, you must also resize the disk from the
VxVM side to make VxVM aware of the new size.
Use the following form of the vxdisk command to make VxVM aware of the new
size of a LUN that has been resized:
# vxdisk [-f] [-g diskgroup] resize {accessname|medianame} \
[length=value]
If a disk media name rather than a disk access name is specified, a disk group
name is required. Specify a disk group with the -g option or configure a default disk
group. If the default disk group is not configured, the above command generates
an error message.
Following a resize operation to increase the length that is defined for a device,
additional disk space on the device is available for allocation. You can optionally
specify the new size by using the length attribute.
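For example, the following command updates VxVM with the new size of a resized LUN (using a hypothetical disk group and the disk access name from the earlier listing):
# vxdisk -g mydg resize emc0_017e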
Any volumes on the device should only be grown after the LUN itself has first been
grown.
Warning: Do not perform this operation when replacing a physical disk with a disk
of a different size as data is not preserved.
Before shrinking a LUN, first shrink any volumes on the LUN or move those volumes
off of the LUN. Then, resize the device using vxdisk resize. Finally, resize the
LUN itself using the storage array's management utilities. By default, the resize
fails if any subdisks would be disabled as a result of their being removed in whole
or in part during a shrink operation.
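As a sketch of the shrink sequence, assuming a hypothetical volume datavol in the disk group mydg on the LUN emc0_017e, shrink the volume first and then the VxVM view of the device, before shrinking the LUN on the array:
# vxresize -g mydg datavol 10g
# vxdisk -g mydg resize emc0_017e length=20g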
If the device that is being resized has the only valid configuration copy for a disk
group, the -f option may be specified to forcibly resize the device. Note the following
exception. For disks with the VxVM cdsdisk layout, disks larger than 1 TB in size
have a different internal layout than disks smaller than 1 TB. Therefore, resizing a
cdsdisk disk from less than 1 TB to greater than 1 TB requires special care if the
disk group only has one disk. In this case, you must add a second disk (of any size)
to the disk group prior to performing the vxdisk resize command on the original
disk. You can remove the second disk from the disk group after the resize operation
has completed.
Caution: Resizing a device that contains the only valid configuration copy for a disk
group can result in data loss if a system crash occurs during the resize.
Resizing a virtual disk device is a non-transactional operation outside the control
of VxVM. This means that the resize command may have to be re-issued following
a system crash. In addition, a system crash may leave the private region on the
device in an unusable state. If this occurs, the disk must be reinitialized, reattached
to the disk group, and its data resynchronized or recovered from a backup.
If the device is part of a disk group, move the disk out of the disk group.
# vxdg -g dgname rmdisk daname
For example:
# vxdisk rm eva4k6k0_0
For LUNs using Linux LVM over DMP devices, remove the device from the
LVM volume group.
# vgreduce vgname devicepath
DMP does not fail the I/Os for up to 300 seconds, or until the I/O succeeds, whichever comes first.
To verify which arrays support Online Controller Upgrade or NDU, see the hardware
compatibility list (HCL) at the following URL:
http://www.symantec.com/docs/TECH211575
Chapter 11
Managing devices
This chapter includes the following topics:
Renaming a disk
Displaying disk information
TYPE             DISK       GROUP      STATUS
auto:sliced      mydg04     mydg       online
auto:sliced      mydg03     mydg       online
auto:sliced      -          -          online invalid
auto:sliced      -          -          online thinrclm
The phrase online invalid in the STATUS line indicates that a disk has not
yet been added to VxVM control. These disks may or may not have been
initialized by VxVM previously. Disks that are listed as online are already
under VxVM control.
To display information about an individual disk
The -v option causes the command to additionally list all tags and tag values
that are defined for the disk. By default, tags are not displayed.
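For example, the following command displays detailed information, including tags, for an individual disk (a sketch using a disk name from the listing above):
# vxdisk -v list mydg04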
Start the vxdiskadm program, and select list (List disk information)
from the main menu.
At the following prompt, enter the name of the device you want to see, or enter
all for a list of all devices:
List disk information
Menu: VolumeManager/Disk/ListDisk
VxVM INFO V-5-2-475 Use this menu operation to display a list of
If you enter all, VxVM displays the device name, disk name, group, and
status of all the devices.
If you enter the name of a device, VxVM displays complete disk information
(including the device name, the type of disk, and information about the
public and private areas of the disk) of that device.
Once you have examined this information, press Return to return to the main
menu.
Changing the disk device naming scheme
Select Change the disk naming scheme from the vxdiskadm main menu to
change the disk-naming scheme that you want SF to use. When prompted,
enter y to change the naming scheme.
OR
Change the naming scheme from the command line. Use the following
command to select enclosure-based naming:
# vxddladm set namingscheme=ebn [persistence={yes|no}] \
[use_avid={yes|no}] [lowercase={yes|no}]
The optional persistence argument allows you to select whether the names
of disk devices that are displayed by SF remain unchanged after disk hardware
has been reconfigured and the system rebooted. By default, enclosure-based
naming is persistent. Operating system-based naming is not persistent by
default.
To change only the naming persistence without changing the naming scheme,
run the vxddladm set namingscheme command for the current naming scheme,
and specify the persistence attribute.
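For example, assuming enclosure-based naming is the current scheme, the following command makes the names persistent without changing the scheme:
# vxddladm set namingscheme=ebn persistence=yes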
By default, the names of the enclosure are converted to lowercase, regardless
of the case of the name specified by the ASL. The enclosure-based device
names are therefore in lowercase. Set the lowercase=no option to suppress
the conversion to lowercase.
For enclosure-based naming, the use_avid option specifies whether the Array
Volume ID is used for the index number in the device name. By default,
use_avid=yes, indicating the devices are named as enclosure_avid. If use_avid
is set to no, DMP devices are named as enclosure_index. The index number
is assigned after the devices are sorted by LUN serial number.
The change is immediate whichever method you use.
See Regenerating persistent device names on page 266.
You can also assign names from an input file. This enables you to customize the
DMP nodes on the system with meaningful names.
To specify a custom name for an enclosure
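As a sketch, the command follows the setattr form shown earlier for renaming an enclosure:
# vxdmpadm setattr enclosure enc_name name=custom_name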
To obtain a file populated with the names of the devices in your configuration,
use the following command:
# vxddladm -l assign names > filename
The sample file shows the format required and serves as a template to specify
your customized names.
You can also use the script vxgetdmpnames to get a sample file populated from
the devices in your configuration.
Modify the file as required. Be sure to maintain the correct format in the file.
To assign the names, specify the name and path of the file to the following
command:
# vxddladm assign names file=pathname
To clear the names, and use the default operating system-based naming or
enclosure-based naming, use the following command:
# vxddladm -c assign names
The -c option clears all user-specified names and replaces them with
autogenerated names.
If the -c option is not specified, existing user-specified names are maintained,
but operating system-based and enclosure-based names are regenerated.
The disk names now correspond to the new path names.
For disk enclosures that are controlled by third-party drivers (TPD) whose
coexistence is supported by an appropriate Array Support Library (ASL), the
default behavior is to assign device names that are based on the TPD-assigned
node names. You can use the vxdmpadm command to switch between these
names and the device names that are known to the operating system:
# vxdmpadm setattr enclosure enclosure_name tpdmode=native|pseudo
The argument to the tpdmode attribute selects names that are based on those
used by the operating system (native), or TPD-assigned node names (pseudo).
The use of this command to change between TPD and operating system-based
naming is illustrated in the following example for the enclosure named EMC0.
In this example, the device-naming scheme is set to OSN.
# vxdisk list
DEVICE        TYPE          DISK      GROUP     STATUS
emcpower10    auto:sliced   disk1     mydg      online
emcpower11    auto:sliced   disk2     mydg      online
emcpower12    auto:sliced   disk3     mydg      online
emcpower13    auto:sliced   disk4     mydg      online
emcpower14    auto:sliced   disk5     mydg      online
emcpower15    auto:sliced   disk6     mydg      online
emcpower16    auto:sliced   disk7     mydg      online
emcpower17    auto:sliced   disk8     mydg      online
emcpower18    auto:sliced   disk9     mydg      online
emcpower19    auto:sliced   disk10    mydg      online
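The listing that follows shows the same disks after the enclosure is switched to operating system-based names; as a sketch, the switch and the redisplay take the following form for the EMC0 enclosure:
# vxdmpadm setattr enclosure EMC0 tpdmode=native
# vxdisk list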
TYPE          DISK      GROUP     STATUS
auto:sliced   disk1     mydg      online
auto:sliced   disk2     mydg      online
auto:sliced   disk3     mydg      online
auto:sliced   disk4     mydg      online
auto:sliced   disk5     mydg      online
auto:sliced   disk6     mydg      online
auto:sliced   disk7     mydg      online
auto:sliced   disk8     mydg      online
auto:sliced   disk9     mydg      online
auto:sliced   disk10    mydg      online
If tpdmode is set to native, the path with the smallest device number is
displayed.
DEVICE              TYPE           DISK                GROUP    STATUS
emc_clariion0_91    auto:cdsdisk   emc_clariion0_91    dg1      online shared
emc_clariion0_92    auto:cdsdisk   emc_clariion0_92    dg1      online shared
emc_clariion0_93    auto:cdsdisk   emc_clariion0_93    dg1      online shared
emc_clariion0_282   auto:cdsdisk   emc_clariion0_282   dg1      online shared
emc_clariion0_283   auto:cdsdisk   emc_clariion0_283   dg1      online shared
emc_clariion0_284   auto:cdsdisk   emc_clariion0_284   dg1      online shared
Adding and removing disks
When initializing multiple disks at one time, it is possible to exclude certain disks
or certain controllers.
You can also exclude certain disks or certain controllers when encapsulating multiple
disks at one time.
To exclude a device from the view of VxVM, select Prevent
multipathing/Suppress devices from VxVM's view from the vxdiskadm main
menu.
Warning: Initialization does not preserve the existing data on the disks.
A disk cannot be initialized if it does not have a valid usable partition table. You
can use the fdisk command to create an empty partition table on a disk as shown
here:
# fdisk /dev/sdX
Command (m for help): o
Command (m for help): w
where /dev/sdX is the name of the disk device, for example, /dev/sdi.
Warning: The fdisk command can destroy data on the disk. Do not use this
command if the disk contains data that you want to preserve.
See Making devices invisible to VxVM on page 204.
Select Add or initialize one or more disks from the vxdiskadm main
menu.
At the following prompt, enter the disk device name of the disk to be added to
VxVM control (or enter list for a list of disks):
Select disk devices to add:
[<pattern-list>,all,list,q,?]
If you enter list at the prompt, the vxdiskadm program displays a list of the
disks available to the system:
DEVICE    DISK      GROUP     STATUS
sdb       mydg01    mydg      online
sdc       mydg02    mydg      online
sdd       mydg03    mydg      online
sde       mydg04    mydg      online
sdf       -         -         online
sdg       -         -         online invalid
The phrase online invalid in the STATUS line indicates that a disk has yet
to be added or initialized for VxVM control. Disks that are listed as online with
a disk name and disk group are already under VxVM control.
Enter the device name or pattern of the disks that you want to initialize at the
prompt and press Return.
To continue with the operation, enter y (or press Return) at the following prompt:
Here are the disks selected. Output format: [Device]
list of device names
Continue operation? [y,n,q,?] (default: y) y
At the following prompt, specify the disk group to which the disk should be
added, or enter none to reserve the disks for future use.
If you specified the name of a disk group that does not already exist, vxdiskadm
prompts for confirmation that you really want to create this new disk group:
There is no active disk group named disk group name.
Create a new group named disk group name? [y,n,q,?]
(default: y)y
You are then prompted to confirm whether the disk group should support the
Cross-platform Data Sharing (CDS) feature:
Create the disk group as a CDS disk group? [y,n,q,?]
(default: n)
If the new disk group may be moved between different operating system
platforms, enter y. Otherwise, enter n.
At the following prompt, either press Return to accept the default disk name
or enter n to allow you to define your own disk names:
Use default disk names for the disks? [y,n,q,?] (default: y) n
When prompted whether the disks should become hot-relocation spares, enter
n (or press Return):
Add disks as spare disks for disk group name? [y,n,q,?]
(default: n) n
When prompted whether to exclude the disks from hot-relocation use, enter n
(or press Return).
Exclude disks from hot-relocation use? [y,n,q,?}
(default: n) n
You are next prompted to choose whether you want to add a site tag to the
disks:
Add site tag to disks? [y,n,q,?] (default: n)
A site tag is usually applied to disk arrays or enclosures, and is not required
unless you want to use the Remote Mirror feature.
If you enter y to add a site tag, you are prompted to enter the site name
at step 11.
10 To continue with the operation, enter y (or press Return) at the following prompt:
The selected disks will be added to the disk group
disk group name with default disk names.
list of device names
Continue with operation? [y,n,q,?] (default: y) y
11 If you chose to tag the disks with a site in step 9, you are now prompted to
enter the site name that should be applied to the disks in each enclosure:
The following disk(s):
list of device names
belong to enclosure(s):
list of enclosure names
Enter site tag for disks on enclosure enclosure name
[<name>,q,?] site_name
12 If you see the following prompt, it lists any disks that have already been
initialized for use by VxVM:
The following disk devices appear to have been initialized
already.
The disks are currently available as replacement disks.
Output format: [Device]
list of device names
Use these devices? [Y,N,S(elect),q,?] (default: Y) Y
This prompt allows you to indicate yes or no for all of these disks (Y or N)
or to select how to process each of these disks on an individual basis (S).
If you are sure that you want to reinitialize all of these disks, enter Y at the
following prompt:
VxVM NOTICE V-5-2-366 The following disks you selected for use
appear to already have been initialized for the Volume
Manager. If you are certain the disks already have been
initialized for the Volume Manager, then you do not need to
reinitialize these disk devices.
Output format: [Device]
list of device names
Reinitialize these devices? [Y,N,S(elect),q,?] (default: Y) Y
13 vxdiskadm may now indicate that one or more disks is a candidate for encapsulation.
14 If you choose to encapsulate the disk, vxdiskadm confirms its device name
and prompts you for permission to proceed. Enter y (or press Return) to
continue encapsulation:
VxVM NOTICE V-5-2-311 The following disk device has been
selected for encapsulation.
Output format: [Device]
device name
Continue with encapsulation? [y,n,q,?] (default: y) y
vxdiskadm now displays an encapsulation status and informs you
that you must perform a shutdown and reboot as soon as
possible:
VxVM INFO V-5-2-333 The disk device device name will be
encapsulated and added to the disk group disk group name with the
disk name disk name.
You can now choose whether the disk is to be formatted as a CDS disk that
is portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the desired format [cdsdisk,sliced,simple,q,?]
(default: cdsdisk)
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want to
use the default value, or enter a different value. (The maximum value that you
can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?]
(default: 65536)
If you entered cdsdisk as the format, you are prompted for the action to be
taken if the disk cannot be converted to this format:
Do you want to use sliced as the format should cdsdisk fail?
[y,n,q,?] (default: y)
If you enter y, and it is not possible to encapsulate the disk as a CDS disk, it
is encapsulated as a sliced disk. Otherwise, the encapsulation fails.
vxdiskadm then proceeds to encapsulate the disks. You should now reboot
your system at the earliest possible opportunity, for example by running this
command:
# shutdown -r now
The /etc/fstab file is updated to include the volume devices that are used to
mount any encapsulated file systems. You may need to update any other
references in backup scripts, databases, or manually created swap devices.
The original /etc/fstab file is saved as /etc/fstab.b4vxvm.
15 If you choose not to encapsulate the disk, vxdiskadm asks if you want to
initialize the disk instead. Enter y to confirm this:
Instead of encapsulating, initialize? [y,n,q,?] (default: n) y
vxdiskadm now confirms those disks that are being initialized and added to VxVM control with
messages similar to the following. In addition, you may be prompted to perform
surface analysis.
VxVM INFO V-5-2-205 Initializing device device name.
16 You can now choose whether the disk is to be formatted as a CDS disk that
is portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the desired format [cdsdisk,sliced,simple,q,?]
(default: cdsdisk)
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
17 At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want to
use the default value, or enter a different value. (The maximum value that you
can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?]
(default: 65536)
vxdiskadm then proceeds to add the disks.
VxVM INFO V-5-2-88 Adding disk device device name to disk group
disk group name with disk name disk name.
.
.
.
18 If you choose not to use the default disk names, vxdiskadm prompts you to
enter the disk name.
19 At the following prompt, indicate whether you want to continue to initialize more
disks (y) or return to the vxdiskadm main menu (n):
Add or initialize other disks? [y,n,q,?] (default: n)
You can change the default layout for disks using the vxdisk command or the
vxdiskadm utility.
See the vxdisk(1M) manual page.
See the vxdiskadm(1M) manual page.
Disk reinitialization
You can reinitialize a disk that has previously been initialized for use by Veritas
Volume Manager (VxVM) by putting it under VxVM control as you would a new
disk.
See Adding a disk to VxVM on page 270.
Warning: Reinitialization does not preserve data on the disk. If you want to reinitialize
the disk, make sure that it does not contain data that should be preserved.
If the disk you want to add has been used before, but not with a volume manager,
you can encapsulate the disk to preserve its information. If the disk you want to add
has previously been under LVM control, you can preserve the data it contains on
a VxVM disk by the process of conversion.
For detailed information about migrating volumes, see the Symantec Storage
Foundation and High Availability Solutions Solutions Guide.
Removing disks
This section describes how to remove a Veritas Volume Manager (VxVM) disk.
You must disable a disk group before you can remove the last disk in that group.
See Disabling a disk group on page 653.
As an alternative to disabling the disk group, you can destroy the disk group.
See Destroying a disk group on page 653.
You can remove a disk from a system and move it to another system if the disk is
failing or has failed.
To remove a disk
Stop all activity by applications to volumes that are configured on the disk that
is to be removed. Unmount file systems and shut down databases that are
configured on the volumes.
Move the volumes to other disks or back up the volumes. To move a volume,
use vxdiskadm to mirror the volume on one or more disks, then remove the
original copy of the volume. If the volumes are no longer needed, they can be
removed instead of moved.
Check that any data on the disk has either been moved to other disks or is no
longer needed.
At the following prompt, enter the disk name of the disk to be removed:
Enter disk name [<disk>,list,q,?] mydg01
If there are any volumes on the disk, VxVM asks you whether they should be
evacuated from the disk. If you wish to keep the volumes, answer y. Otherwise,
answer n.
The vxdiskadm utility removes the disk from the disk group and displays the
following success message:
VxVM INFO V-5-2-268 Removal of disk mydg01 is complete.
You can now remove the disk or leave it on your system as a replacement.
At the following prompt, indicate whether you want to remove other disks (y)
or return to the vxdiskadm main menu (n):
Remove another disk? [y,n,q,?] (default: n)
There is not enough space on the remaining disks in the subdisks' disk group.
If the vxdiskadm program cannot move some subdisks, remove some plexes from
some disks to free more space before proceeding with the disk removal operation.
See Removing a volume on page 662.
To remove a disk with subdisks
Run the vxdiskadm program and select Remove a disk from the main menu.
If the disk is used by some subdisks, the following message is displayed:
VxVM ERROR V-5-2-369 The following volumes currently use part of
disk mydg02:
home usrvol
Volumes must be moved from mydg02 before it can be removed.
Move volumes to other disks? [y,n,q,?] (default: n)
Run the vxdiskadm program and select Remove a disk from the main menu,
and respond to the prompts as shown in this example to remove mydg02:
Enter disk name [<disk>,list,q,?] mydg02
VxVM NOTICE V-5-2-284 Requested operation is to remove disk
mydg02 from group mydg.
Continue with operation? [y,n,q,?] (default: y) y
VxVM INFO V-5-2-268 Removal of disk mydg02 is complete.
Clobber disk headers? [y,n,q,?] (default: n) y
Enter y to remove the disk completely from VxVM control. If you do not want
to remove the disk completely from VxVM control, enter n.
Renaming a disk
Veritas Volume Manager (VxVM) gives the disk a default name when you add the
disk to VxVM control, unless you specify a VxVM disk name. VxVM uses the VxVM
disk name to identify the location of the disk or the disk type.
To rename a disk
By default, VxVM names subdisk objects after the VxVM disk on which they
are located. Renaming a VxVM disk does not automatically rename the subdisks
on that disk.
For example, you might want to rename disk mydg03, as shown in the following
output from vxdisk list, to mydg02:
# vxdisk list
DEVICE    TYPE           DISK      GROUP     STATUS
sdb       auto:sliced    mydg01    mydg      online
sdc       auto:sliced    mydg03    mydg      online
sdd       auto:sliced    -         -         online
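As a sketch, the rename itself is performed with the vxedit command, assuming the disk group mydg:
# vxedit -g mydg rename mydg03 mydg02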
To confirm that the name change took place, use the vxdisk list command
again:
# vxdisk list
DEVICE    TYPE           DISK      GROUP     STATUS
sdb       auto:sliced    mydg01    mydg      online
sdc       auto:sliced    mydg02    mydg      online
sdd       auto:sliced    -         -         online
Chapter 12
Event monitoring
This chapter includes the following topics:
Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
Monitoring of SAN fabric events and proactive error detection (SAN event)
See Fabric Monitoring and proactive error detection on page 286.
Fabric Monitoring and proactive error detection
See Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
on page 288.
To display the current value of the dmp_monitor_fabric tunable, use the following
command:
# vxdmpadm gettune dmp_monitor_fabric
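As a sketch, the tunable is enabled or disabled with vxdmpadm settune:
# vxdmpadm settune dmp_monitor_fabric=on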
Throttling of paths
Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
For details on the various log levels, see the vxdmpadm(1M) manual page.
To view the status of the vxesd daemon, use the vxddladm utility:
# vxddladm status eventsource
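As a sketch, the daemon is started and stopped with the same utility:
# vxddladm start eventsource
# vxddladm stop eventsource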
Chapter 13
This chapter includes the following topics:
Concurrent I/O
Cache advisories
Sequential
For sequential I/O, VxFS employs a read-ahead policy by default when the
application is reading data. For writing, it allocates contiguous blocks if possible.
In most cases, VxFS handles I/O that is sequential through buffered I/O. VxFS
handles random or nonsequential I/O using direct I/O without buffering.
VxFS provides a set of I/O cache advisories for use when accessing files.
See the Veritas File System Programmer's Reference Guide.
See the vxfsio(7) manual page.
Direct I/O
Direct I/O is an unbuffered form of I/O. If the VX_DIRECT advisory is set, the user is
requesting direct data transfer between the disk and the user-supplied buffer for
reads and writes. This bypasses the kernel buffering of data, and reduces the CPU
overhead associated with I/O by eliminating the data copy between the kernel buffer
and the user's buffer. This also avoids taking up space in the buffer cache that
might be better used for something else. The direct I/O feature can provide significant
performance gains for some applications.
The direct I/O and VX_DIRECT advisories are maintained on a per-file-descriptor
basis.
The ending file offset must be aligned to a 512-byte boundary, or the length
must be a multiple of 512 bytes.
Unbuffered I/O
If the VX_UNBUFFERED advisory is set, I/O behavior is the same as direct I/O with
the VX_DIRECT advisory set, so the alignment constraints that apply to direct I/O
also apply to unbuffered I/O. For unbuffered I/O, however, if the file is being
extended, or storage is being allocated to the file, inode changes are not updated
synchronously before the write returns to the user. The VX_UNBUFFERED advisory
is maintained on a per-file-descriptor basis.
Concurrent I/O
Concurrent I/O (VX_CONCURRENT) allows multiple processes to read from or write to
the same file without blocking other read(2) or write(2) calls. POSIX semantics
requires read and write calls to be serialized on a file with other read and write
calls. With POSIX semantics, a read call either reads the data before or after the
write call occurred. With the VX_CONCURRENT advisory set, the read and write
operations are not serialized as in the case of a character device. This advisory is
generally used by applications that require high performance for accessing data
and do not perform overlapping writes to the same file. It is the responsibility of the
application or the running threads to coordinate the write activities to the same
file when using Concurrent I/O.
Concurrent I/O can be enabled in the following ways:
By specifying the VX_CONCURRENT advisory flag for the file descriptor in the
VX_SETCACHE ioctl command. Only the read(2) and write(2) calls occurring
through this file descriptor use concurrent I/O. The read and write operations
occurring through other file descriptors for the same file will still follow the POSIX
semantics.
See vxfsio(7) manual page.
By using the cio mount option. The read(2) and write(2) operations occurring
on all of the files in this particular file system will use concurrent I/O.
See cio mount option on page 167.
See the mount_vxfs(1M) manual page.
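For example, a sketch of mounting a VxFS file system with the cio option, using hypothetical device and mount point names:
# mount -t vxfs -o cio /dev/vx/dsk/mydg/vol01 /mnt1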
Cache advisories
VxFS allows an application to set cache advisories for use when accessing files.
VxFS cache advisories enable applications to help monitor the buffer cache and
provide information on how better to tune the buffer cache to improve performance.
The basic function of the cache advisory is to let you know whether you could have
avoided a later re-read of block X if the buffer cache had been a little larger.
Conversely, the cache advisory can also let you know that you could safely reduce
the buffer cache size without putting block X into jeopardy.
These advisories are in memory only and do not persist across reboots. Some
advisories are currently maintained on a per-file, not a per-file-descriptor, basis.
Only one set of advisories can be in effect for all accesses to the file. If two conflicting
applications set different advisories, both must use the advisories that were last
set.
All advisories are set using the VX_SETCACHE ioctl command. The current set of
advisories can be obtained with the VX_GETCACHE ioctl command.
See the vxfsio(7) manual page.
Table 13-1
SFHA Solutions database accelerator     Supported databases
...                                     Oracle
...                                     Oracle
Concurrent I/O                          DB2, Sybase
Chapter 14
vxassist mirror
vxassist snapcreate
vxevac
vxplex att
vxplex cp
vxplex mv
vxsnap addmir
vxsnap reattach
vxsd mv
The administrative I/O operations allocate memory for I/O from a separate memory
pool. You can tune the maximum size of this pool with the tunable parameter,
vol_max_adminio_poolsz.
Chapter 15
Understanding point-in-time copy methods
This chapter includes the following topics:
Volume-level snapshots
Storage Checkpoints
About FileSnaps
the point-in-time copies. The point-in-time copies can be processed on the same
host as the active data, or a different host. If required, you can offload processing
of the point-in-time copies onto another host to avoid contention for system resources
on your production server. This method is called off-host processing. If implemented
correctly, off-host processing solutions have almost no impact on the performance
of the primary production system.
For more information about how to use point-in-time copies for particular use cases,
see the Symantec Storage Foundation and High Availability Solutions Solutions
Guide.
Cloning data: You can clone your file system or application data. This
functionality enables you to quickly and efficiently provision virtual desktops.
All of the snapshot solutions mentioned above are also available on the disaster
recovery site, in conjunction with Volume Replicator.
For more information about snapshots with replication, see the Symantec Storage
Foundation and High Availability Solutions Replication Administrator's Guide.
Symantec Storage Foundation provides several point-in-time copy solutions that
support your needs, including the following use cases:
Figure 15-1 Original volumes and their snapshot volumes, created from a cache or an empty volume; step 4 applies the desired processing application to the snapshot volumes, and steps 3 and 4 are repeated as required.
Note: The Disk Group Split/Join functionality is not used. As all processing takes
place in the same disk group, synchronization of the contents of the snapshots from
the original volumes is not usually required unless you want to prevent disk
contention. Snapshot creation and updating are practically instantaneous.
Figure 15-2 shows the suggested arrangement for implementing solutions where
the primary host is used and disk contention is to be avoided.
Figure 15-2 Primary host with SCSI or Fibre Channel connectivity to the storage.
Figure 15-3 Primary host with network and SCSI or Fibre Channel connectivity to the storage.
Also, if you place the snapshot volumes on disks that are attached to host controllers
other than those for the disks in the primary volumes, it is possible to avoid
contending with the primary host for I/O resources. To implement this, paths 1 and
2 shown in Figure 15-3 should be connected to different controllers.
Figure 15-4 shows an example of how you might achieve such connectivity using
Fibre Channel technology with 4 Fibre Channel controllers in the primary host.
Figure 15-4 Primary host with four Fibre Channel controllers (c1 through c4) connected through Fibre Channel hubs or switches to the disk arrays.
This layout uses redundant-loop access to deal with the potential failure of any
single component in the path between a system and a disk array.
Note: On some operating systems, controller names may differ from what is shown
here.
Figure 15-5 shows how off-host processing might be implemented in a cluster by
configuring one of the cluster nodes as the OHP node.
Figure 15-5 Cluster with one of the cluster nodes configured as the OHP node.
Figure 15-6 shows an alternative arrangement, where the OHP node could be a
separate system that has a network connection to the cluster, but which is not a
cluster node and is not connected to the cluster's private network.
Figure 15-6 Cluster and a separate OHP node connected by the network, with SCSI or Fibre Channel connectivity to the storage.
Note: For off-host processing, the example scenarios in this document assume that
a separate OHP host is dedicated to the backup or decision support role. For
clusters, it may be simpler, and more efficient, to configure an OHP host that is not
a member of the cluster.
Figure 15-7 illustrates the steps that are needed to set up the processing solution
on the primary host.
Figure 15-7 Original volumes and their snapshot volumes; the snapshot volumes are deported and imported between the primary host and the OHP host, and steps 3 through 9 are repeated as required.
Disk Group Split/Join is used to split off snapshot volumes into a separate disk
group that is imported on the OHP host.
Note: As the snapshot volumes are to be moved into another disk group and then
imported on another host, their contents must first be synchronized with the parent
volumes. On reimporting the snapshot volumes, refreshing their contents from the
original volume is speeded by using FastResync.
File system-level solutions use the Storage Checkpoint feature of Veritas File
System. Storage Checkpoints are suitable for implementing solutions where
storage space is critical for:
Applications where multiple writable copies of a file system are required for
testing or versioning.
See Storage Checkpoints on page 316.
The following table compares the point-in-time copy solutions:

Solution            Granularity   Location of        Snapshot         Internal           Exported      Can be moved      Availability
                                  snapshot           technique        content            content       off-host
----------------------------------------------------------------------------------------------------------------------------
Instant full-sized  Volume        Separate volume    ...              Changed regions/   Read/Write    Yes, after        Immediate
snapshot                                                              Full volume        volume        synchronization
Instant space-      Volume        Cache object       Copy on write    Changed regions    Read/Write    No                Immediate
optimized snapshot                (separate cache                                        volume
                                  volume)
Linked plex         Volume        Separate volume    Copy on write/   Changed regions/   Read/Write    Yes, after        Immediate
break-off                                            Full copy        Full volume        volume        synchronization
Plex break-off      Volume        Separate volume    Copy on write/   Changed regions/   Read/Write    Yes, after        Immediate
using vxsnap                                         Full copy        Full volume        volume        synchronization
Traditional plex    Volume        Separate volume    Full copy        Full volume        Read/Write    Yes, after        After full
break-off using                                                                          volume        synchronization   synchronization
vxassist
Storage Checkpoint  File system   Space within       Copy on write    ...                ...           No                Immediate
                                  file system
File system         File system   Separate volume    Copy on write    ...                ...           ...               Immediate
snapshot
FileSnap            File          Space within       Copy on write/   Changed file       Read/Write    No                Immediate
                                  file system        Lazy copy on     system blocks      file system
                                                     write
Volume-level snapshots
A volume snapshot is an image of a Veritas Volume Manager (VxVM) volume at a
given point in time. You can also take a snapshot of a volume set.
Volume snapshots allow you to make backup copies of your volumes online with
minimal interruption to users. You can then use the backup copies to restore data
that has been lost due to disk failure, software errors or human mistakes, or to
create replica volumes for the purposes of report generation, application
development, or testing.
Volume snapshots can also be used to implement off-host online backup.
Physically, a snapshot may be a full (complete bit-for-bit) copy of the data set, or
it may contain only those elements of the data set that have been updated since
snapshot creation. The latter are sometimes referred to as allocate-on-first-write
snapshots, because space for data elements is added to the snapshot image only
when the elements are updated (overwritten) for the first time in the original data
set. Storage Foundation allocate-on-first-write snapshots are called space-optimized
snapshots.
coordinates with VxFS to flush data that is in the cache to the volume. Therefore,
these snapshots are always VxFS consistent and require no VxFS recovery while
mounting.
For databases, a suitable mechanism must additionally be used to ensure the
integrity of tablespace data when the volume snapshot is taken. The facility to
temporarily suspend file system I/O is provided by most modern database software.
The examples provided in this document illustrate how to perform this operation.
For ordinary files in a file system, which may be open to a wide variety of different
applications, there may be no way to ensure the complete integrity of the file data
other than by shutting down the applications and temporarily unmounting the file
system. In many cases, it may only be important to ensure the integrity of file data
that is not in active use at the time that you take the snapshot. However, in all
scenarios where applications coordinate, snapshots are crash-recoverable.
Figure 15-8 Backup cycle with an instant snapshot: vxsnap prepare and vxsnap make create the snapshot volume from the original volume, and vxsnap refresh refreshes the snapshot on each backup cycle.
Storage Checkpoints
A Storage Checkpoint is a persistent image of a file system at a given instance in
time. Storage Checkpoints use a copy-on-write technique to reduce I/O overhead
by identifying and maintaining only those file system blocks that have changed
since a previous Storage Checkpoint was taken. Storage Checkpoints have the
following important features:
A Storage Checkpoint can preserve not only file system metadata and the
directory hierarchy of the file system, but also user data as it existed when the
Storage Checkpoint was taken.
After creating a Storage Checkpoint of a mounted file system, you can continue
to create, remove, and update files on the file system without affecting the image
of the Storage Checkpoint.
To minimize disk space usage, Storage Checkpoints use free space in the file
system.
Can have multiple, read-only Storage Checkpoints that reduce I/O operations
and required storage space because the most recent Storage Checkpoint is the
only one that accumulates updates from the primary file system.
Can restore the file system to its state at the time that the Storage Checkpoint
was taken.
the same creation timestamp. The Storage Checkpoint facility guarantees that
multiple file system Storage Checkpoints are created on all or none of the specified
file systems, unless there is a system crash while the operation is in progress.
Note: The calling application is responsible for cleaning up Storage Checkpoints
after a system crash.
A Storage Checkpoint of the primary fileset initially contains only pointers to the
existing data blocks in the primary fileset, and does not contain any allocated data
blocks of its own.
Figure 15-9 shows the file system /database and its Storage Checkpoint. The
Storage Checkpoint is logically identical to the primary fileset when the Storage
Checkpoint is created, but it does not contain any actual data blocks.
Figure 15-9 The primary fileset and its Storage Checkpoint, each presenting the /database directory with the files emp.dbf and jun.dbf.
In Figure 15-10, a square represents each block of the file system. This figure shows
a Storage Checkpoint containing pointers to the primary fileset at the time the
Storage Checkpoint is taken, as in Figure 15-9.
Figure 15-10 The primary fileset and the Storage Checkpoint that points to its data blocks.
The Storage Checkpoint presents the exact image of the file system by finding the
data from the primary fileset. VxFS updates a Storage Checkpoint by using the
copy-on-write technique.
See Copy-on-write on page 319.
Copy-on-write
In Figure 15-11, the third data block in the primary fileset originally containing C is
updated.
Before the data block is updated with new data, the original data is copied to the
Storage Checkpoint. This is called the copy-on-write technique, which allows the
Storage Checkpoint to preserve the image of the primary fileset when the Storage
Checkpoint is taken.
Every update or write operation does not necessarily result in the process of copying
data to the Storage Checkpoint because the old data needs to be saved only once.
As blocks in the primary fileset continue to change, the Storage Checkpoint
accumulates the original data blocks. In this example, subsequent updates to the
third data block, now containing C', are not copied to the Storage Checkpoint
because the original image of the block containing C is already saved.
Figure 15-11 The primary fileset and the Storage Checkpoint after the third data block is updated.
External applications, such as NFS, see the files as part of the original mount
point. Thus, no additional NFS exports are necessary.
The Storage Checkpoints are automounted internally, but the operating system
does not know about the automounting. This means that Storage Checkpoints
cannot be mounted manually, and they do not appear in the list of mounted file
systems. When Storage Checkpoints are created or deleted, entries in the Storage
Checkpoint directory are automatically updated. If a Storage Checkpoint is removed
with the -f option while a file in the Storage Checkpoint is still in use, the Storage
Checkpoint is force unmounted, and all operations on the file fail with the EIO error.
If there is already a file or directory named .checkpoint in the root directory of the
file system, such as a directory created with an older version of Veritas File System
(VxFS) or when the Storage Checkpoint visibility feature was disabled, the fake directory
providing access to the Storage Checkpoints is not accessible. With this feature
enabled, attempting to create a file or directory in the root directory with the name
.checkpoint fails with the EEXIST error.
Note: If an auto-mounted Storage Checkpoint is in use by an NFS mount, removing
the Storage Checkpoint might succeed even without the forced (-f) option.
Figure 15-12 Primary fileset.
See Showing the difference between a data and a nodata Storage Checkpoint
on page 391.
About FileSnaps
A FileSnap is an atomic space-optimized copy of a file in the same name space,
stored in the same file system. Veritas File System (VxFS) supports snapshots on
file system disk layout Version 8 and later.
FileSnaps provide an ability to snapshot objects that are smaller in granularity than
a file system or a volume. The ability to snapshot parts of a file system name space
is required for application-based or user-based management of data stored in a file
system. This is useful when a file system is shared by a set of users or applications
or the data is classified into different levels of importance in the same file system.
All regular file operations are supported on the FileSnap, and VxFS does not
distinguish the FileSnap in any way.
Properties of FileSnaps
FileSnaps provide non-root users the ability to snapshot data that they own, without
requiring administrator privileges. This enables users and applications to version,
backup, and restore their data by scheduling snapshots at appropriate points of
their application cycle. Restoring from a FileSnap is as simple as specifying a
snapshot as the source file and the original file as the destination file as the
arguments for the vxfilesnap command.
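As a sketch, using hypothetical file names, the first command creates a FileSnap of a file and the second restores the original file from that snapshot:
# vxfilesnap /mnt1/dbfile /mnt1/dbfile_snap
# vxfilesnap /mnt1/dbfile_snap /mnt1/dbfile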
FileSnap creation locks the source file as read-only and locks the destination file
exclusively for the duration of the operation, thus creating the snapshots atomically.
The rest of the files in the file system can be accessed with no I/O pause while
FileSnap creation is in progress. Read access to the source file is also uninterrupted
while the snapshot creation is in progress. This allows for true sharing of a file
system by multiple users and applications in a non-intrusive fashion.
The name space relationship between source file and destination file is defined by
the user-issued vxfilesnap command by specifying the destination file path. Veritas
File System (VxFS) neither differentiates between the source file and the destination
file, nor does it maintain any internal relationships between these two files. Once
the snapshot is completed, the only shared property between the source file and
destination file are the data blocks and block map shared by them.
The number of FileSnaps of a file is practically unlimited. The technical limit is the
maximum number of files supported by the VxFS file system, which is one billion
files per file set. When thousands of FileSnaps are created from the same file and
each of these snapshot files is simultaneously read and written to by thousands of
threads, FileSnaps scale very well due to the design that results in no contention
of the shared blocks when unsharing happens due to an overwrite. The performance
seen for the case of unsharing shared blocks due to an overwrite with FileSnaps
is closer to that of an allocating write than that of a traditional copy-on-write.
In disk layout Version 8, to support block or extent sharing between the files,
reference counts are tracked for each shared extent. VxFS processes reference
count updates due to sharing and unsharing of extents in a delayed fashion. Also,
an extent that is marked shared once will not go back to unshared until all the
references are gone. This is to improve the FileSnap creation performance and
performance of data extent unsharing. However, this in effect results in the shared
block statistics for the file system to be only accurate to the point of the processing
of delayed reclamation. In other words, the shared extent statistics on the file system
and a file could be stale, depending on the state of the file system.
to read the old data either, as long as the new data covers the entire block. This
behavior combined with delayed processing of shared extent accounting makes
the lazy copy-on-write complete in times comparable to that of an allocating write.
However, in the event of a server crash, when the server has not flushed the new
data to the newly allocated blocks, the data seen on the overwritten region would
be similar to what you would find in the case of an allocating write where the server
has crashed before the data is flushed. This is not the default behavior and with
the default behavior the data that you find in the overwritten region will be either
the new data or the old data.
update or a write changes the data in block n of the snapped file system, the old
data is first read and copied to the snapshot before the snapped file system is
updated. The bitmap entry for block n is changed from 0 to 1, indicating that the
data for block n can be found on the snapshot file system. The blockmap entry for
block n is changed from 0 to the block number on the snapshot file system containing
the old data.
A subsequent read request for block n on the snapshot file system will be satisfied
by checking the bitmap entry for block n and reading the data from the indicated
block on the snapshot file system, instead of from block n on the snapped file
system. This technique is called copy-on-write. Subsequent writes to block n on
the snapped file system do not result in additional copies to the snapshot file system,
since the old data only needs to be saved once.
All updates to the snapped file system for inodes, directories, data in files, extent
maps, and so forth, are handled in this fashion so that the snapshot can present a
consistent view of all file system structures on the snapped file system for the time
when the snapshot was created. As data blocks are changed on the snapped file
system, the snapshot gradually fills with data copied from the snapped file system.
The amount of disk space required for the snapshot depends on the rate of change
of the snapped file system and the amount of time the snapshot is maintained. In
the worst case, the snapped file system is completely full and every file is removed
and rewritten. The snapshot file system would need enough blocks to hold a copy
of every block on the snapped file system, plus additional blocks for the data
structures that make up the snapshot file system. This is approximately 101 percent
of the size of the snapped file system. Normally, most file systems do not undergo
changes at this extreme rate. During periods of low activity, the snapshot should
only require two to six percent of the blocks of the snapped file system. During
periods of high activity, the snapshot might require 15 percent of the blocks of the
snapped file system. These percentages tend to be lower for larger file systems
and higher for smaller ones.
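For example, a sketch of creating and mounting a snapshot file system of a file
system mounted at /home, using a snapshot volume sized at roughly 15 percent of
the snapped volume (the disk group, volume, and mount point names are illustrative,
and the snapof mount option is assumed to accept the mount point of the snapped
file system):
# mount -t vxfs -o snapof=/home \
/dev/vx/dsk/mydg/snapvol /snaphome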
Warning: If a snapshot file system runs out of space for changed data blocks, it is
disabled and all further attempts to access it fail. This does not affect the snapped
file system.
Chapter 16
Administering volume snapshots
This chapter includes the following topics:
Cascaded snapshots
snapshots that are created using the vxassist command may be removed in a
future release.
To recover from the failure of instant snapshot commands, see the Symantec
Storage Foundation and High Availability Solutions Troubleshooting Guide.
Figure 16-1
[Figure: the traditional snapshot backup cycle. The vxassist snapstart command
attaches a snapshot mirror to the original volume; vxassist snapshot breaks it off
as a snapshot volume, which can be backed up to disk, tape, or other media, or
used to replicate a database or file system; vxassist snapback refreshes the
snapshot mirror from the original volume; vxassist snapclear makes the snapshot
an independent volume.]
The vxassist snapstart command creates a mirror to be used for the snapshot,
and attaches it to the volume as a snapshot mirror. As is usual when creating a
mirror, the process of copying the volume's contents to the new snapshot plexes
can take some time to complete. (The vxassist snapabort command cancels this
operation and removes the snapshot mirror.)
When the attachment is complete, the vxassist snapshot command is used to
create a new snapshot volume by taking one or more snapshot mirrors to use as
its data plexes. The snapshot volume contains a copy of the original volume's data
at the time that you took the snapshot. If more than one snapshot mirror is used,
the snapshot volume is itself mirrored.
The command, vxassist snapback, can be used to return snapshot plexes to the
original volume from which they were snapped, and to resynchronize the data in
the snapshot mirrors from the data in the original volume. This enables you to
refresh the data in a snapshot after you use it to make a backup. You can use a
variation of the same command to restore the contents of the original volume from
a snapshot previously taken.
The FastResync feature minimizes the time and I/O needed to resynchronize the
data in the snapshot. If FastResync is not enabled, a full resynchronization of the
data is required.
Finally, you can use the vxassist snapclear command to break the association
between the original volume and the snapshot volume. Because the snapshot
relationship is broken, no change tracking occurs. Use this command if you do not
need to reuse the snapshot volume to create a new point-in-time copy.
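A sketch of one complete backup cycle with these commands, assuming an original
volume named datavol in the disk group mydg (the volume and snapshot names are
illustrative):
# vxassist -g mydg snapstart datavol
# vxassist -g mydg snapshot datavol snapdatavol
After backing up snapdatavol, either return its plexes to the original volume:
# vxassist -g mydg snapback snapdatavol
or break the association and keep it as an independent volume:
# vxassist -g mydg snapclear snapdatavol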
Figure 16-2
[Figure: the instant snapshot backup cycle. The vxsnap prepare command readies
the original volume; vxsnap make creates the snapshot volume, which can be backed
up to disk, tape, or other media, or, once synchronization is complete, used to
create a replica database or file system; vxsnap refresh and vxsnap reattach return
the snapshot to the backup cycle; vxsnap dis or vxsnap split makes the snapshot
an independent volume.]
To create an instant snapshot, use the vxsnap make command. This command can
either be applied to a suitably prepared empty volume that is to be used as the
snapshot volume, or it can be used to break off one or more synchronized plexes
from the original volume.
You can make a backup of a full-sized instant snapshot, instantly refresh its contents
from the original volume, or attach its plexes to the original volume, without
completely synchronizing the snapshot plexes from the original volume.
VxVM uses a copy-on-write mechanism to ensure that the snapshot volume
preserves the contents of the original volume at the time that the snapshot is taken.
Any time that the original contents of the volume are about to be overwritten, the
original data in the volume is moved to the snapshot volume before the write
proceeds. As time goes by, and the contents of the volume are updated, its original
contents are gradually relocated to the snapshot volume.
If a read request comes to the snapshot volume, yet the data resides on the original
volume (because it has not yet been changed), VxVM automatically and
transparently reads the data from the original volume.
If desired, you can perform either a background (non-blocking) or foreground
(blocking) synchronization of the snapshot volume. This is useful if you intend to
move the snapshot volume into a separate disk group for off-host processing, or
you want to turn the snapshot volume into an independent volume.
The vxsnap refresh command allows you to update the data in a snapshot, for
example, before taking a backup.
The command vxsnap reattach attaches snapshot plexes to the original volume,
and resynchronizes the data in these plexes from the original volume. Alternatively,
you can use the vxsnap restore command to restore the contents of the original
volume from a snapshot that you took at an earlier point in time. You can also
choose whether or not to keep the snapshot volume after restoration of the original
volume is complete.
By default, the FastResync feature of VxVM is used to minimize the time and I/O
needed to resynchronize the data in the snapshot mirror. FastResync must be
enabled to create instant snapshots.
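A sketch of the equivalent cycle for a plex break-off instant snapshot, assuming a
volume named datavol in the disk group mydg (the names are illustrative):
# vxsnap -g mydg prepare datavol
# vxsnap -b -g mydg addmir datavol
# vxsnap -g mydg snapwait datavol nmirror=1
# vxsnap -g mydg make source=datavol/newvol=snapdatavol/nmirror=1
After using snapdatavol for a backup, reattach its plexes and resynchronize them
from datavol:
# vxsnap -g mydg reattach snapdatavol source=datavol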
See Creating and managing full-sized instant snapshots on page 349.
An empty volume must be prepared for use by full-sized instant snapshots and
linked break-off snapshots.
See Creating a volume for use as a full-sized instant or linked break-off snapshot
on page 344.
ACTIVE      The mirror volume has been fully synchronized from the original
            volume. The vxsnap make command can be run to create a snapshot.
ATTACHING   The mirror volume is being synchronized from the original volume.
            The link remains in this state until resynchronization is complete.
BROKEN      The mirror volume has been detached from the original volume because
            of an I/O error or an unsuccessful attempt to grow the mirror volume.
            The vxrecover command can be used to recover the mirror volume in
            the same way as for a DISABLED volume.
If you resize (grow or shrink) a volume, all its ACTIVE linked mirror volumes are also
resized at the same time. The volume and its mirrors can be in the same disk group
or in different disk groups. If the operation is successful, the volume and its mirrors
will have the same size.
If a volume has been grown, a resynchronization of the grown regions in its linked
mirror volumes is started, and the links remain in the ATTACHING state until
resynchronization is complete. The vxsnap snapwait command can be used to
wait for the state to become ACTIVE.
When you use the vxsnap make command to create the snapshot volume, this
removes the link, and establishes a snapshot relationship between the snapshot
volume and the original volume.
The vxsnap reattach operation re-establishes the link relationship between the
two volumes, and starts a resynchronization of the mirror volume.
See Creating and managing linked break-off snapshot volumes on page 354.
An empty volume must be prepared for use by linked break-off snapshots.
See Creating a volume for use as a full-sized instant or linked break-off snapshot
on page 344.
Cascaded snapshots
Figure 16-3 shows a snapshot hierarchy, known as a snapshot cascade, that can
improve write performance for some applications.
Figure 16-3    Snapshot cascade
[Figure: the original volume V with a cascade of snapshot volumes, from the most
recent snapshot Sn, through Sn-1, to the oldest snapshot S1.]
Deletion of a snapshot in the cascade takes time to copy the snapshots data
to the next snapshot in the cascade.
The reliability of a snapshot in the cascade depends on all the newer snapshots
in the chain. Thus the oldest snapshot in the cascade is the most vulnerable.
Reading from a snapshot in the cascade may require data to be fetched from
one or more other snapshots in the cascade.
For these reasons, it is recommended that you do not attempt to use a snapshot
cascade with applications that need to remove or split snapshots from the cascade.
In such cases, it may be more appropriate to create a snapshot of a snapshot as
described in the following section.
See Adding a snapshot to a cascaded snapshot hierarchy on page 360.
Note: Only unsynchronized full-sized or space-optimized instant snapshots are
usually cascaded. It is of little utility to create cascaded snapshots if the infrontof
snapshot volume is fully synchronized (as, for example, with break-off type
snapshots).
Figure 16-4
[Figure: creating a snapshot of a snapshot. The original volume V has a snapshot
volume S1, and S2 is created as a snapshot of S1.]
Even though the arrangement of the snapshots in this figure appears similar to a
snapshot cascade, the relationship between the snapshots is not recursive. When
reading from the snapshot S2, data is obtained directly from the original volume, V,
if it does not exist in S1 itself.
See Figure 16-3 on page 334.
Such an arrangement may be useful if the snapshot volume, S1, is critical to the
operation. For example, S1 could be used as a stable copy of the original volume,
V. The additional snapshot volume, S2, can be used to restore the original volume
if that volume becomes corrupted. For a database, you might need to replay a redo
log on S2 before you could use it to restore V.
Figure 16-5 shows the sequence of steps that would be required to restore a
database.
Figure 16-5
[Figure: the sequence of steps to restore a database using a snapshot of a snapshot.
A snapshot S1 of the original volume V is taken, and a snapshot S2 of S1 is created;
after the contents of V have gone bad, the database redo logs are applied to S2,
which is then used to restore V.]
If you have configured snapshots in this way, you may wish to make one or more
of the snapshots into independent volumes. There are two vxsnap commands that
you can use to do this:
vxsnap dis dissociates a snapshot volume from its parent volume. The snapshot
to be dissociated must have been fully synchronized from its
parent. If a snapshot volume has a child snapshot volume, the child must also
have been fully synchronized. If the command succeeds, the child snapshot
becomes a snapshot of the original volume.
Figure 16-6 shows the effect of applying the vxsnap dis command to snapshots
with and without dependent snapshots.
Figure 16-6
vxsnap dis is applied to snapshot S2, which has no snapshots of its own
Original volume
V
Snapshot volume of V:
S1
Original volume
V
Snapshot volume of V:
S1
Volume
S2
S1 remains owned by V
S2 is independent
Snapshot volume of V:
S1
vxsnap dis S1
Original volume
V
Volume
S1
Snapshot volume of V:
S2
S1 is independent
S2 is adopted by V
vxsnap split dissociates a snapshot and its dependent snapshots from its
parent volume. The snapshot that is to be split must have been fully synchronized
from its parent volume.
Figure 16-7 shows the operation of the vxsnap split command.
Figure 16-7    Splitting snapshots
[Figure: applying vxsnap split S1 to the snapshot volume S1 of the original volume
V makes S1 an independent volume, together with any snapshots that depend on S1.]
For traditional snapshots, you can create snapshots of all the volumes in a single
disk group by specifying the option -o allvols to the vxassist snapshot
command.
By default, each replica volume is named SNAPnumber-volume, where number is
a unique serial number, and volume is the name of the volume for which a snapshot
is being taken. This default can be overridden by using the option -o name=pattern.
See the vxassist(1M) manual page.
See the vxsnap(1M) manual page.
You can create a snapshot of all volumes that form a logical group; for example,
all the volumes that conform to a database instance.
[Figure: refreshing the original volume on snapback. The -o resyncfromreplica
option to vxassist snapback resynchronizes the original volume from the data in
the snapshot volume.]
log replay or by running a file system checking utility such as fsck). All
synchronization of the contents of this backup must have been completed before
the original volume can be restored from it. The original volume is immediately
available for use while its contents are being restored.
See Restoring a volume from an instant space-optimized snapshot on page 363.
a backup. After the snapshot has been taken, read requests for data in the instant
snapshot volume are satisfied by reading either from a non-updated region of the
original volume, or from the copy of the original contents of an updated region that
have been recorded by the snapshot.
Note: Synchronization of a full-sized instant snapshot from the original volume is
enabled by default. If you specify the syncing=no attribute to vxsnap make, this
disables synchronization, and the contents of the instant snapshot are unlikely ever
to become fully synchronized with the contents of the original volume at the point
in time that the snapshot was taken. In such a case, the snapshot cannot be used
for off-host processing, nor can it become an independent volume.
You can immediately retake a full-sized or space-optimized instant snapshot at any
time by using the vxsnap refresh command. If a fully synchronized instant snapshot
is required, the new resynchronization must first complete.
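For example, a sketch of re-taking an existing snapshot named snap1myvol from its
original volume myvol (the names are illustrative):
# vxsnap -g mydg refresh snap1myvol source=myvol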
To create instant snapshots of volume sets, use volume set names in place of
volume names in the vxsnap command.
See Creating instant snapshots of volume sets on page 357.
When using the vxsnap prepare or vxassist make commands to make a volume
ready for instant snapshot operations, if the specified region size exceeds half the
value of the tunable voliomem_maxpool_sz , the operation succeeds but gives a
warning such as the following (for a system where voliomem_maxpool_sz is set to
12MB):
VxVM vxassist WARNING V-5-1-0 Specified regionsize is
larger than the limit on the system
(voliomem_maxpool_sz/2=6144k).
Verify that the volume has an instant snap data change object (DCO) and DCO
volume, and that FastResync is enabled on the volume:
# vxprint -g volumedg -F%instant volume
# vxprint -g volumedg -F%fastresync volume
If both commands return a value of on, skip to step 3. Otherwise continue with
step 2.
Run the vxsnap prepare command on a volume only if it does not have an
instant snap DCO volume.
For example, to prepare the volume, myvol, in the disk group, mydg, use the
following command:
# vxsnap -g mydg prepare myvol regionsize=128k ndcomirs=2 \
alloc=mydg10,mydg11
This example creates a DCO object and redundant DCO volume with two
plexes located on disks mydg10 and mydg11, and associates them with myvol.
The region size is also increased to 128KB from the default size of 64KB. The
region size must be a power of 2, and be greater than or equal to 16KB. A
smaller value requires more disk space for the change maps, but the finer
granularity provides faster resynchronization.
Decide on the following characteristics that you want to allocate to the cache
volume that underlies the cache object:
The cache volume size should be sufficient to record changes to the parent
volumes during the interval between snapshot refreshes. A suggested value
is 10% of the total size of the parent volumes for a refresh interval of 24
hours.
The attribute init=active makes the cache volume immediately available for
use.
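For example, a sketch of creating a 1 GB mirrored cache volume named cachevol
with these characteristics on the disks mydg16 and mydg17 (the names and size are
illustrative):
# vxassist -g mydg make cachevol 1g layout=mirror \
init=active mydg16 mydg17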
Use the vxmake cache command to create a cache object on top of the cache
volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
cachevolname=volume [regionsize=size] [autogrow=on] \
[highwatermark=hwmk] [autogrowby=agbvalue] \
[maxautogrow=maxagbvalue]
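For example, a sketch that creates a cache object named cobjmydg on the cache
volume cachevol, with the autogrow feature enabled (the names and region size are
illustrative):
# vxmake -g mydg cache cobjmydg cachevolname=cachevol \
regionsize=32k autogrow=on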
Use the vxprint command on the original volume to find the required size for
the snapshot volume.
# LEN=`vxprint [-g diskgroup] -F%len volume`
The command as shown assumes a Bourne-type shell such as sh, ksh or bash.
You may need to modify the command for other shells such as csh or tcsh.
Use the vxprint command on the original volume to discover the name of its
DCO:
# DCONAME=`vxprint [-g diskgroup] -F%dco_name volume`
Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
Use the vxassist command to create a volume, snapvol, of the required size
and redundancy, together with an instant snap DCO volume with the correct
region size:
# vxassist [-g diskgroup] make snapvol $LEN \
[layout=mirror nmirror=number] logtype=dco drl=off \
dcoversion=20 [ndcomirror=number] regionsz=$RSZ \
init=active [storage_attributes]
Storage attributes give you control over the devices, including disks and
controllers, which vxassist uses to configure a volume.
See Creating a volume on specific disks on page 146.
Specify the same number of DCO mirrors (ndcomirror) as the number of
mirrors in the volume (nmirror). The init=active attribute makes the volume
available immediately. You can use storage attributes to specify which disks
should be used for the volume.
As an alternative to creating the snapshot volume and its DCO volume in a
single step, you can first create the volume, and then prepare it for instant
snapshot operations as shown here:
# vxassist [-g diskgroup] make snapvol $LEN \
[layout=mirror nmirror=number] init=active \
[storage_attributes]
# vxsnap [-g diskgroup] prepare snapvol [ndcomirs=number] \
regionsize=$RSZ [storage_attributes]
To upgrade the instant snap DCOs for all volumes in the disk group
Make sure that the disk group is at least version 180. To upgrade the disk
group:
# vxdg upgrade diskgroup
Use the following command to upgrade the instant snap DCOs for all volumes
in the disk group:
# vxsnap -g diskgroup upgradeall
Make sure that the disk group is at least version 180. To upgrade the disk
group:
# vxdg upgrade diskgroup
To upgrade the DCOs, specify one or more volumes or volume sets to the
following command:
# vxsnap [-g diskgroup] upgrade
[volume1|volset1][volume2|volset2...]
If the region size of a space-optimized snapshot differs from the region size of the
cache, this can degrade the systems performance compared to the case where
the region sizes are the same.
See Creating a shared cache object on page 342.
The attributes for a snapshot are specified as a tuple to the vxsnap make command.
This command accepts multiple tuples. One tuple is required for each snapshot
that is being created. Each element of a tuple is separated from the next by a slash
character (/). Tuples are separated by white space.
To create and manage a space-optimized instant snapshot
The cachesize attribute determines the size of the cache relative to the
size of the volume. The autogrow attribute determines whether VxVM will
automatically enlarge the cache if it is in danger of overflowing. By default,
autogrow=on and the cache is automatically grown.
If autogrow is enabled, but the cache cannot be grown, VxVM disables the
oldest and largest snapshot that is using the same cache, and releases its
cache space for use.
Restore the contents of the original volume from the snapshot volume. The
space-optimized instant snapshot remains intact at the end of the operation.
See Restoring a volume from an instant space-optimized snapshot
on page 363.
To create a full-sized instant snapshot, use the following form of the vxsnap
make command:
# vxsnap [-g diskgroup] make source=volume/snapvol=snapvol\
[/snapdg=snapdiskgroup] [/syncing=off]
The command specifies the volume, snapvol, that you prepared earlier.
For example, to use the prepared volume, snap1myvol, as the snapshot for
the volume, myvol, in the disk group, mydg, use the following command:
# vxsnap -g mydg make source=myvol/snapvol=snap1myvol
For full-sized instant snapshots that are created from an empty volume,
background synchronization is enabled by default (equivalent to specifying the
syncing=on attribute). To move a snapshot into a separate disk group, or to
turn it into an independent volume, you must wait for its contents to be
synchronized with those of its parent volume.
You can use the vxsnap syncwait command to wait for the synchronization
of the snapshot volume to be completed, as shown here:
# vxsnap [-g diskgroup] syncwait snapvol
For example, you would use the following command to wait for synchronization
to finish on the snapshot volume, snap2myvol:
# vxsnap -g mydg syncwait snap2myvol
This command exits (with a return code of zero) when synchronization of the
snapshot volume is complete. The snapshot volume may then be moved to
another disk group or turned into an independent volume.
See Controlling instant snapshot synchronization on page 367.
If required, you can use the following command to test if the synchronization
of a volume is complete:
# vxprint [-g diskgroup] -F%incomplete snapvol
This command returns the value off if synchronization of the volume, snapvol,
is complete; otherwise, it returns the value on.
You can also use the vxsnap print command to check on the progress of
synchronization.
See Displaying snapshot information on page 379.
If you do not want to move the snapshot into a separate disk group, or to turn
it into an independent volume, specify the syncing=off attribute. This avoids
unnecessary system overhead. For example, to turn off synchronization when
creating the snapshot of the volume, myvol, you would use the following form
of the vxsnap make command:
# vxsnap -g mydg make source=myvol/snapvol=snap1myvol\
/syncing=off
Reattach some or all of the plexes of the snapshot volume with the original
volume.
Restore the contents of the original volume from the snapshot volume. You
can choose whether none, a subset, or all of the plexes of the snapshot
volume are returned to the original volume as a result of the operation.
See Restoring a volume from an instant space-optimized snapshot
on page 363.
Dissociate the snapshot volume entirely from the original volume. This may
be useful if you want to use the copy for other purposes such as testing or
report generation. If desired, you can delete the dissociated volume.
See Dissociating an instant snapshot on page 363.
If the snapshot is part of a snapshot hierarchy, you can also choose to split
this hierarchy from its parent volumes.
See Splitting an instant snapshot hierarchy on page 364.
To create the snapshot, you can either take some of the existing ACTIVE plexes
in the volume, or you can use the following command to add new snapshot
mirrors to the volume:
# vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \
[alloc=storage_attributes]
By default, the vxsnap addmir command adds one snapshot mirror to a volume
unless you use the nmirror attribute to specify a different number of mirrors.
The mirrors remain in the SNAPATT state until they are fully synchronized. The
-b option can be used to perform the synchronization in the background. Once
synchronized, the mirrors are placed in the SNAPDONE state.
For example, the following command adds 2 mirrors to the volume, vol1, on
disks mydg10 and mydg11:
# vxsnap -g mydg addmir vol1 nmirror=2 alloc=mydg10,mydg11
If you specify the -b option to the vxsnap addmir command, you can use the
vxsnap snapwait command to wait for synchronization of the snapshot plexes
to complete, as shown in this example:
# vxsnap -g mydg snapwait vol1 nmirror=2
To create a third-mirror break-off snapshot, use the following form of the vxsnap
make command.
# vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\
{/plex=plex1[,plex2,...]|/nmirror=number}
Either of the following attributes may be specified to create the new snapshot
volume, snapvol, by breaking off one or more existing plexes in the original
volume:
plex
nmirror
Specifies how many plexes are to be broken off. This attribute can
only be used with plexes that are in the SNAPDONE state. (Such
plexes could have been added to the volume by using the vxsnap
addmir command.)
Snapshots that are created from one or more ACTIVE or SNAPDONE plexes in
the volume are already synchronized by definition.
For backup purposes, a snapshot volume with one plex should be sufficient.
For example, to create the instant snapshot volume, snap2myvol, of the volume,
myvol, in the disk group, mydg, from a single existing plex in the volume, use
the following command:
# vxsnap -g mydg make source=myvol/newvol=snap2myvol/nmirror=1
The next example shows how to create a mirrored snapshot from two existing
plexes in the volume:
# vxsnap -g mydg make source=myvol/newvol=snap2myvol/plex=myvol-03,myvol-04
Reattach some or all of the plexes of the snapshot volume with the original
volume.
See Reattaching an instant full-sized or plex break-off snapshot
on page 361.
Restore the contents of the original volume from the snapshot volume. You
can choose whether none, a subset, or all of the plexes of the snapshot
volume are returned to the original volume as a result of the operation.
See Restoring a volume from an instant space-optimized snapshot
on page 363.
Dissociate the snapshot volume entirely from the original volume. This may
be useful if you want to use the copy for other purposes such as testing or
report generation. If desired, you can delete the dissociated volume.
See Dissociating an instant snapshot on page 363.
If the snapshot is part of a snapshot hierarchy, you can also choose to split
this hierarchy from its parent volumes.
See Splitting an instant snapshot hierarchy on page 364.
Use the following command to link the prepared snapshot volume, snapvol, to
the data volume:
# vxsnap [-g diskgroup] [-b] addmir volume mirvol=snapvol \
[mirdg=snapdg]
The optional mirdg attribute can be used to specify the snapshot volume's
current disk group, snapdg. The -b option can be used to perform the
synchronization in the background. If the -b option is not specified, the
command does not return until the link becomes ACTIVE.
For example, the following command links the prepared volume, prepsnap, in
the disk group, mysnapdg, to the volume, vol1, in the disk group, mydg:
# vxsnap -g mydg -b addmir vol1 mirvol=prepsnap mirdg=mysnapdg
If the -b option is specified, you can use the vxsnap snapwait command to
wait for the synchronization of the linked snapshot volume to complete, as
shown in this example:
# vxsnap -g mydg snapwait vol1 mirvol=prepsnap mirdg=mysnapvoldg
To create a linked break-off snapshot, use the following form of the vxsnap
make command.
# vxsnap [-g diskgroup] make
source=volume/snapvol=snapvol\
[/snapdg=snapdiskgroup]
The snapdg attribute must be used to specify the snapshot volume's disk group
if this is different from that of the data volume.
For example, to use the prepared volume, prepsnap, as the snapshot for the
volume, vol1, in the disk group, mydg, use the following command:
# vxsnap -g mydg make source=vol1/snapvol=prepsnap/snapdg=mysnapdg
Dissociate the snapshot volume entirely from the original volume. This may
be useful if you want to use the copy for other purposes such as testing or
report generation. If desired, you can delete the dissociated volume.
See Dissociating an instant snapshot on page 363.
If the snapshot is part of a snapshot hierarchy, you can also choose to split
this hierarchy from its parent volumes.
See Splitting an instant snapshot hierarchy on page 364.
The snapshot volumes (snapvol1, snapvol2 and so on) must have been prepared
in advance.
See Creating a volume for use as a full-sized instant or linked break-off snapshot
on page 344.
The specified source volumes (vol1, vol2 and so on) may be the same volume or
they can be different volumes.
If all the snapshots are to be space-optimized and to share the same cache, the
following form of the command can be used:
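A sketch of that form of the command, assuming two source volumes and a shared
cache object named cobjmydg (the names are illustrative):
# vxsnap -g mydg make source=vol1/newvol=snapvol1/cache=cobjmydg \
source=vol2/newvol=snapvol2/cache=cobjmydg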
The vxsnap make command also allows the snapshots to be of different types, have
different redundancy, and be configured from different storage, as shown here:
# vxsnap [-g diskgroup] make source=vol1/snapvol=snapvol1 \
source=vol2[/newvol=snapvol2]/cache=cacheobj\
[/alloc=storage_attributes2][/nmirror=number2]
source=vol3[/newvol=snapvol3][/alloc=storage_attributes3]\
/nmirror=number3
In this example, sequential DRL is enabled for the snapshots of the redo log
volumes, and normal DRL is applied to the snapshots of the volumes that contain
the database tables. The two space-optimized snapshots are configured to share
the same cache object in the disk group. Also note that break-off snapshots are
used for the redo logs as such volumes are write intensive.
of a volume set must itself be a volume set with the same number of volumes, and
the same volume sizes and index numbers as the parent. For example, if a volume
set contains three volumes with sizes 1GB, 2GB and 3GB, and indexes 0, 1 and 2
respectively, then the snapshot volume set must have three volumes with the same
sizes matched to the same set of index numbers. The corresponding volumes in
the parent and snapshot volume sets are also subject to the same restrictions as
apply between standalone volumes and their snapshots.
You can use the vxvset list command to verify that the volume sets have identical
characteristics as shown in this example:
# vxvset -g mydg list vset1
VOLUME    INDEX    LENGTH    KSTATE     CONTEXT
vol_0     0        204800    ENABLED    -
vol_1     1        409600    ENABLED    -
vol_2     2        614400    ENABLED    -
The corresponding listing of the snapshot volume set shows the same INDEX, LENGTH,
and KSTATE values for its component volumes.
To create a full-sized third-mirror break-off snapshot, you must ensure that each
volume in the source volume set contains sufficient plexes. The following example
shows how to achieve this by using the vxsnap command to add the required
number of plexes before breaking off the snapshot:
Here a new cache object is created for the volume set, vset3, and an existing cache
object, mycobj, is used for vset4.
The volume must have been prepared using the vxsnap prepare command.
If a volume set name is specified instead of a volume, the specified number of
plexes is added to each volume in the volume set.
By default, the vxsnap addmir command adds one snapshot mirror to a volume
unless you use the nmirror attribute to specify a different number of mirrors. The
mirrors remain in the SNAPATT state until they are fully synchronized. The -b option
can be used to perform the synchronization in the background. Once synchronized,
the mirrors are placed in the SNAPDONE state.
For example, the following command adds 2 mirrors to the volume, vol1, on disks
mydg10 and mydg11:
# vxsnap -g mydg addmir vol1 nmirror=2 alloc=mydg10,mydg11
Once you have added one or more snapshot mirrors to a volume, you can use the
vxsnap make command with either the nmirror attribute or the plex attribute to
create the snapshot volumes.
For example, the following command removes a snapshot mirror from the volume,
vol1:
# vxsnap -g mydg rmmir vol1
The mirvol and optional mirdg attributes specify the snapshot volume, snapvol,
and its disk group, snapdiskgroup. For example, the following command removes
a linked snapshot volume, prepsnap, from the volume, vol1:
# vxsnap -g mydg rmmir vol1 mirvol=prepsnap mirdg=mysnapdg
Similarly, the next snapshot that is taken, fri_bu, is placed in front of thurs_bu:
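A sketch of such a command, assuming that the snapshots are space-optimized
snapshots of a volume named dbvol and share the cache object cobjmydg (the volume
and cache names are illustrative):
# vxsnap -g mydg make \
source=dbvol/newvol=fri_bu/infrontof=thurs_bu/cache=cobjmydg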
If the source volume is not specified, the immediate parent of the snapshot is used.
Warning: The snapshot that is being refreshed must not be open to any application.
For example, any file system configured on the volume must first be unmounted.
By default, all the plexes are reattached, which results in the removal of the
snapshot. If required, the number of plexes to be reattached may be specified as
the value assigned to the nmirror attribute.
Warning: The snapshot that is being reattached must not be open to any application.
For example, any file system configured on the snapshot volume must first be
unmounted.
It is possible to reattach a volume to an unrelated volume provided that their volume
sizes and region sizes are compatible.
For example the following command reattaches one plex from the snapshot volume,
snapmyvol, to the volume, myvol:
# vxsnap -g mydg reattach snapmyvol source=myvol nmirror=1
While the reattached plexes are being resynchronized from the data in the parent
volume, they remain in the SNAPTMP state. After resynchronization is complete, the
plexes are placed in the SNAPDONE state. You can use the vxsnap snapwait
command (but not vxsnap syncwait) to wait for the resynchronization of the
reattached plexes to complete, as shown here:
# vxsnap -g mydg snapwait myvol nmirror=1
If the volume and its snapshot have both been resized (to an identical smaller or
larger size) before performing the reattachment, a fast resynchronization can still
be performed. A full resynchronization is not required. Instant snap DCO volumes
are resized proportionately when the associated data volume is resized. For version
0 DCO volumes, the FastResync maps stay the same size, but the region size is
recalculated, and the locations of the dirty bits in the existing maps are adjusted.
In both cases, new regions are marked as dirty in the maps.
The sourcedg attribute must be used to specify the data volume's disk group if this
is different from the snapshot volume's disk group, snapdiskgroup.
Warning: The snapshot that is being reattached must not be open to any application.
For example, any file system configured on the snapshot volume must first be
unmounted.
It is possible to reattach a volume to an unrelated volume provided that their sizes
and region sizes are compatible.
For example the following command reattaches the snapshot volume, prepsnap,
in the disk group, snapdg, to the volume, myvol, in the disk group, mydg:
# vxsnap -g snapdg reattach prepsnap source=myvol sourcedg=mydg
For a space-optimized instant snapshot, the cached data is used to recreate the
contents of the specified volume. The space-optimized instant snapshot remains
unchanged by the restore operation.
Warning: For this operation to succeed, the volume that is being restored and the
snapshot volume must not be open to any application. For example, any file systems
that are configured on either volume must first be unmounted.
It is not possible to restore a volume from an unrelated volume.
The following example demonstrates how to restore the volume, myvol, from the
space-optimized snapshot, snap3myvol.
# vxsnap -g mydg restore myvol source=snap3myvol
This operation fails if the snapshot, snapvol, has unsynchronized snapshots. If this
happens, the dependent snapshots must be fully synchronized from snapvol. When
You can also use this command to remove a space-optimized instant snapshot
from its cache.
See Removing a cache on page 371.
The topmost snapshot volume in the hierarchy must have been fully synchronized
for this command to succeed. Snapshots that are lower down in the hierarchy need
not have been fully resynchronized.
See Controlling instant snapshot synchronization on page 367.
The following command splits the snapshot hierarchy under snap2myvol from its
parent volume:
# vxsnap -g mydg split snap2myvol
The vxsnap print command displays information about the snapshots of a volume,
as in this example for the volume vol1:

NAME       SNAPOBJECT       TYPE      PARENT    SNAPSHOT    %DIRTY    %VALID
vol1       --               volume    --        --          --        100
           snapvol1_snp1    volume    --        snapvol1    1.30      --
snapvol1   vol1_snp1        volume    vol1      --          1.30      1.30
The %DIRTY value for snapvol1 shows that its contents have changed by 1.30%
when compared with the contents of vol1. As snapvol1 has not been synchronized
with vol1, the %VALID value is the same as the %DIRTY value. If the snapshot were
partly synchronized, the %VALID value would lie between the %DIRTY value and
100%. If the snapshot were fully synchronized, the %VALID value would be 100%.
The snapshot could then be made independent or moved into another disk group.
Additional information about the snapshots of volumes and volume sets can be
obtained by using the -n option with the vxsnap print command:
# vxsnap [-g diskgroup] -n [-l] [-v] [-x] print [vol]
Alternatively, you can use the vxsnap list command, which is an alias for the
vxsnap -n print command:
The following output is an example of using this command on the disk group dg1:
# vxsnap -g dg1 -vx list
NAME    DG   OBJTYPE  SNAPTYPE  PARENT  PARENTDG  SNAPDATE        CHANGE_DATA   SYNCED_DATA
vol     dg1  vol      -         -       -         -               -             10G (100%)
svol1   dg2  vol      fullinst  vol     dg1       2006/2/1 12:29  20M (0.2%)    60M (0.6%)
svol2   dg1  vol      mirbrk    vol     dg1       2006/2/1 12:29  120M (1.2%)   10G (100%)
svol3   dg2  vol      volbrk    vol     dg1       2006/2/1 12:29  105M (1.1%)   10G (100%)
svol21  dg1  vol      spaceopt  svol2   dg1       2006/2/1 12:29  52M (0.5%)    52M (0.5%)
vol-02  dg1  plex     snapmir   vol     dg1       -               -             56M (0.6%)
mvol    dg2  vol      mirvol    vol     dg1       -               -             58M (0.6%)
vset1   dg1  vset     -         -       -         -               -             2G (100%)
v1      dg1  compvol  -         -       -         -               -             1G (100%)
v2      dg1  compvol  -         -       -         -               -             1G (100%)
svset1  dg1  vset     mirbrk    vset    dg1       2006/2/1 12:29  1G (50%)      2G (100%)
sv1     dg1  compvol  mirbrk    v1      dg1       2006/2/1 12:29  512M (50%)    1G (100%)
sv2     dg1  compvol  mirbrk    v2      dg1       2006/2/1 12:29  512M (50%)    1G (100%)
vol-03  dg1  plex     detmir    vol     dg1       -               20M (0.2%)    -
mvol2   dg2  vol      detvol    vol     dg1       -               20M (0.2%)    -
This shows that the volume vol has three full-sized snapshots, svol1, svol2 and
svol3, which are of types full-sized instant (fullinst), mirror break-off (mirbrk)
and linked break-off (volbrk). It also has one snapshot plex (snapmir), vol-02,
and one linked mirror volume (mirvol), mvol. The snapshot svol2 itself has a
space-optimized instant snapshot (spaceopt), svol21. There is also a volume set,
vset1, with component volumes v1 and v2. This volume set has a mirror break-off
snapshot, svset1, with component volumes sv1 and sv2. The last two entries show
a detached plex, vol-03, and a detached mirror volume, mvol2, which have vol
as their parent volume. These snapshot objects may have become detached due
to an I/O error, or, in the case of the plex, by running the vxplex det command.
The CHANGE_DATA column shows the approximate difference between the current
contents of the snapshot and its parent volume. This corresponds to the amount
of data that would have to be resynchronized to make the contents the same again.
The SYNCED_DATA column shows the approximate progress of synchronization since
the snapshot was taken.
The -l option can be used to obtain a longer form of the output listing instead of
the tabular form.
The -x option expands the output to include the component volumes of volume
sets.
See the vxsnap(1M) manual page for more information about using the vxsnap
print and vxsnap list commands.
Table 16-1

Command                                            Description
vxsnap [-g diskgroup] syncpause vol|vol_set        Pause synchronization of a volume.
vxsnap [-g diskgroup] syncresume vol|vol_set       Resume synchronization of a volume.
vxsnap [-b] [-g diskgroup] syncstart vol|vol_set   Start synchronization of a volume.
                                                   The -b option puts the operation
                                                   in the background.
The commands that are shown in Table 16-1 cannot be used to control the
synchronization of linked break-off snapshots.
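For example, a sketch of starting synchronization of the snapshot snap2myvol in
the background and then waiting for it to complete, using the syncstart command
from the table together with the vxsnap syncwait command shown earlier in this
chapter:
# vxsnap -b -g mydg syncstart snap2myvol
# vxsnap -g mydg syncwait snap2myvol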
The vxsnap snapwait command is provided to wait for the link between new linked
break-off snapshots to become ACTIVE, or for reattached snapshot plexes to reach
the SNAPDONE state following resynchronization.
See Creating and managing linked break-off snapshot volumes on page 354.
See Reattaching an instant full-sized or plex break-off snapshot on page 361.
See Reattaching a linked break-off snapshot volume on page 362.
slow=iodelay
Note: The iosize and slow parameters are not supported for space-optimized
snapshots.
When cache usage reaches the high watermark value, highwatermark (default
value is 90 percent), vxcached grows the size of the cache volume by the value
of autogrowby (default value is 20% of the size of the cache volume in blocks).
The new required cache size cannot exceed the value of maxautogrow (default
value is twice the size of the cache volume in blocks).
When cache usage reaches the high watermark value, and the new required
cache size would exceed the value of maxautogrow, vxcached deletes the oldest
snapshot in the cache. If there are several snapshots with the same age, the
largest of these is deleted.
When cache usage reaches the high watermark value, vxcached deletes the
oldest snapshot in the cache. If there are several snapshots with the same age,
the largest of these is deleted. If there is only a single snapshot, this snapshot
is detached and marked as invalid.
Note: The vxcached daemon does not remove snapshots that are currently open,
and it does not remove the last or only snapshot in the cache.
If the cache space becomes exhausted, the snapshot is detached and marked as
invalid. If this happens, the snapshot is unrecoverable and must be removed.
Enabling the autogrow feature on the cache helps to avoid this situation occurring.
However, for very small caches (of the order of a few megabytes), it is possible for
the cache to become exhausted before the system has time to respond and grow
the cache. In such cases, you can increase the size of the cache manually.
Alternatively, you can use the vxcache set command to reduce the value of
highwatermark as shown in this example:
# vxcache -g mydg set highwatermark=60 cobjmydg
You can use the maxautogrow attribute to limit the maximum size to which a cache
can grow. To estimate this size, consider how much the contents of each source
volume are likely to change between snapshot refreshes, and allow some additional
space for contingency.
If necessary, you can use the vxcache set command to change other autogrow
attribute values for a cache.
See the vxcache(1M) manual page.
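For example, a sketch of changing the autogrowby and maxautogrow values (in blocks)
for the cache object cobjmydg; the values are illustrative:
# vxcache -g mydg set autogrowby=40960 cobjmydg
# vxcache -g mydg set maxautogrow=8388608 cobjmydg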
For example, to increase the size of the cache volume associated with the cache
object, mycache, to 2GB, you would use the following command:
# vxcache -g mydg growcacheto mycache 2g
To grow a cache by a specified amount, use the following form of the command
shown here:
# vxcache [-g diskgroup] growcacheby cache_object size
For example, the following command increases the size of mycache by 1GB:
# vxcache -g mydg growcacheby mycache 1g
You can similarly use the shrinkcacheby and shrinkcacheto operations to reduce
the size of a cache.
See the vxcache(1M) manual page.
Removing a cache
To remove a cache completely, including the cache object, its cache volume and all
space-optimized snapshots that use the cache:
Run the following command to find out the names of the top-level snapshot
volumes that are configured on the cache object:
# vxprint -g diskgroup -vne \
"v_plex.pl_subdisk.sd_dm_name ~ /cache_object/"
Remove all the top-level snapshots and their dependent snapshots (this can
be done with a single command):
# vxedit -g diskgroup -r rm snapvol ...
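A sketch of the remaining step, which stops the cache object and then removes it
together with its cache volume (the vxcache stop keyword is an assumption here;
the vxedit -r rm form matches the previous step):
# vxcache -g diskgroup stop cache_object
# vxedit -g diskgroup -r rm cache_object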
The recommended approach to performing volume backup from the command line,
or from a script, is to use the vxsnap command. The vxassist snapstart,
snapwait, and snapshot commands are supported for backward compatibility.
The vxassist snapshot procedure consists of two steps:
The vxassist snapstart step creates a write-only backup plex which gets attached
to and synchronized with the volume. When synchronized with the volume, the
backup plex is ready to be used as a snapshot mirror. The end of the update
procedure is indicated by the new snapshot mirror changing its state to SNAPDONE.
This change can be tracked by the vxassist snapwait task, which waits until at
least one of the mirrors changes its state to SNAPDONE. If the attach process fails,
the snapshot mirror is removed and its space is released.
Note: If the snapstart procedure is interrupted, the snapshot mirror is automatically
removed when the volume is started.
Once the snapshot mirror is synchronized, it continues being updated until it is
detached. You can then select a convenient time at which to create a snapshot
volume as an image of the existing volume. You can also ask users to refrain from
using the system during the brief time required to perform the snapshot (typically
less than a minute). The amount of time involved in creating the snapshot mirror
is long in contrast to the brief amount of time that it takes to create the snapshot
volume.
The online backup procedure is completed by running the vxassist snapshot
command on a volume with a SNAPDONE mirror. This task detaches the finished
snapshot (which becomes a normal mirror), creates a new normal volume and
attaches the snapshot mirror to the snapshot volume. The snapshot then becomes
a normal, functioning volume and the state of the snapshot is set to ACTIVE.
For example, to create a snapshot mirror of a volume called voldef, use the
following command:
# vxassist [-g diskgroup] snapstart voldef
If vxassist snapstart is not run in the background, it does not exit until the
mirror has been synchronized with the volume. The mirror is then ready to be
used as a plex of a snapshot volume. While attached to the original volume,
its contents continue to be updated until you take the snapshot.
Use the nmirror attribute to create as many snapshot mirrors as you need for
the snapshot volume. For a backup, you should usually only require the default
of one.
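For example, a sketch of starting two snapshot mirrors for voldef and performing
the synchronization in the background (combining the -b option with the nmirror
attribute is assumed to be supported):
# vxassist -b -g mydg snapstart nmirror=2 voldef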
It is also possible to make a snapshot plex from an existing plex in a volume.
See Converting a plex into a snapshot plex on page 375.
If required, use the nmirror attribute to specify the number of mirrors in the
snapshot volume.
For example, to create a snapshot of voldef, use the following command:
# vxassist -g mydg snapshot voldef snapvoldef
The vxassist snapshot task detaches the finished snapshot mirror, creates
a new volume, and attaches the snapshot mirror to it. This step should only
take a few minutes. The snapshot volume, which reflects the original volume
at the time of the snapshot, is now available for backing up, while the original
volume continues to be available for applications and users.
If required, you can make snapshot volumes for several volumes in a disk
group at the same time.
See Creating multiple snapshots with the vxassist command on page 376.
If you require a backup of the data in the snapshot, use an appropriate utility
or operating system command to copy the contents of the snapshot to tape,
or to some other backup medium.
When the backup is complete, you have the following choices for what to do
with the snapshot volume:
Reattach some or all of the plexes of the snapshot volume with the original
volume.
See Reattaching a snapshot volume on page 377.
If FastResync was enabled on the volume before the snapshot was taken,
this speeds resynchronization of the snapshot plexes before the backup
cycle starts again at step 3.
This may be useful if you want to use the copy for other purposes such as
testing or report generation.
dcologplex is the name of an existing DCO plex that is to be associated with the
new snapshot plex. You can use the vxprint command to find out the name of the
DCO volume.
See Adding a version 0 DCO and DCO volume on page 380.
For example, to make a snapshot plex from the plex trivol-03 in the 3-plex volume
trivol, you would use the following command:
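A sketch of that command, assuming the generic form described above, in which the
-o dcoplex= option names the DCO plex to associate with the new snapshot plex:
# vxplex -g mydg -o dcoplex=trivol_dco_03 convert \
state=SNAPDONE trivol-03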
Here the DCO plex trivol_dco_03 is specified as the DCO plex for the new
snapshot plex.
To convert an existing plex into a snapshot plex in the SNAPDONE state for a
volume on which Non-Persistent FastResync is enabled, use the following command:
# vxplex [-g diskgroup] convert state=SNAPDONE plex
A converted plex is in the SNAPDONE state, and can be used immediately to create
a snapshot volume.
Note: The last complete regular plex in a volume, an incomplete regular plex, or a
dirty region logging (DRL) log plex cannot be converted into a snapshot plex.
See Third-mirror break-off snapshots on page 313.
By default, the first snapshot volume is named SNAP-volume, and each subsequent
snapshot is named SNAPnumber-volume, where number is a unique serial number,
and volume is the name of the volume for which the snapshot is being taken. This
default pattern can be overridden by using the option -o name=pattern, as
described on the vxassist(1M) manual page. For example, the pattern SNAP%v-%d
reverses the order of the number and volume components in the name.
To snapshot all the volumes in a single disk group, specify the option -o allvols
to vxassist:
# vxassist -g diskgroup -o allvols snapshot
This operation requires that all snapstart operations are complete on the volumes.
It fails if any of the volumes in the disk group do not have a complete snapshot plex
in the SNAPDONE state.
Note: The vxsnap command provides similar functionality for creating multiple
snapshots.
To merge a specified number of plexes from the snapshot volume with the original
volume, use the following command:
# vxassist [-g diskgroup] snapback nmirror=number snapshot
Here the nmirror attribute specifies the number of mirrors in the snapshot volume
that are to be re-attached.
Once the snapshot plexes have been reattached and their data resynchronized,
they are ready to be used in another snapshot operation.
By default, the data in the original volume is used to update the snapshot plexes
that have been re-attached. To copy the data from the replica volume instead, use
the following command:
# vxassist [-g diskgroup] -o resyncfromreplica snapback snapshot
Warning: Always unmount the snapshot volume (if this is mounted) before performing
a snapback. In addition, you must unmount the file system corresponding to the
primary volume before using the resyncfromreplica option.
Use the following vxprint commands to discover the names of the snapshot
volume's data change object (DCO) and DCO volume:
# DCONAME=`vxprint [-g diskgroup] -F%dco_name snapshot`
# DCOVOL=`vxprint [-g diskgroup] -F%log_vol $DCONAME`
Use the vxassist mirror command to create mirrors of the existing snapshot
volume and its DCO volume:
# vxassist -g diskgroup mirror snapshot
# vxassist -g diskgroup mirror $DCOVOL
The new plex in the DCO volume is required for use with the new data plex in
the snapshot.
Use the vxprint command to find out the name of the additional snapshot
plex:
# vxprint -g diskgroup snapshot
Use the vxprint command to find out the record ID of the additional DCO
plex:
# vxprint -g diskgroup -F%rid $DCOVOL
Use the vxedit command to set the dco_plex_rid field of the new data plex
to the name of the new DCO plex:
# vxedit -g diskgroup set dco_plex_rid=dco_plex_rid new_plex
The new data plex is now ready to be used to perform a snapback operation.
Output such as the following is produced by the vxassist snapprint command for
two volumes, v1 and v2:

V  NAME         USETYPE    LENGTH
SS SNAPOBJ      NAME       LENGTH   %DIRTY
DP NAME         VOLUME     LENGTH   %DIRTY

v  v1           fsgen      20480
ss SNAP-v1_snp  SNAP-v1    20480    4
dp v1-01        v1         20480    0
dp v1-02        v1         20480    0

v  SNAP-v1      fsgen      20480
ss v1_snp       v1         20480    0

V  NAME         USETYPE    LENGTH
SS SNAPOBJ      NAME       LENGTH   %DIRTY
DP NAME         VOLUME     LENGTH   %DIRTY

v  v2           fsgen      20480
ss --           SNAP-v2    20480    0
dp v2-01        v2         20480    0

v  SNAP-v2      fsgen      20480
ss --           v2         20480    0
Ensure that the disk group containing the existing volume has at least disk
group version 90. To check the version of a disk group:
# vxdg list diskgroup
Add a DCO and DCO volume to the existing volume (which may already have
dirty region logging (DRL) enabled):
# vxassist [-g diskgroup] addlog volume logtype=dco \
[ndcomirror=number] [dcolen=size] [storage_attributes]
For non-layered volumes, the default number of plexes in the mirrored DCO
volume is equal to the lesser of the number of plexes in the data volume or 2.
For layered volumes, the default number of DCO plexes is always 2. If required,
use the ndcomirror attribute to specify a different number. It is recommended
that you configure as many DCO plexes as there are existing data and snapshot
plexes in the volume. For example, specify ndcomirror=3 when adding a DCO
to a 3-way mirrored volume.
The default size of each plex is 132 blocks. You can use the dcolen attribute
to specify a different size. If specified, the size of the plex must be an integer
multiple of 33 blocks from 33 up to a maximum of 2112 blocks.
You can specify vxassist-style storage attributes to define the disks that can
or cannot be used for the plexes of the DCO volume.
See Specifying storage for version 0 DCO plexes on page 381.
placed on disks which are used to hold the plexes of other volumes, this may cause
problems when you subsequently attempt to move volumes into other disk groups.
You can use storage attributes to specify explicitly which disks to use for the DCO
plexes. If possible, specify the same disks as those on which the volume is
configured.
For example, to add a DCO object and DCO volume with plexes on mydg05 and
mydg06, and a plex size of 264 blocks to the volume, myvol, in the disk group, mydg,
use the following command:
# vxassist -g mydg addlog myvol logtype=dco dcolen=264 mydg05 mydg06
To view the details of the DCO object and DCO volume that are associated with a
volume, use the vxprint command. The following is partial vxprint output for the
volume named vol1 (the TUTIL0 and PUTIL0 columns are omitted for clarity):
TY  NAME          ASSOC         KSTATE    LENGTH   PLOFFS   STATE ...
v   vol1          fsgen         ENABLED   1024     -        ACTIVE
pl  vol1-01       vol1          ENABLED   1024     -        ACTIVE
sd  disk01-01     vol1-01       ENABLED   1024     0        -
pl  vol1-02       vol1          ENABLED   1024     -        ACTIVE
sd  disk02-01     vol1-02       ENABLED   1024     0        -
dc  vol1_dco      vol1          -         -        -        -
v   vol1_dcl      gen           ENABLED   132      -        ACTIVE
pl  vol1_dcl-01   vol1_dcl      ENABLED   132      -        ACTIVE
sd  disk03-01     vol1_dcl-01   ENABLED   132      0        -
pl  vol1_dcl-02   vol1_dcl      ENABLED   132      -        ACTIVE
sd  disk04-01     vol1_dcl-02   ENABLED   132      0        -
In this output, the DCO object is shown as vol1_dco, and the DCO volume as
vol1_dcl with 2 plexes, vol1_dcl-01 and vol1_dcl-02.
If required, you can use the vxassist move command to relocate DCO plexes to
different disks. For example, the following command moves the plexes of the DCO
volume, vol1_dcl, for volume vol1 from disk03 and disk04 to disk07 and disk08.
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
# vxassist -g mydg move vol1_dcl \!disk03 \!disk04 disk07 disk08
This completely removes the DCO object, DCO volume and any snap objects. It
also has the effect of disabling FastResync for the volume.
Alternatively, you can use the vxdco command to the same effect:
# vxdco [-g diskgroup] [-o rm] dis dco_obj
The default name of the DCO object, dco_obj, for a volume is usually formed by
appending the string _dco to the name of the parent volume. To find out the name
of the associated DCO object, use the vxprint command on the volume.
To dissociate, but not remove, the DCO object, DCO volume and any snap objects
from the volume, myvol, in the disk group, mydg, use the following command:
# vxdco -g mydg dis myvol_dco
This form of the command dissociates the DCO object from the volume but does
not destroy it or the DCO volume. If the -o rm option is specified, the DCO object,
DCO volume and its plexes, and any snap objects are also removed.
Warning: Dissociating a DCO and DCO volume disables Persistent FastResync on
the volume. A full resynchronization of any remaining snapshots is required when
they are snapped back.
See the vxassist(1M) manual page.
See the vxdco(1M) manual pages.
For example, to reattach the DCO object, myvol_dco, to the volume, myvol, use
the following command:
# vxdco -g mydg att myvol myvol_dco
Chapter 17
Administering Storage Checkpoints
This chapter includes the following topics:
The ability for data to be immediately writeable by preserving the file system
metadata, the directory hierarchy, and user data.
Storage Checkpoints are actually data objects that are managed and controlled by
the file system. You can create, remove, and rename Storage Checkpoints because
they are data objects with associated names.
Create a stable image of the file system that can be backed up to tape.
Provide a mounted, on-disk backup of the file system so that end users can
restore their own files in the event of accidental deletion. This is especially useful
in a home directory, engineering, or email environment.
Create an on-disk backup of the file system that can be used in addition to a
traditional tape-based backup to provide faster backup and restore capabilities.
approximate the amount of space required by the metadata using a method that
depends on the disk layout version of the file system.
For disk layout Version 7, multiply the number of inodes by 1 byte, and add 1 or 2
megabytes to get the approximate amount of space required. You can determine
the number of inodes with the fsckptadm utility.
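For example, on a Version 7 file system that contains approximately 10 million
inodes, the initial space requirement for a Storage Checkpoint is roughly 10 MB
for the metadata plus 1 to 2 MB of overhead, or about 11 MB to 12 MB in total.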
Use the fsvoladm command to determine if the volume set has enough free space.
See the fsvoladm(1M) manual page.
The following example lists the volume sets and displays the storage sizes in
human-friendly units:
# fsvoladm -H list /mnt0
devid    size     used     avail    name
0        20 GB    10 GB    10 GB    vol1
1        30 TB    10 TB    20 TB    vol2
To mount a Storage Checkpoint of a file system, first mount the file system itself.
Warning: If you create a Storage Checkpoint for backup purposes, do not mount it
as a writable Storage Checkpoint. You will lose the point-in-time image if you
accidentally write to the Storage Checkpoint.
If older Storage Checkpoints already exist, write activity to a writable Storage
Checkpoint can generate copy operations and increased space usage in the older
Storage Checkpoints.
A Storage Checkpoint is mounted on a special pseudo device. This pseudo device
does not exist in the system name space; the device is internally created by the
system and used while the Storage Checkpoint is mounted. The pseudo device is
removed after you unmount the Storage Checkpoint. A pseudo device name is
formed by appending the Storage Checkpoint name to the file system device name
using the colon character (:) as the separator.
For example, if a Storage Checkpoint named may_23 belongs to the file system
residing on the special device /dev/vx/dsk/fsvol/vol1, the Storage Checkpoint
pseudo device name is:
/dev/vx/dsk/fsvol/vol1:may_23
Note: The vol1 file system must already be mounted before the Storage
Checkpoint can be mounted.
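For example, a command of the following form mounts the Storage Checkpoint read-only
at the /fsvol_may_23 mount point. This is an illustrative sketch; the ro option is a
standard mount option, and the ckpt= option is the same one shown in the cluster
mount example later in this section:
# mount -t vxfs -o ro,ckpt=may_23 /dev/vx/dsk/fsvol/vol1:may_23 /fsvol_may_23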
To mount this Storage Checkpoint automatically when the system starts up, put
the following entries in the /etc/fstab file:
Device-Special-File             Mount-Point     fstype   options       backupfrequency   passnumber
/dev/vx/dsk/fsvol/vol1          /fsvol          vxfs     defaults      0                 0
/dev/vx/dsk/fsvol/vol1:may_23   /fsvol_may_23   vxfs     ckpt=may_23   0                 0
To mount a Storage Checkpoint of a cluster file system, you must also use the
-o cluster option:
# mount -t vxfs -o cluster,ckpt=may_23 \
/dev/vx/dsk/fsvol/vol1:may_23 /fsvol_may_23
You can only mount a Storage Checkpoint cluster-wide if the file system that
the Storage Checkpoint belongs to is also mounted cluster-wide. Similarly, you
can only mount a Storage Checkpoint locally if the file system that the Storage
Checkpoint belongs to is mounted locally.
You can unmount Storage Checkpoints using the umount command.
See the umount(1M) manual page.
Storage Checkpoints can be unmounted by the mount point or pseudo device name:
# umount /fsvol_may_23
# umount /dev/vx/dsk/fsvol/vol1:may_23
Note: You do not need to run the fsck utility on Storage Checkpoint pseudo devices
because pseudo devices are part of the actual file system.
If all of the older Storage Checkpoints in a file system are nodata Storage
Checkpoints, use the synchronous method to convert a data Storage Checkpoint
to a nodata Storage Checkpoint. If an older data Storage Checkpoint exists in the
file system, use the asynchronous method to mark the Storage Checkpoint you
want to convert for a delayed conversion. In this case, the actual conversion will
continue to be delayed until the Storage Checkpoint becomes the oldest Storage
Checkpoint in the file system, or all of the older Storage Checkpoints have been
converted to nodata Storage Checkpoints.
Note: You cannot convert a nodata Storage Checkpoint to a data Storage Checkpoint
because a nodata Storage Checkpoint only keeps track of the location of block
changes and does not save the content of file data blocks.
/mnt0@5_30pm
Examine the content of the original file and the Storage Checkpoint file:
# cat /mnt0/file
hello, world
# cat /mnt0@5_30pm/file
hello, world
Examine the content of the original file and the Storage Checkpoint file. The
original file contains the latest data while the Storage Checkpoint file still
contains the data at the time of the Storage Checkpoint creation:
# cat /mnt0/file
goodbye
# cat /mnt0@5_30pm/file
hello, world
Examine the content of both files. The original file must contain the latest data:
# cat /mnt0/file
goodbye
You can traverse and read the directories of the nodata Storage Checkpoint;
however, the files contain no data, only markers to indicate which block of the
file has been changed since the Storage Checkpoint was created:
# ls -l /mnt0@5_30pm/file
-rw-r--r--   1 root
# cat /mnt0@5_30pm/file
cat: /mnt0@5_30pm/file: Input/output error
Create four data Storage Checkpoints on this file system, note the order of
creation, and list them:
# fsckptadm create oldest /mnt0
# fsckptadm create older /mnt0
# fsckptadm create old /mnt0
# fsckptadm create latest /mnt0
# fsckptadm list /mnt0
/mnt0
latest:
   ctime   =
   mtime   =
   flags   =
old:
   ctime   =
   mtime   =
   flags   =
older:
   ctime   =
   mtime   =
   flags   =
oldest:
   ctime   =
   mtime   =
   flags   =
You can instead convert the latest Storage Checkpoint to a nodata Storage
Checkpoint in a delayed or asynchronous manner.
# fsckptadm set nodata latest /mnt0
List the Storage Checkpoints, as in the following example. You will see that
the latest Storage Checkpoint is marked for conversion in the future.
# fsckptadm list /mnt0
/mnt0
latest:
ctime
mtime
flags
old:
ctime
mtime
flags
older:
ctime
mtime
flags
oldest:
ctime
mtime
flags
Checkpoint.
To create a delayed nodata Storage Checkpoint
Note: After you remove the older and old Storage Checkpoints, the latest
Storage Checkpoint is automatically converted to a nodata Storage Checkpoint
because the only remaining older Storage Checkpoint (oldest) is already a
nodata Storage Checkpoint:
By default, Symantec Storage Foundation (SF) does not make inode numbers
unique. However, you can specify the uniqueino mount option to enable the use
of unique 64-bit inode numbers. You cannot change this option during a remount.
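For example, a mount command of the following form (reusing the device and mount
point from the earlier examples in this chapter) enables unique 64-bit inode numbers;
treat this as an illustrative sketch rather than a required invocation:
# mount -t vxfs -o uniqueino /dev/vx/dsk/fsvol/vol1 /fsvol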
The following example enables Storage Checkpoint visibility by causing all clones
to be automounted as read/write:
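A minimal sketch, assuming the ckptautomnt file system tunable governs clone
automounting and that /mnt0 is the mount point of the primary file system (see the
vxtunefs(1M) manual page to confirm the tunable and its values):
# vxtunefs -o ckptautomnt=rw /mnt0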
an entire file system. Restoration from Storage Checkpoints can also help recover
incorrectly modified files, but typically cannot recover from hardware damage or
other file system integrity problems.
Note: For hardware or other integrity problems, Storage Checkpoints must be
supplemented by backups from other media.
Files can be restored by copying the entire file from a mounted Storage Checkpoint
back to the primary fileset. To restore an entire file system, you can designate a
mountable data Storage Checkpoint as the primary fileset using the fsckpt_restore
command.
See the fsckpt_restore(1M) manual page.
When using the fsckpt_restore command to restore a file system from a Storage
Checkpoint, all changes made to that file system after that Storage Checkpoint's
creation date are permanently lost. The only Storage Checkpoints and data
preserved are those that were created at the same time, or before, the selected
Storage Checkpoint's creation. The file system cannot be mounted at the time that
fsckpt_restore is invoked.
Note: Individual files can also be restored very efficiently by applications using the
fsckpt_fbmap(3) library function to restore only modified portions of a file's data.
You can restore from a Storage Checkpoint only to a file system that has disk layout
Version 6 or later.
The following example restores a file, file1.txt, which resides in your home
directory, from the Storage Checkpoint CKPT1 to the device
/dev/vx/dsk/dg1/vol-01. The mount point for the device is /home.
To restore a file from a Storage Checkpoint
me   staff   14910   Mar 4   17:09   file1.txt

1 me   staff   14910   Mar 4   18:21   file1.txt
The following example restores a file system from the Storage Checkpoint CKPT3.
The filesets listed before the restoration show an unnamed root fileset and six
Storage Checkpoints.
UNNAMED   CKPT6   CKPT5   CKPT4   CKPT3   CKPT2   CKPT1
In this example, select the Storage Checkpoint CKPT3 as the new root fileset:
Select Storage Checkpoint for restore operation
or <Control/D> (EOF) to exit
or <Return> to list Storage Checkpoints: CKPT3
CKPT3:
   ctime   = Thu 08 May 2004 06:28:31 PM PST
   mtime   = Thu 08 May 2004 06:28:36 PM PST
   flags   = largefiles
UX:vxfs fsckpt_restore: WARNING: V-3-24640: Any file system
changes or Storage Checkpoints made after
Thu 08 May 2004 06:28:31 PM PST will be lost.
If the filesets are listed at this point, it shows that the former UNNAMED root
fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now the
primary fileset. CKPT3 is now the fileset that will be mounted by default.
CKPT3   CKPT2   CKPT1
soft limit
Must be lower than the hard limit. If a soft limit is exceeded, no new
Storage Checkpoints can be created. The number of blocks used must
return below the soft limit before more Storage Checkpoints can be
created. An alert and console message are generated.
In case of a hard limit violation, various solutions are possible, depending on
whether or not you specify the -f option for the fsckptadm utility.
See the fsckptadm(1M) manual page.
Specifying or not specifying the -f option has the following effects:
Chapter 18
Administering FileSnaps
This chapter includes the following topics:
FileSnap creation
Using FileSnaps
Comparison of the logical size output of the fsadm -S shared, du, and df
commands
FileSnap creation
A single thread creating FileSnaps of the same file can create over ten thousand
snapshots per minute. FileSnaps can be used for fast provisioning of new virtual
machines by cloning a virtual machine golden image, where the golden image is
stored as a file in a VxFS file system or Symantec Storage Foundation Cluster File
System High Availability (SFCFSHA) file system, which is used as a data store for
a virtual environment.
The new file has the same attributes as the old file and shares all of the old file's
extents.
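A minimal sketch of creating a FileSnap with the vxfilesnap command, using
hypothetical file names within a single VxFS file system mounted at /mnt1:
# vxfilesnap /mnt1/master_image /mnt1/clone1_image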
An application that uses this namespace extension should check if the file created
has the namespace extension, such as file1::snap:vxfs: instead of file1. This
indicates the namespace extension is not supported, either because the file system
exported over NFS is not VxFS, the file system is an older version of VxFS, or the
file system does not have a license for FileSnaps.
As with the vxfilesnap command, FileSnaps must be made within a single file set.
Using FileSnaps
Table 18-1 provides a list of Veritas File System (VxFS) commands that enable
you to administer FileSnaps.
Table 18-1
Command       Functionality
fiostat
fsadm         The fsadm command has the -S option to report shared block usage in
              the file system. You can use this option to find out the storage
              savings achieved through FileSnaps and how much real storage is
              required if all of the files are full copies.
              See the fsadm_vxfs(1M) manual page.
fsmap         The fsmap command has the -c option to report the count of the total
              number of physical blocks consumed by a file, and how many of those
              blocks might not be private to a given file.
              See the fsmap(1) manual page.
mkfs          Use the mkfs command to make a disk layout Version 10 file system by
              specifying -o version=10. VxFS internally maintains a list of delayed
              operations on shared extent references and the size of this list
              (rcqsize) defaults to a value that is a function of the file system
              size, but can be changed when the file system is made.
              See the mkfs_vxfs(1M) manual page.
Table 18-1 (continued)
Command       Functionality
vxfilesnap
vxtunefs
The master_image file can be presented as a disk device to the virtual machine for
installing the operating system. Once the operating system is installed and
configured, the file is ready for snapshots.
The last command creates a 16 GB hole at the end of the file. Since holes do not
have any extents allocated, writes to the hole do not need to be unshared.
Comparison of the logical size output of the fsadm -S shared, du, and df commands
total 1108
drwxr-xr-x   2 root  root      96 Jul  6 00:41 lost+found
-rw-r--r--   1 root  root  282686 Jul  6 00:43 tfile1
-rw-r--r--   1 root  root  282686 Jul  6 00:44 stfile1

# ls -ltri
total 1108
3 drwxr-xr-x   2 root  root      96 Jul  6 00:41 lost+found
4 -rw-r--r--   1 root  root  282686 Jul  6 00:43 tfile1
5 -rw-r--r--   1 root  root  282686 Jul  6 00:44 stfile1

Used
83590

Size(KB)    Available(KB)   Used(KB)   Logical_Size(KB)   Space_Saved(KB)
52428800    49073642        83590      83590              0
# du -sk /mnt
0       /mnt

Used
83600       /mnt

Size(KB)    Available(KB)   Used(KB)   Logical_Size(KB)   Space_Saved(KB)
52428800    49073632        83600      83610              10
Chapter 19
the snapshot was created. The snapread ioctl takes arguments similar to those of
the read system call and returns the same results that are obtainable by performing
a read on the disk device containing the snapped file system at the exact time the
snapshot was created. In both cases, however, the snapshot file system provides
a consistent image of the snapped file system with all activity complete; it is an
instantaneous read of the entire file system. This is much different from the results
that would be obtained by a dd or read command on the disk device of an active
file system.
A super-block
A bitmap
A blockmap
The following figure shows the disk structure of a snapshot file system.
Figure 19-1     Disk structure of a snapshot file system (super-block, bitmap, blockmap, and data blocks)
The super-block is similar to the super-block of a standard VxFS file system, but
the magic number is different and many of the fields are not applicable.
The bitmap contains one bit for every block on the snapped file system. Initially, all
bitmap entries are zero. A set bit indicates that the appropriate block was copied
from the snapped file system to the snapshot. In this case, the appropriate position
in the blockmap references the copied block.
The blockmap contains one entry for each block on the snapped file system. Initially,
all entries are zero. When a block is copied from the snapped file system to the
snapshot, the appropriate entry in the blockmap is changed to contain the block
number on the snapshot file system that holds the data from the snapped file system.
The data blocks are filled by data copied from the snapped file system, starting
from the beginning of the data block area.
Table 19-1
Snapshots                                        Storage Checkpoints
Are read-only
Are transient                                    Are persistent
Track changed blocks on the file system level    Track changed blocks on each file in
                                                 the file system
Storage Checkpoints also serve as the enabling technology for two other Veritas
features: Block-Level Incremental Backups and Storage Rollback, which are used
extensively for backing up databases.
See About Storage Checkpoints on page 385.
Section
Chapter 20
Understanding storage optimization solutions in Storage Foundation
This chapter includes the following topics:
About SmartMove
About reclaiming space on Solid State Devices (SSDs) with the TRIM operation
The two types of thin provisioned LUNs are thin-capable or thin-reclaim capable.
Both types of LUNs provide the capability to allocate storage as needed from the
free pool. For example, storage is allocated when a file system creates or changes
a file. However, this storage is not released to the free pool when files get deleted.
Therefore, thin-provisioned LUNs can become 'thick' over time, as the file system
starts to include unused free space where the data was deleted. Thin-reclaim
capable LUNs address this problem with the ability to release the once-used storage
to the pool of free storage. This operation is called thin storage reclamation.
The thin-reclaim capable LUNs do not perform the reclamation automatically. The
server using the LUNs must initiate the reclamation. The administrator can initiate
a reclamation manually, or with a scheduled reclamation operation.
Storage Foundation provides several features to support thin provisioning and thin
reclamation, and to optimize storage use on thin provisioned arrays.
See About SmartMove on page 420.
Table 20-1
Feature             Description    Benefits
SmartMove
Thin Reclamation
About SmartMove
Storage Foundation provides the SmartMove utility to optimize move and copy
operations. The SmartMove utility leverages the knowledge that Veritas File System
(VxFS) has of the Veritas Volume Manager (VxVM) storage. VxFS lets VxVM know
which blocks have data. When VxVM performs an operation that copies or moves
data, SmartMove enables the operation to only copy or move the blocks used by
the file system. This capability improves performance for synchronization, mirroring,
and copying operations because it reduces the number of blocks that are copied.
SmartMove only works with VxFS file systems that are mounted on VxVM volumes.
If a file system is not mounted, the utility has no visibility into the usage on the file
system.
SmartMove is not used for volumes that have instant snapshots.
The SmartMove operation also can be used to migrate data from thick storage to
thin-provisioned storage. Because SmartMove copies only blocks that are in use
by the file system, the migration process results in a thin-provisioned LUN.
of data are no longer in use and can be erased. The SSDs erase the unused blocks
before the blocks are required for reuse, which improves the performance of the
future write I/Os to the SSD. The TRIM operation also reduces wear leveling and
fragmentation, because unused blocks are erased. The unused data does not get
moved during a garbage collection or a cleaning cycle.
In this release, SF supports the TRIM operation for Fusion-io devices for Red Hat
Linux 6.0 (RHEL6) and SUSE Linux Enterprise Server 11 (SLES11).
See the Fusion-io documentation for the firmware version requirements for TRIM
support.
The SF components, Veritas File System (VxFS) and Veritas Volume Manager
(VxVM), use the TRIM operation to free up the blocks that do not contain valid data.
The TRIM capability is similar to thin reclamation, and is performed with the same
commands. The default SF reclamation commands perform TRIM for SSDs and
thin reclamation for Thin Reclaimable LUNs. For file systems and volumes that use
both SSDs and Thin Reclaimable LUNs, you can choose whether SF performs only
a TRIM operation, only a thin reclamation, or both.
See Reclaiming space on a disk, disk group, or enclosure on page 438.
See Reclaiming space on a file system on page 436.
To display information about SSDs, use the vxdisk -o ssd list command. SF
can also discover and display the disk space usage for Veritas File System (VxFS)
file systems on SSDs. The VxFS file systems must be mounted on Veritas Volume
Manager (VxVM) volumes. Use the vxdisk -o ssd -o fssize list command.
See the vxdisk(1M) manual page.
is a balance between how much space can be reclaimed, and how much time the
reclaim operation will take.
The following considerations may apply:
For a VxFS file system mounted on a VxVM volume, compare the file system
usage to the actual physical allocation size to determine if a reclamation is
desirable. If the file system usage is much smaller than the physical allocation
size, it indicates that a lot of space can potentially be reclaimed. You may want
to trigger a file system reclamation. If the file system usage is close to the
physical allocation size, it indicates that the physical allocation is being used
well. You may not want to trigger a reclamation.
See Displaying VxFS file system usage on thin reclamation LUNs on page 434.
The array may provide notification when the storage pool usage has reached a
certain threshold. You can evaluate whether you can reclaim space with Storage
Foundation to free more space in the storage pool.
Deleted volumes are reclaimed automatically. You can customize the schedule
for automatic reclamation.
See Configuring automatic reclamation on page 442.
Deleting a volume.
Removing a mirror.
Shrinking a volume.
Removing a log.
The process of reclaiming storage on an array can be intense on the array. To avoid
any effect on regular I/Os to the array, Storage Foundation performs the reclaim
operation asynchronously. The disk is flagged as pending reclamation. The vxrelocd
(or recovery) daemon asynchronously reclaims the disks marked for reclamation
at a future time. By default, the vxrelocd daemon runs every day at 22:10 hours,
and reclaims storage on the deleted volumes or plexes that are one day old.
To display the disks that are pending reclamation, use the following command:
# vxprint -z
Chapter 21
CURRENT-VALUE     DEFAULT-VALUE
all               all
If the output shows that the current value is none, configure SmartMove for all
disks or thin disks.
See Configuring SmartMove on page 614.
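A minimal sketch of checking and changing this setting, assuming the usefssmartmove
system default is the value being referred to here (see the vxdefault(1M) manual
page for the supported values):
# vxdefault list
# vxdefault set usefssmartmove all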
Add the new, thin LUNs to the existing disk group. Enter the following
commands:
# vxdisksetup -i da_name
# vxdg -g datadg adddisk da_name
Add the new, thin LUNs as a new plex to the volume. On a thin LUN, when
you create a mirrored volume or add a mirror to an existing LUN, VxVM creates
a Data Change Object (DCO) by default. The DCO helps prevent the thin LUN
from becoming thick, by eliminating the need for full resynchronization of the
mirror.
Note: The VxFS file system must be mounted to get the benefits of the SmartMove feature.
The following methods are available to add the LUNs:
Specify the vxassist command options for faster completion. The -b option
copies blocks in the background. The following command improves I/O
throughput:
# vxassist -b -oiosize=1m -t thinmig -g datadg mirror \
datavol da_name
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
211           ATCOPY/R    10.64%  0/20971520/2232320 PLXATT vol1 vol1-02 xivdg smartmove
212           ATCOPY/R    09.88%  0/20971520/2072576 PLXATT vol1 vol1-03 xivdg smartmove
219           ...                                                        xivdg smartmove
Optionally, test the performance of the new LUNs before removing the old
LUNs.
To test the performance, use the following steps:
TY  NAME              ASSOC            KSTATE   LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg  datadg            datadg           -        -         -       -         -       -
dm  THINARRAY0_02     THINARRAY0_02    -        83886080  -       -         -       -
dm  STDARRAY1_01      STDARRAY1_01     -        41943040  -       -OHOTUSE  -       -
v   datavol           fsgen            ENABLED  41943040  -       ACTIVE    -       -
pl  datavol-01        datavol          ENABLED  41943040  -       ACTIVE    -       -
sd  STDARRAY1_01-01   datavol-01       ENABLED  41943040  0       -         -       -
pl  datavol-02        datavol          ENABLED  41943040  -       ACTIVE    -       -
sd  THINARRAY0_02-01  datavol-02       ENABLED  41943040  0       -         -       -
The example output indicates that the thin LUN corresponds to plex
datavol-02.
Grow the file system and volume to use all of the larger thin LUN:
# vxresize -g datadg -x datavol 40g da_name
Chapter 22
For a list of the storage arrays that support thin reclamation, see the Symantec
Hardware Compatibility List (HCL):
http://www.symantec.com/docs/TECH211575
Thin reclamation is not supported for boot devices.
You can use the thin reclamation feature in the following ways:
Perform the reclamation operation on a disk group, LUN, or enclosure using the
vxdisk command.
See Reclaiming space on a disk, disk group, or enclosure on page 438.
Perform the reclamation operation on a Veritas File System (VxFS) file system
using the fsadm command.
See Reclaiming space on a file system on page 436.
operations, VxVM creates a task for a reclaim operation. You can monitor the reclaim
operation with the vxtask command.
See Monitoring Thin Reclamation using the vxtask command on page 441.
performing thin reclamation, determine whether the system recognizes the LUN as
a thinrclm LUN.
To identify devices on a host that are known to have the thin or thinrclm attributes,
use the vxdisk -o thin list command. The vxdisk -o thin list command
also reports on the size of the disk, and the physical space that is allocated on the
array.
To identify thin and thinrclm LUNs
To identify all of the thin or thinrclm LUNs that are known to a host, use the
following command:
# vxdisk -o thin list
DEVICE           SIZE(MB)  PHYS_ALLOC(MB)  GROUP   TYPE      RECLAIM_CMD
xiv0_6695        16384     30              dg1     thinrclm  WRITE_SAME
xiv0_6696        16384     30              dg1     thinrclm  WRITE_SAME
xiv0_6697        16384     30              dg1     thinrclm  WRITE_SAME
xiv0_6698        16384     30              dg1     thinrclm  WRITE_SAME
xiv0_6699        16384     30              dg1     thinrclm  WRITE_SAME
3pardata0_5074   2048      2043            vvrdg   thinrclm  WRITE_SAME
3pardata0_5075   2048      2043            vvrdg   thinrclm  WRITE_SAME
3pardata0_5076   2048      1166            vvrdg   thinrclm  WRITE_SAME
3pardata0_5077   2048      2043            vvrdg   thinrclm  WRITE_SAME
3pardata0_5081   2048      1092            vvrdg   thinrclm  WRITE_SAME
In the output, the SIZE column shows the size of the disk. The PHYS_ALLOC
column shows the physical allocation on the array side. The TYPE indicates
whether the array is thin or thinrclm. The RECLAIM_CMD column displays
which reclamation method that DMP uses.
See the vxdisk(1m) manual page.
To display detailed information about the thin reclamation methods for a device,
use the following command:
# vxdisk -p list xiv0_6699
DISK             : xiv0_6699
VID              : IBM
UDID             : IBM%5F2810XIV%5F0E95%5F1A2B
TP_PREF_RCLMCMD  : write_same
TP_RECLM_CMDS    : write_same, unmap
TP_ALLOC_UNIT    : 1048576
TP_MAX_REC_SIZE  : 268435456
TP_LUN_SHIFT_OF  : 0
SCSI_VERSION     : 5
SCSI3_VPD_ID     : 001738000E951A2B
REVISION         : 10.2
.
.
.
LUN_SIZE         : 33554432
NUM_PATHS        : 4
STATE            : online
The following fields show the information about the reclamation attributes:
TP_PREF_RCLMCMD
TP_RECLM_CMDS
TP_ALLOC_UNIT
TP_MAX_REC_SIZE
TP_LUN_SHIFT_OF
The -o fssize option does not display the space used by cache objects or
instant snapshots.
If the VxFS file system is not mounted, or if the device has both mounted and
unmounted VxFS file systems, no information is displayed. The file system
usage (FS_SIZE) column displays a dash (-).
You can display the size and usage for all thin or thinrclm LUNs, or specify an
enclosure name or a device name. If you specify one or more devices or enclosures,
the command displays only the space usage on the specified devices. If the specified
device is not a thin device or thinrclm device, the device is listed but the FS_SIZE
column displays a dash (-).
If a VxFS file system spans multiple devices, you must specify all of the devices to
display the entire file system usage. If you specify only some of the devices, the
file system usage is incomplete. The command ignores the file system usage on
any devices that are not specified.
Note: The command can potentially take a long time to complete depending on the
file system size, the level of fragmentation, and other factors. The command creates
a task that you can monitor with the vxtask command.
SIZE
The size of the disk; that is, the size that is presented to the
file system. This size represents the virtual size rather than
the actual physical space used on the device.
PHYS_ALLOC
FS_SIZE
GROUP
TYPE
RECLAIM_CMD
To display the file system usage on all thin or thinrclm LUNs known to the
system, use the following command:
$ vxdisk -o thin,fssize [-u unit] list
SIZE        TYPE       RECLAIM_CMD
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
16384.00m   thinrclm   WRITE_SAME
Or, to display the file system usage on a specific LUN or enclosure, use the
following form of the command:
$ vxdisk -o thin,fssize list [-u unit] disk|enclosure
For example:
$ vxdisk -o thin,fssize list emc0
DEVICE      SIZE(MB)  PHYS_ALLOC(MB)  FS_SIZE(MB)  GROUP  TYPE      RECLAIM_CMD
emc0_428a   16384     6335            610          mydg   thinrclm  WRITE_SAME
emc0_428b   16384     6335            624          mydg   thinrclm  WRITE_SAME
emc0_4287   16384     6335            617          mydg   thinrclm  WRITE_SAME
emc0_4288   16384     1584            617          mydg   thinrclm  WRITE_SAME
emc0_4289   16384     2844            1187         mydg   thinrclm  WRITE_SAME
Table 22-1
Option
Description
-o aggressive | -A
-o analyse|analyze
-o auto
-o ssd
-o thin
-P
-R
Perform space reclamation on the VxFS file system that is mounted at /mnt1:
# /opt/VRTS/bin/fsadm -R /mnt1
If the disk group contains both SSDs and Thin Reclamation LUNs, you can
use the -o ssd option to perform only the TRIM operation. Use the -o thin
option to perform only the thin reclamation.
Reclaiming space on an enclosure
You can turn off TRIM functionality or thin reclamation for a specific device with the
following command:
# vxdisk set reclaim=off disk
LOG fields
Description
START_TIME
DURATION
DISKGROUP
The disk group name associated with the subdisk. For TYPE=GAP, the
disk group value may be NULL.
VOLUME
DISK
SUBDISK
OFFSET
LEN
PA_BEFORE
PA_AFTER
TYPE
The type for the reclamation operation. The value is one of the following:
Table 22-2
LOG fields
Description
STATUS
Initiate the thin reclamation as usual, for a disk, disk group, or enclosure.
# vxdisk reclaim diskgroup | disk | enclosure
For example:
# vxdisk reclaim dg100
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
1258    -     RECLAIM/R   17.28%  65792/33447328/5834752   RECLAIM  vol4  dg100
1259    -     RECLAIM/R   25.98%  0/20971520/5447680       RECLAIM  vol2  dg100
1263    -     RECLAIM/R   25.21%  0/20971520/5287936       RECLAIM  vol3  dg100
1258    -     RECLAIM/R   25.49%  0/20971520/3248128       RECLAIM  vol4  dg100
1258    -     RECLAIM/R   27.51%  0/20971520/3252224       RECLAIM  vol4  dg100
1263    -     RECLAIM/R   25.23%  0/20971520/5292032       RECLAIM  vol3  dg100
1259    -     RECLAIM/R   26.00%  0/20971520/5451776       RECLAIM  vol2  dg100
If you have multiple tasks, you can use the following command to display the
tasks.
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
1258    -     RECLAIM/R   17.28%  65792/33447328/5834752   RECLAIM  vol4  dg100
1259    -     RECLAIM/R   25.98%  0/20971520/5447680       RECLAIM  vol2  dg100
1263    -     RECLAIM/R   25.21%  0/20971520/5287936       RECLAIM  vol3  dg100
Use the task id from the above output to monitor the task:
# vxtask monitor 1258
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
1258    -     RECLAIM/R   17.28%  65792/33447328/5834752    RECLAIM  vol4  dg100
1258    -     RECLAIM/R   32.99%  65792/33447328/11077632   RECLAIM  vol4  dg100
1258    -     RECLAIM/R   45.55%  65792/33447328/15271936   RECLAIM  vol4  dg100
1258    -     RECLAIM/R   50.00%  0/20971520/10485760       RECLAIM  vol4  dg100
.
.
.
The vxdisk reclaim command runs in another session while you run the
vxtask list command.
See the vxtask(1m) manual page.
reclaim_on_delete_wait_period
reclaim_on_delete_start_time
Change the tunables using the vxdefault command. See the vxdefault(1m)
manual page.
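A minimal sketch of adjusting one of these tunables and then confirming the current
settings, assuming the wait period is expressed in days and that vxdefault set
accepts the tunable name followed by its new value:
# vxdefault set reclaim_on_delete_wait_period 2
# vxdefault list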
Section
Maximizing storage utilization
Chapter 23
Understanding storage tiering with SmartTier
This chapter includes the following topics:
About SmartTier
About SmartTier
SmartTier matches data storage with data usage requirements. After data matching,
the data can then be relocated based upon data usage and other requirements
determined by the storage or database administrator (DBA).
As more and more data is retained over a period of time, eventually, some of that
data is needed less frequently. The data that is needed less frequently still requires
a large amount of disk space. SmartTier enables the database administrator to
manage data so that less frequently used data can be moved to slower, less
expensive disks. This also permits the frequently accessed data to be stored on
faster disks for quicker retrieval.
Tiered storage is the assignment of different types of data to different storage types
to improve performance and reduce costs. With SmartTier, storage classes are
used to designate which disks make up a particular tier. There are two common
ways of defining storage classes:
SmartTier is a VxFS feature that enables you to allocate file storage space from
different storage tiers according to rules you create. SmartTier provides a more
flexible alternative compared to current approaches for tiered storage. Static storage
tiering involves a manual one-time assignment of application files to a storage
class, which is inflexible over a long term. Hierarchical Storage Management
solutions typically require files to be migrated back into a file system name space
before an application access request can be fulfilled, leading to latency and run-time
overhead. In contrast, SmartTier allows organizations to:
Optimize storage assets by dynamically moving a file to its optimal storage tier
as the value of the file changes over time
Automate the movement of data between storage tiers without changing the
way users or applications access the files
Note: SmartTier is the expanded and renamed feature previously known as Dynamic
Storage Tiering (DST).
SmartTier policies control initial file location and the circumstances under which
existing files are relocated. These policies cause the files to which they apply to be
created and extended on specific subsets of a file system's volume set, known as
placement classes. The files are relocated to volumes in other placement classes
when they meet specified naming, timing, access rate, and storage capacity-related
conditions.
In addition to preset policies, you can manually move files to faster or slower storage
with SmartTier, when necessary. You can also run reports that list active policies,
display file activity, display volume usage, or show file statistics.
SmartTier leverages two key technologies included with Symantec Storage
Foundation: support for multi-volume file systems and automatic policy-based
placement of files within the storage managed by a file system. A multi-volume file
system occupies two or more virtual storage volumes and thereby enables a single
file system to span across multiple, possibly heterogeneous, physical storage
devices. For example, the first volume could reside on EMC Symmetrix DMX
spindles, and the second volume could reside on EMC CLARiiON spindles. By
presenting a single name space, multi-volumes are transparent to users and
applications. This multi-volume file system remains aware of each volume's identity,
making it possible to control the locations at which individual files are stored. When
combined with the automatic policy-based placement of files, the multi-volume file
system provides an ideal storage tiering facility, which moves data automatically
without any downtime requirements for applications and users alike.
In a database environment, the access age rule can be applied to some files.
However, some data files, for instance, are updated every time they are accessed
and hence access age rules cannot be used. SmartTier provides mechanisms to
relocate portions of files as well as entire files to a secondary tier.
To use SmartTier, your storage must be managed using the following features:
Volume tags
The minimum file system layout version is 7 for file level SmartTier.
The minimum file system layout version is 8 for sub-file level SmartTier.
To convert your existing VxFS system to a VxFS multi-volume file system, you must
convert a single volume to a volume set.
known as placement classes. The files are relocated to volumes in other placement
classes when they meet the specified naming, timing, access rate, and storage
capacity-related conditions.
File-based movement:
The administrator can create a file allocation policy based on filename extension
before new files are created, which will create the datafiles on the appropriate
tier during database creation.
The administrator can also create a file relocation policy for database files or
any types of files, which would relocate files based on how frequently a file is
used.
Periodically enforce the ranges of the registered sets of files based on their
relative frequency of access to a desired set of tiers
On the other hand, any subsequent new allocation on behalf of the file A adheres
to the preset SmartTier policy. Since the copy-on-write or unshare operation requires
a new allocation, the SmartTier enforcement operation complies with the preset
policy. If a write operation on the file A writes to shared extents, new allocations as
part of the copy-on-write operation are done from device 2. This behavior adheres
the preset SmartTier policy.
Chapter 24
The first volume (index 0) in a volume set must be larger than the sum of the
total volume size divided by 4000, the size of the VxFS intent log, and 1MB.
Volumes 258 MB or larger should always suffice.
Raw I/O from and to the component volumes of a volume set is supported under
certain conditions.
See Managing raw device nodes of component volumes on page 455.
Volume sets can be used in place of volumes with the following vxsnap
operations on instant snapshots: addmir, dis, make, prepare, reattach,
refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop,
syncwait, and unprepare. The third-mirror break-off usage model for full-sized
instant snapshots is supported for volume sets provided that sufficient plexes
exist for each volume in the volume set.
For more information about snapshots, see the Symantec Storage Foundation
and High Availability Solutions Solutions Guide.
A full-sized snapshot of a volume set must itself be a volume set with the same
number of volumes and the same volume index numbers as the parent. The
corresponding volumes in the parent and snapshot volume sets are also subject
to the same restrictions as apply between standalone volumes and their
snapshots.
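To create a volume set for use by VxFS, a command of the following general form is
used; this is a sketch based on the vxvset(1M) manual page:
# vxvset [-g diskgroup] -t vxfs make volset volume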
Here volset is the name of the volume set, and volume is the name of the first
volume in the volume set. The -t vxfs option creates the volume set configured
for use by VxFS. You must create the volume before running the command. vxvset
will not automatically create the volume.
For example, to create a volume set named myvset that contains the volume vol1,
in the disk group mydg, you would use the following command:
# vxvset -g mydg -t vxfs make myvset vol1
For example, to add the volume vol2, to the volume set myvset, use the following
command:
# vxvset -g mydg addvol myvset vol2
Warning: The -f (force) option must be specified if the volume being added, or any
volume in the volume set, is either a snapshot or the parent of a snapshot. Using
this option can potentially cause inconsistencies in a snapshot hierarchy if any of
the volumes involved in the operation is already in a snapshot chain.
For example, the following commands remove the volumes, vol1 and vol2, from
the volume set myvset:
# vxvset -g mydg -f rmvol myvset vol1
# vxvset -g mydg -f rmvol myvset vol2
If the name of a volume set is not specified, the command lists the details of all
volume sets in a disk group, as shown in the following example:
# vxvset -g mydg list
NAME   GROUP   NVOLS   CONTEXT
set1   mydg    3       -
set2   mydg    2       -
To list the details of each volume in a volume set, specify the name of the volume
set as an argument to the command:
# vxvset -g mydg list set1
VOLUME   INDEX   LENGTH     KSTATE    CONTEXT
vol1     0       12582912   ENABLED   -
vol2     1       12582912   ENABLED   -
vol3     2       12582912   ENABLED   -
The context field contains details of any string that the application has set up for
the volume or volume set to tag its purpose.
INDEX   LENGTH     KSTATE     CONTEXT
0       12582912   DETACHED   -
1       12582912   ENABLED    -
2       12582912   ENABLED    -
To stop and restart one or more volume sets, use the following commands:
# vxvset [-g diskgroup] stop volset ...
# vxvset [-g diskgroup] start volset ...
For the example given previously, the effect of running these commands on the
component volumes is shown below:
INDEX   LENGTH     KSTATE     CONTEXT
0       12582912   DISABLED   -
1       12582912   DISABLED   -
2       12582912   DISABLED   -

INDEX   LENGTH     KSTATE     CONTEXT
0       12582912   ENABLED    -
1       12582912   ENABLED    -
2       12582912   ENABLED    -
Access to the raw device nodes for the component volumes can be configured to
be read-only or read-write. This mode is shared by all the raw device nodes for the
component volumes of a volume set. The read-only access mode implies that any
writes to the raw device will fail, however writes using the ioctl interface or by
VxFS to update metadata are not prevented. The read-write access mode allows
direct writes via the raw device. The access mode to the raw device nodes of a
volume set can be changed as required.
The presence of raw device nodes and their access mode is persistent across
system reboots.
Note the following limitations of this feature:
Access to the raw device nodes of the component volumes of a volume set is
only supported for private disk groups; it is not supported for shared disk groups
in a cluster.
The -o makedev=on option enables the creation of raw device nodes for the
component volumes at the same time that the volume set is created. The default
setting is off.
If the -o compvol_access=read-write option is specified, direct writes are allowed
to the raw device of each component volume. If the value is set to read-only, only
reads are allowed from the raw device of each component volume.
If the -o makedev=on option is specified, but -o compvol_access is not specified,
the default access mode is read-only.
If the vxvset addvol command is subsequently used to add a volume to a volume
set, a new raw device node is created in /dev/vx/rdsk/diskgroup if the value of
the makedev attribute is currently set to on. The access mode is determined by the
current setting of the compvol_access attribute.
The following example creates a volume set, myvset1, containing the volume,
myvol1, in the disk group, mydg, with raw device access enabled in read-write mode:
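A sketch of such a command, assuming the -o makedev and -o compvol_access options
described above can be combined with the make operation:
# vxvset -g mydg -o makedev=on -o compvol_access=read-write make myvset1 myvol1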
vset_devinfo=on:read-write
The makedev attribute can be specified to the vxvset set command to create
(makedev=on) or remove (makedev=off) the raw device nodes for the component
volumes of a volume set. If any of the component volumes are open, the -f (force)
option must be specified to set the attribute to off.
Specifying makedev=off removes the existing raw device nodes from the
/dev/vx/rdsk/diskgroup directory.
If the makedev attribute is set to off, and you use the mknod command to create
the raw device nodes, you cannot read from or write to those nodes unless you set
the value of makedev to on.
The syntax for setting the compvol_access attribute on a volume set is:
# vxvset [-g diskgroup] [-f] set \
compvol_access={read-only|read-write} vset
component volumes are open, the -f (force) option must be specified to set the
attribute to read-only.
The following example sets the makedev=on and compvol_access=read-only
attributes on a volume set, myvset2, in the disk group, mydg:
# vxvset -g mydg set makedev=on compvol_access=read-only myvset2
The final example removes raw device node access for the volume set, myvset2:
# vxvset -g mydg set makedev=off myvset2
Chapter 25
Volume encapsulation
Load balancing
expensive arrays. Using the MVS administrative interface, you can control which
data goes on which volume types.
Note: Multi-volume file system support is available only on file systems using disk
layout Version 7 or later.
Controlling where files are stored can be selected at multiple levels so that
specific files or file hierarchies can be assigned to different volumes. This
functionality is available in the Veritas File System SmartTier feature.
Placing the VxFS intent log on its own volume to minimize disk head movement
and thereby increase performance.
Volume availability
MVS guarantees that a dataonly volume being unavailable does not cause a
metadataok volume to be unavailable. This allows you to mount a multi-volume file
system even if one or more component dataonly volumes are missing.
The volumes are separated by whether metadata is allowed on the volume. An I/O
error on a dataonly volume does not affect access to any other volumes. All VxFS
operations that do not access the missing dataonly volume function normally.
Some VxFS operations that do not access the missing dataonly volume and
function normally include the following:
Mounting the multi-volume file system, regardless if the file system is read-only
or read/write.
Kernel operations.
Performing a fsck replay. Logged writes are converted to normal writes if the
corresponding volume is dataonly.
Using all other commands that do not access data on a missing volume.
In contrast, operations that access the missing dataonly volume fail; for example,
reading or writing file data fails if the file's data extents were allocated from
the missing dataonly volume.
Volume availability is supported only on a file system with disk layout Version 7 or
later.
Note: Do not mount a multi-volume system with the ioerror=disable or
ioerror=wdisable mount options if the volumes have different availability properties.
Symantec recommends the ioerror=mdisable mount option both for cluster mounts
and for local mounts.
After a volume set is created, create a VxFS file system by specifying the
volume set name as an argument to mkfs:
# mkfs -t vxfs /dev/vx/rdsk/dg1/myvset
version 10 layout
134217728 sectors, 67108864 blocks of size 1024, log size 65536 blocks
rcq size 4096 blocks
largefiles supported
maxlink supported
After the file system is created, VxFS allocates space from the different volumes
within the volume set.
List the component volumes of the volume set using of the fsvoladm command:
# mount -t vxfs /dev/vx/dsk/dg1/myvset /mnt1
# fsvoladm -H list /mnt1
devid   size    used    avail   name
0       20 GB   10 GB   10 GB   vol1
1       30 TB   10 TB   20 TB   vol2
Add a new volume by adding the volume to the volume set, then adding the
volume to the file system:
# vxassist -g dg1 make vol5 50m
# vxvset -g dg1 addvol myvset vol5
# fsvoladm add /mnt1 vol5 50m
# fsvoladm -H list /mnt1
devid   size    used      avail     name
0       10 GB   74.6 MB   9.93 GB   vol1
1       20 GB   16 KB     20.0 GB   vol2
2       50 MB   16 KB     50.0 MB   vol5
# fsvoladm queryflags /mnt1
volname   flags
vol1      metadataok
vol2      dataonly
vol3      dataonly
vol4      dataonly
vol5      dataonly
Increase the metadata space in the file system using the fsvoladm command:
# fsvoladm clearflags dataonly /mnt1 vol2
# fsvoladm queryflags /mnt1
volname   flags
vol1      metadataok
vol2      metadataok
vol3      dataonly
vol4      dataonly
vol5      dataonly
Edit the /etc/fstab file to replace the volume device name, vol1, with the
volume set name, vset1.
10 Set the placement class tags on all volumes that do not have a tag:
# vxassist -g dg1 settag vol1 vxfs.placement_class.tier1
# vxassist -g dg1 settag vol2 vxfs.placement_class.tier2
Move volume 0:
# vxassist -g mydg move vol1 \!mydg
Volume encapsulation
Multi-volume file system support enables the ability to encapsulate an existing raw
volume and make the volume contents appear as a file in the file system.
Encapsulating a volume involves the following actions:
Encapsulating a volume
The following example illustrates how to encapsulate a volume.
To encapsulate a volume
INDEX   LENGTH      KSTATE    CONTEXT
0       104857600   ENABLED   -
1       104857600   ENABLED   -
Create a third volume and copy the passwd file to the third volume:
# vxassist -g dg1 make dbvol 100m
# dd if=/etc/passwd of=/dev/vx/rdsk/dg1/dbvol count=1
1+0 records in
1+0 records out
The third volume will be used to demonstrate how the volume can be accessed
as a file, as shown later.
Encapsulate dbvol:
# fsvoladm encapsulate /mnt1/dbfile dbvol 100m
# ls -l /mnt1/dbfile
-rw------- 1 root other 104857600 May 22 11:30 /mnt1/dbfile
The passwd file that was written to the raw volume is now visible in the new
file.
Note: If the encapsulated file is changed in any way, such as if the file is
extended, truncated, or moved with an allocation policy or resized volume, or
the volume is encapsulated with a bias, the file cannot be de-encapsulated.
Deencapsulating a volume
The following example illustrates how to deencapsulate a volume.
To deencapsulate a volume
INDEX   LENGTH   KSTATE   CONTEXT
0       102400   ACTIVE   -
1       102400   ACTIVE   -
2       102400   ACTIVE   -
Deencapsulate dbvol:
# fsvoladm deencapsulate /mnt1/dbfile
command reports the volume name, logical offset, and size of data extents, or the
volume name and size of indirect extents associated with a file on a multi-volume
file system. The fsvmap command maps volumes to the files that have extents on
those volumes.
See the fsmap(1M) and fsvmap(1M) manual pages.
The fsmap command requires open() permission for each file or directory specified.
Root permission is required to report the list of files with extents on a particular
volume.
The following examples show typical uses of the fsmap and fsvmap commands.
Example of using the fsmap command
Use the find command to descend directories recursively and run fsmap on
the list of files:
# find . | fsmap -
Volume   Extent Type   File
vol2     Data          ./file1
vol1     Data          ./file2

Report the extents of files that have either data or metadata on a single volume
in all Storage Checkpoints, and indicate if the volume has file system metadata:
# fsvmap -mvC /dev/vx/rdsk/fstest/testvset vol1
Meta   Structural   vol1   /.
Data   UNNAMED      vol1   /ns2
Data   UNNAMED      vol1   /ns3
Data   UNNAMED      vol1   /file1
Data   UNNAMED      vol1   /file1
Meta   UNNAMED      vol1   /file2
Load balancing
An allocation policy with the balance allocation order can be defined and assigned
to files that must have their allocations distributed at random between a set of
specified volumes. Each extent associated with these files is limited to a maximum
size that is defined as the required chunk size in the allocation policy. The distribution
of the extents is mostly equal if none of the volumes are full or disabled.
Load balancing allocation policies can be assigned to individual files or for all files
in the file system. Although intended for balancing data extents across volumes, a
load balancing policy can be assigned as a metadata policy if desired, without any
restrictions.
Note: If a file has both a fixed extent size set and an allocation policy for load
balancing, certain behavior can be expected. If the chunk size in the allocation
policy is greater than the fixed extent size, all extents for the file are limited by the
chunk size. For example, if the chunk size is 16 MB and the fixed extent size is 3
MB, then the largest extent that satisfies both the conditions is 15 MB. If the fixed
extent size is larger than the chunk size, all extents are limited to the fixed extent
size. For example, if the chunk size is 2 MB and the fixed extent size is 3 MB, then
all extents for the file are limited to 3 MB.
Rebalancing extents
Extents can be rebalanced by strictly enforcing the allocation policy. Rebalancing
is generally required when volumes are added or removed from the policy or when
the chunk size is modified. When volumes are removed from the volume set, any
extents on the volumes being removed are automatically relocated to other volumes
within the policy.
The following example redefines a policy that has four volumes by adding two new
volumes, removing an existing volume, and enforcing the policy for rebalancing.
To rebalance extents
The single volume must be the first volume in the volume set
The first volume must have sufficient space to hold all of the data and file system
metadata
The volume cannot have any allocation policies that restrict the movement of
data
Note: Steps 5, 6, 7, and 8 are optional, and can be performed if you prefer to remove
the wrapper of the volume set object.
Converting to a single volume file system
Determine if the first volume in the volume set, which is identified as device
number 0, has the capacity to receive the data from the other volumes that will
be removed:
# df /mnt1
/mnt1   (/dev/vx/dsk/dg1/vol1): 16777216 blocks   3443528 files
If the first volume does not have sufficient capacity, grow the volume to a
sufficient size:
# fsvoladm resize /mnt1 vol1 150g
Remove all volumes except the first volume in the volume set:
# fsvoladm remove /mnt1 vol2
# vxvset -g dg1 rmvol vset1 vol2
# fsvoladm remove /mnt1 vol3
# vxvset -g dg1 rmvol vset1 vol3
Before removing a volume, the file system attempts to relocate the files on that
volume. Successful relocation requires space on another volume, and no
allocation policies can be enforced that pin files to that volume. The time for
the command to complete is proportional to the amount of data that must be
relocated.
Edit the /etc/fstab file to replace the volume set name, vset1, with the volume
device name, vol1.
Chapter 26
Administering SmartTier
This chapter includes the following topics:
About SmartTier
Placement classes
Sub-file relocation
About SmartTier
Veritas File System (VxFS) uses multi-tier online storage by way of the SmartTier
feature, which functions on top of multi-volume file systems. Multi-volume file
systems are file systems that occupy two or more virtual volumes. The collection
of volumes is known as a volume set. A volume set is made up of disks or disk
array LUNs belonging to a single Veritas Volume Manager (VxVM) disk group. A
multi-volume file system presents a single name space, making the existence of
multiple volumes transparent to users and applications. Each volume retains a
Compress while relocating files from one tier to another in a multi-volume file
system
Uncompress while relocating files from one tier to another in a multi-volume file
system
Table 26-1     Supported SmartTier document type definitions
VxFS Version    DTD 1.0      DTD 1.1
5.0             Supported    Not supported
5.1             Supported    Supported
5.1 SP1         Supported    Supported
6.0             Supported    Supported
6.0.1           Supported    Supported
Placement classes
A placement class is a SmartTier attribute of a given volume in a volume set of a
multi-volume file system. This attribute is a character string, and is known as a
volume tag. A volume can have different tags, one of which can be the placement
class. The placement class tag makes a volume distinguishable by SmartTier.
Volume tags are organized as hierarchical name spaces in which periods separate
the levels of the hierarchy. By convention, the uppermost level in the volume tag
hierarchy denotes the Symantec Storage Foundation component or application that
uses a tag, and the second level denotes the tag's purpose. SmartTier recognizes
volume tags of the form vxfs.placement_class.class_name. The prefix vxfs
identifies a tag as being associated with VxFS. The placement_class string
identifies the tag as a file placement class that SmartTier uses. The class_name
string represents the name of the file placement class to which the tagged volume
belongs. For example, a volume with the tag vxfs.placement_class.tier1 belongs
to placement class tier1. Administrators use the vxassist command to associate
tags with volumes.
See the vxassist(1M) manual page.
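For example, the following command (using the disk group and volume names from the
multi-volume file system examples earlier in this guide) tags vol1 as a member of
placement class tier1:
# vxassist -g dg1 settag vol1 vxfs.placement_class.tier1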
SmartTier policy rules specify file placement in terms of placement classes rather
than in terms of individual volumes. All volumes that belong to a particular placement
class are interchangeable with respect to file creation and relocation operations.
Specifying file placement in terms of placement classes rather than in terms of
specific volumes simplifies the administration of multi-tier storage.
The administration of multi-tier storage is simplified in the following ways:
Adding or removing volumes does not require a file placement policy change.
If a volume with a tag value of vxfs.placement_class.tier2 is added to a file
system's volume set, all policies that refer to tier2 immediately apply to the
File placement policies are not specific to individual file systems. A file placement
policy can be assigned to any file system whose volume set includes volumes
tagged with the tag values (placement classes) named in the policy. This property
makes it possible for data centers with large numbers of servers to define
standard placement policies and apply them uniformly to all servers with a single
administrative action.
Administering placement policies
Analyze the impact of enforcing the file placement policy represented in the
XML policy document /tmp/policy1.xml for the mount point /mnt1:
# fsppadm analyze -F /tmp/policy1.xml -i /mnt1
previous locations, the files' new locations, and the reasons for the files' relocations.
The enforce operation creates the .__fsppadm_enforce.log file if the file does not
exist. The enforce operation appends the file if the file already exists. The
.__fsppadm_enforce.log file can be backed up or removed as with a normal file.
You can specify the -F option to specify a placement policy other than the existing
active placement policy. This option can be used to enforce the rules given in the
specified placement policy for maintenance purposes, such as for reclaiming a LUN
from the file system.
You can specify the -p option to specify the number of concurrent threads to be
used to perform the fsppadm operation. You specify the io_nice parameter as an
integer between 1 and 100, with 50 being the default value. A value of 1 specifies
1 slave and 1 master thread per mount. A value of 50 specifies 16 slaves and 1
master thread per mount. A value of 100 specifies 32 slaves and 1 master thread
per mount.
You can specify the -C option so that the fsppadm command processes only those
files that have some activity stats logged in the File Change Log (FCL) file during
the period specified in the placement policy. You can use the -C option only if the
policy's ACCESSTEMP or IOTEMP elements use the Prefer criteria.
You can specify the -T option to specify the placement classes that contain files
for the fsppadm command to sweep and relocate selectively. You can specify the
-T option only if the policy uses the Prefer criteria for IOTEMP.
See the fsppadm(1M) manual page.
The following example uses the fsppadm enforce command to enforce the file
placement policy for the file system at mount point /mnt1, and includes the access
time, modification time, and file size of the specified paths in the report, /tmp/report.
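A sketch of such a command, assuming the -a option adds those file attributes to the
report and the -r option names the report file; verify the exact options in the
fsppadm(1M) manual page:
# fsppadm enforce -a -r /tmp/report /mnt1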
Relocated   Relocated
Class       Volume      Rule     File
tier3       vole        a_to_z   /mnt1/mds1/d1/file1
tier3       vole        a_to_z   /mnt1/mds1/d1/file2
tier3       vole        a_to_z   /mnt1/mds1/d1/d2/file3
tier3       volf        a_to_z   /mnt1/mds1/d1/d2/file4

Tier Name   Size (KB)
tier4       524288
tier3       524288
tier2       524288
tier1       524288

/mnt1    42    1267
File placement policy rules
A VxFS file placement policy defines the desired placement of sets of files on the
volumes of a VxFS multi-volume file system. A file placement policy specifies the
placement classes of volumes on which files should be created, and where and
under what conditions the files should be relocated to volumes in alternate placement
classes or deleted. You can create file placement policy documents, which are XML
text files, using an XML editor, a text editor, or Veritas Operations Manager (VOM).
See the /opt/VRTSvxfs/etc/placement_policy.dtd file for the overall structure
of a placement policy.
SELECT statement
The VxFS placement policy rule SELECT statement designates the collection of files
to which a rule applies.
The following XML snippet illustrates the general form of the SELECT statement:
<SELECT>
<DIRECTORY Flags="directory_flag_value"> value </DIRECTORY>
<PATTERN> value </PATTERN>
<USER> value </USER>
<GROUP> value </GROUP>
</SELECT>
A SELECT statement may designate files by using the following selection criteria:
<DIRECTORY>
A full path name relative to the file system mount point. The
Flags="directory_flag_value" XML attribute must have a value
of nonrecursive, denoting that only files in the specified directory
are designated, or a value of recursive, denoting that files in all
subdirectories of the specified directory are designated. The Flags
attribute is mandatory.
The <DIRECTORY> criterion is optional, and may be specified more
than once.
<PATTERN>
Either an exact file name or a file name pattern using a single wildcard
character (*). The <PATTERN> criterion is optional, and may be specified
more than once.
<USER>
User name of the file's owner. The user number cannot be specified in
place of the name.
The <USER> criterion is optional, and may be specified more than once.
<GROUP>
Group name of the file's owner. The group number cannot be specified
in place of the group name.
The <GROUP> criterion is optional, and may be specified more than
once.
One or more instances of any or all of the file selection criteria may be specified
within a single SELECT statement. If two or more selection criteria of different types
are specified in a single statement, a file must satisfy one criterion of each type to
be selected.
In the following example, only files that reside in either the ora/db or the crash/dump
directory, and whose owner is either user1 or user2 are selected for possible action:
<SELECT>
<DIRECTORY Flags="nonrecursive">ora/db</DIRECTORY>
<DIRECTORY Flags="nonrecursive">crash/dump</DIRECTORY>
<USER>user1</USER>
<USER>user2</USER>
</SELECT>
A rule may include multiple SELECT statements. If a file satisfies the selection criteria
of one of the SELECT statements, it is eligible for action.
In the following example, any files owned by either user1 or user2, no matter in
which directories they reside, as well as all files in the ora/db or crash/dump
directories, no matter which users own them, are eligible for action:
<SELECT>
<DIRECTORY Flags="nonrecursive">ora/db</DIRECTORY>
<DIRECTORY Flags="nonrecursive">crash/dump</DIRECTORY>
</SELECT>
<SELECT>
<USER>user1</USER>
<USER>user2</USER>
</SELECT>
When VxFS creates new files, VxFS applies active placement policy rules in the
order of appearance in the active placement policy's XML source file. The first rule
in which a SELECT statement designates the file to be created determines the file's
placement; no later rules apply. Similarly, VxFS scans the active policy rules on
behalf of each file when relocating files, stopping the rules scan when it reaches
the first rule containing a SELECT statement that designates the file. This behavior
holds true even if the applicable rule results in no action. Take, for example, a policy
in which one rule indicates that .dat files inactive for 30 days should be relocated, and
a later rule indicates that .dat files larger than 10 megabytes should be relocated.
A 20 megabyte .dat file that has been inactive for 10 days will not be relocated
because the earlier rule applied. The later rule is never scanned.
A placement policy rule's action statements apply to all files designated by any of
the rule's SELECT statements. If an existing file is not designated by a SELECT
statement in any rule of a file system's active placement policy, then SmartTier
does not relocate or delete the file. If an application creates a file that is not
designated by a SELECT statement in a rule of the file system's active policy, then
VxFS places the file according to its own internal algorithms. If this behavior is
inappropriate, the last rule in the policy document on which the file system's active
placement policy is based should specify <PATTERN>*</PATTERN> as the only
selection criterion in its SELECT statement, and a CREATE statement naming the
desired placement class for files not selected by other rules.
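As a sketch, such a final catch-all rule could pair a wildcard SELECT statement with a CREATE statement naming a default placement class; the class name tier3 below is illustrative:

<SELECT>
<PATTERN>*</PATTERN>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</ON>
</CREATE>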
CREATE statement
A CREATE statement in a file placement policy rule specifies one or more placement
classes of volumes on which VxFS should allocate space for new files to which the
rule applies at the time the files are created. You can specify only placement classes,
not individual volume names, in a CREATE statement.
A file placement policy rule may contain at most one CREATE statement. If a rule
does not contain a CREATE statement, VxFS places files designated by the rule's
SELECT statements according to its internal algorithms. However, rules without
CREATE statements can be used to relocate or delete existing files that the rules'
SELECT statements designate.
The following XML snippet illustrates the general form of the CREATE statement:
<CREATE>
<ON Flags="flag_value">
<DESTINATION>
<CLASS> placement_class_name </CLASS>
<BALANCE_SIZE Units="units_specifier"> chunk_size
</BALANCE_SIZE>
</DESTINATION>
<DESTINATION> additional_placement_class_specifications
</DESTINATION>
</ON>
</CREATE>
The optional <BALANCE_SIZE> sub-element specifies the chunk size used to distribute
allocations across the volumes of a placement class. Using the Units attribute, the
balance size value may be specified in the following units:
bytes    Bytes
KB       Kilobytes
MB       Megabytes
GB       Gigabytes
The CREATE statement in the following example specifies that files to which the rule
applies should be created on the tier1 volume if space is available, and on one
of the tier2 volumes if not. If space allocation on tier1 and tier2 volumes is not
possible, file creation fails, even if space is available on tier3 volumes.
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier2</CLASS>
<BALANCE_SIZE Units="MB">1</BALANCE_SIZE>
</DESTINATION>
</ON>
</CREATE>
RELOCATE statement
The RELOCATE action statement of file placement policy rules specifies an action
that VxFS takes on designated files during periodic scans of the file system, and
the circumstances under which the actions should be taken. The fsppadm enforce
command is used to scan all or part of a file system for files that should be relocated
based on rules in the active placement policy at the time of the scan.
See the fsppadm(1M) manual page.
The fsppadm enforce command scans file systems in path name order. For each
file, VxFS identifies the first applicable rule in the active placement policy, as
determined by the rules' SELECT statements. If the file resides on a volume specified
in the <FROM> clause of one of the rule's RELOCATE statements, and if the file meets
the criteria for relocation specified in the statement's <WHEN> clause, the file is
scheduled for relocation to a volume in the first placement class listed in the <TO>
clause that has space available for the file. The scan that results from issuing the
fsppadm enforce command runs to completion before any files are relocated.
The following XML snippet illustrates the general form of the RELOCATE statement:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS> placement_class_name </CLASS>
</SOURCE>
<SOURCE> additional_placement_class_specifications
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS> placement_class_name </CLASS>
<BALANCE_SIZE Units="units_specifier">
chunk_size
</BALANCE_SIZE>
</DESTINATION>
<DESTINATION>
additional_placement_class_specifications
</DESTINATION>
</TO>
<WHEN> relocation_conditions </WHEN>
</RELOCATE>
<FROM> An optional clause that contains a list of placement classes from whose
volumes designated files should be relocated if the files meet the conditions
specified in the <WHEN> clause. No priority is associated with the ordering of
placement classes listed in a <FROM> clause. If a file to which the rule applies is
located on a volume in any specified placement class, the file is considered for
relocation.
If a RELOCATE statement contains a <FROM> clause, VxFS only considers files
that reside on volumes in placement classes specified in the clause for relocation.
If no <FROM> clause is present, qualifying files are relocated regardless of where
the files reside.
<TO> Indicates the placement classes to whose volumes qualifying files should be
relocated. Unlike the source placement class list in a <FROM> clause, placement
classes in a <TO> clause are specified in priority order. Files are relocated to
volumes in the first specified placement class if possible, to the second if not,
and so forth.
The <TO> clause of the RELOCATE statement contains a list of <DESTINATION>
XML elements specifying placement classes to whose volumes VxFS relocates
qualifying files. Placement classes are specified in priority order. VxFS relocates
qualifying files to volumes in the first placement class specified as long as space
is available. A <DESTINATION> element may contain an optional <BALANCE_SIZE>
modifier sub-element. The <BALANCE_SIZE> modifier indicates that relocated
files should be distributed across the volumes of the destination placement class
in chunks of the indicated size. For example, if a balance size of one megabyte
is specified for a placement class containing three volumes, VxFS relocates the
first megabyte of the file to the first (lowest indexed) volume in the class, the second
megabyte to the second volume, the third megabyte to the third volume, the
fourth megabyte to the first volume, and so forth. Using the Units attribute in the
<BALANCE_SIZE> XML tag, the balance size value may be specified in bytes
(Units="bytes"), kilobytes (Units="KB"), megabytes (Units="MB"), or gigabytes
(Units="GB").
The <BALANCE_SIZE> element distributes the allocation of database files across
the volumes in a placement class. In principle, distributing the data in each file
across multiple volumes distributes the I/O load across the volumes as well.
For a multi-volume file system, you can specify the compress flag or the
uncompress flag with the <TO> clause. The compress flag causes SmartTier to
compress a file's extents while relocating the file to the tier specified by the
<DESTINATION> element. SmartTier compresses the entire file and relocates
the file to the destination tier, even if the file spans multiple tiers. The uncompress
flag causes SmartTier to uncompress a file's extents while relocating the file to
the tier specified by the <DESTINATION> element.
The following XML snippet specifies the compress flag:
<TO Flags="compress">
<DESTINATION>
<CLASS> tier4 </CLASS>
</DESTINATION>
</TO>
<WHEN> An optional clause that indicates the conditions under which files to
which the rule applies should be relocated. Files that have been unaccessed or
unmodified for a specified period, reached a certain size, or reached a specific
I/O temperature or access temperature level may be relocated. If a RELOCATE
statement does not contain a <WHEN> clause, files to which the rule applies are
relocated unconditionally.
The following are the criteria that can be specified for the <WHEN> clause:
<ACCAGE>
This criterion is met when files are inactive for a designated period
or during a designated period relative to the time at which the
fsppadm enforce command was issued.
<MODAGE>
<SIZE>
<IOTEMP>
<ACCESSTEMP>
Note: The use of <IOTEMP> and <ACCESSTEMP> for data placement on VxFS servers
that are used as NFS servers may not be very effective due to NFS caching. NFS
client side caching and the way that NFS works can result in I/O initiated from an
NFS client not producing NFS server side I/O. As such, any temperature
measurements in place on the server side will not correctly reflect the I/O behavior
that is specified by the placement policy.
If the server is solely used as an NFS server, this problem can potentially be
mitigated by suitably adjusting or lowering the temperature thresholds. However,
adjusting the thresholds may not always create the desired effect. In addition, if the
same mount point is used both as an NFS export as well as a local mount, the
temperature-based placement decisions will not be very effective due to the NFS
cache skew.
The following XML snippet illustrates the general form of the <WHEN> clause in a
RELOCATE statement:
<WHEN>
<ACCAGE Units="units_value">
<MIN Flags="comparison_operator">
min_access_age</MIN>
<MAX Flags="comparison_operator">
max_access_age</MAX>
</ACCAGE>
<MODAGE Units="units_value">
<MIN Flags="comparison_operator">
min_modification_age</MIN>
<MAX Flags="comparison_operator">
max_modification_age</MAX>
</MODAGE>
<SIZE " Units="units_value">
<MIN Flags="comparison_operator">
min_size</MIN>
<MAX Flags="comparison_operator">
max_size</MAX>
</SIZE>
<IOTEMP Type="read_write_preference" Prefer="temperature_preference">
<MIN Flags="comparison_operator">
min_I/O_temperature</MIN>
<MAX Flags="comparison_operator">
max_I/O_temperature</MAX>
<PERIOD Units="days_or_hours"> days_or_hours_of_interest </PERIOD>
</IOTEMP>
<ACCESSTEMP Type="read_write_preference"
Prefer="temperature_preference">
<MIN Flags="comparison_operator">
min_access_temperature</MIN>
<MAX Flags="comparison_operator">
max_access_temperature</MAX>
<PERIOD Units="days_or_hours"> days_or_hours_of_interest </PERIOD>
</ACCESSTEMP>
</WHEN>
The access age (<ACCAGE>) element refers to the amount of time since a file was
last accessed. VxFS computes access age by subtracting a file's time of last access,
atime, from the time when the fsppadm enforce command was issued. The <MIN>
and <MAX> XML elements in an <ACCAGE> clause denote the minimum and maximum
access age thresholds for relocation, respectively. These elements are optional,
but at least one must be included. Using the Units XML attribute, the <MIN> and
<MAX> elements may be specified in the following units:
hours    Hours
days     Days
Both the <MIN> and <MAX> elements require Flags attributes to direct their operation.
For <MIN>, the following Flags attributes values may be specified:
gt      The time of last access must be greater than the specified interval.
eq      The time of last access must be equal to the specified interval.
gteq    The time of last access must be greater than or equal to the specified
        interval.
For <MAX>, the following Flags attributes values may be specified:
lt      The time of last access must be less than the specified interval.
lteq    The time of last access must be less than or equal to the specified
        interval.
Including a <MIN> element in a <WHEN> clause causes VxFS to relocate files to which
the rule applies that have been inactive for longer than the specified interval. Such
a rule would typically be used to relocate inactive files to less expensive storage
tiers. Conversely, including <MAX> causes files accessed within the specified interval
to be relocated. It would typically be used to move inactive files against which activity
had recommenced to higher performance or more reliable storage. Including both
<MIN> and <MAX> causes VxFS to relocate files whose access age lies between
the two.
The modification age relocation criterion, <MODAGE>, is similar to access age, except
that files' POSIX mtime values are used in computations. You would typically specify
the <MODAGE> criterion to cause relocation of recently modified files to higher
performance or more reliable storage tiers in anticipation that the files would be
accessed recurrently in the near future.
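For instance, a minimal sketch of a <WHEN> clause that uses <MODAGE> to select files modified within the last two days (the two-day threshold is illustrative) might look like this:

<WHEN>
<MODAGE Units="days">
<MAX Flags="lt">2</MAX>
</MODAGE>
</WHEN>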
The file size relocation criterion, <SIZE>, causes files to be relocated if the files are
larger or smaller than the values specified in the <MIN> and <MAX> relocation criteria,
respectively, at the time that the fsppadm enforce command was issued. Specifying
both criteria causes VxFS to schedule relocation for files whose sizes lie between
the two. Using the Units attribute, threshold file sizes may be specified in the
following units:
bytes    Bytes
KB       Kilobytes
MB       Megabytes
GB       Gigabytes
If you instead want VxFS to look at file I/O activity between 3 hours prior to running
the fsppadm enforce command and the time that you ran the command, you specify
the following <PERIOD> element:
<PERIOD Units="hours"> 3 </PERIOD>
The amount of time specified in the <PERIOD> element should not exceed one or
two weeks due to the disk space used by the File Change Log (FCL) file.
See About the Veritas File System File Change Log file on page 710.
I/O temperature is a softer measure of I/O activity than access age. With access
age, a single access to a file resets the file's atime to the current time. In contrast,
a file's I/O temperature decreases gradually as time passes without the file being
accessed, and increases gradually as the file is accessed periodically. For example,
if a new 10 megabyte file is read completely five times on Monday and fsppadm
enforce runs at midnight, the file's two-day I/O temperature will be five and its
access age in days will be zero. If the file is read once on Tuesday, the file's access
age in days at midnight will be zero, and its two-day I/O temperature will have
dropped to three. If the file is read once on Wednesday, the file's access age at
midnight will still be zero, but its two-day I/O temperature will have dropped to one,
as the influence of Monday's I/O will have disappeared.
If the intention of a file placement policy is to keep files in place, such as on top-tier
storage devices, as long as the files are being accessed at all, then access age is
the more appropriate relocation criterion. However, if the intention is to relocate
files as the I/O load on them decreases, then I/O temperature is more appropriate.
The case for upward relocation is similar. If files that have been relocated to
lower-tier storage devices due to infrequent access experience renewed application
activity, then it may be appropriate to relocate those files to top-tier devices. A policy
rule that uses access age with a low <MAX> value, that is, the interval between
fsppadm enforce runs, as a relocation criterion will cause files to be relocated that
have been accessed even once during the interval. Conversely, a policy that uses
I/O temperature with a <MIN> value will only relocate files that have experienced a
sustained level of activity over the period of interest.
Prefer attribute
You can specify a value for the Prefer attribute for the <IOTEMP> and <ACCESSTEMP>
criteria, which gives preference to relocating files. The Prefer attribute can take
two values: low or high. If you specify low, Veritas File System (VxFS) relocates
the files with the lower I/O temperature before relocating the files with the higher
I/O temperature. If you specify high, VxFS relocates the files with the higher I/O
temperature before relocating the files with the lower I/O temperature. Symantec
recommends that you specify a Prefer attribute value only if you are using solid
state disks (SSDs).
See Prefer mechanism with solid state disks on page 535.
Different <PERIOD> elements may be used in the <IOTEMP> and <ACCESSTEMP>
criteria of different RELOCATE statements within the same policy.
The following placement policy snippet gives an example of the Prefer criteria:
<RELOCATE>
...
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high">
<MIN Flags="gteq"> 3.4 </MIN>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
If there are a number of files whose I/O temperature is greater than the given
minimum value, the files with the higher temperature are first subject to the RELOCATE
operation before the files with the lower temperature.
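The paragraph that follows describes a snippet that combines the Prefer attribute with the Average criteria. A sketch of such a criterion is shown below; the Average attribute and its "*" value are an assumption about the placement_policy.dtd syntax, so verify them against the DTD before use:

<RELOCATE>
...
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high" Average="*">
<MIN Flags="gteq"> 1.5 </MIN>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>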
In the snippet, VxFS relocates any file whose read IOTEMP over the last 6 hours
is 1.5 times that of all the active files in the whole file system over the last 24 hours.
This Average criteria is more intuitive and easier to specify than the absolute values.
The following formula computes the read IOTEMP of a given file:
IOTEMP = (bytes of the file that are read in the PERIOD) /
(PERIOD in hours * size of the file in bytes)
The averaging period, h, over which the file system-wide average IOTEMP is
computed is 24 hours by default. The average write and read/write IOTEMP are also
computed accordingly.
In the example snippet, the value 1.5 is the multiple of average read IOTEMP over
the last 24 hours across the whole file system, or rather across all of the active
inodes whose activity is still available in the File Change Log (FCL) file at the time
of the scan. Thus, the file's read IOTEMP activity over the last 6 hours is compared
against 1.5 times that of the last 24 hours average activity to make the relocation
decision. Using this method eliminates the need to give a specific number for the
<IOTEMP> or <ACCESSTEMP> criteria, and instead lets you specify a multiple of the
Average temperature. Keeping this averaging period longer than the specified
<PERIOD> value normalizes the effects of any spikes and lulls in the file activity.
You can also use the Average criteria with the <ACCESSTEMP> criteria. The purpose
and usage are the same.
You determine the type of the average by whether you specify the Average criteria
with the <IOTEMP> or with the <ACCESSTEMP> criteria. The Average criteria can be
any of the following types, depending on the criteria used:
rw Average IOTEMP
rw Average ACCESSTEMP
The default Average is a 24 hour average temperature, which is the total of all of
the temperatures available up to the last 24 hours in the FCL file, divided by the
number of files for which such I/O statistics still exist in the FCL file. You can override
the number of hours by specifying the AveragePeriod attribute in the
<PLACEMENT_POLICY> element. Symantec recommends that you specify an
AveragePeriod attribute value only if you are using solid state disks (SSDs).
The following example statement causes the average file system activity to be collected
and computed over a period of 30 hours instead of the default 24 hours:
<PLACEMENT_POLICY Name="Policy1" Version="5.1" AveragePeriod="30">
The files designated by the rule's SELECT statement that reside on volumes in
placement class tier1 at the time the fsppadm enforce command executes would
be unconditionally relocated to volumes in placement class tier2 as long as space
permitted. This type of rule might be used, for example, with applications that create
and access new files but seldom access existing files once they have been
processed. A CREATE statement would specify creation on tier1 volumes, which
are presumably high performance or high availability, or both. Each instantiation of
fsppadm enforce would relocate files created since the last run to tier2 volumes.
The following example illustrates a more comprehensive form of the RELOCATE
statement that uses access age as the criterion for relocating files from tier1
volumes to tier2 volumes. This rule is designed to maintain free space on tier1
volumes by relocating inactive files to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gt">1</MIN>
<MAX Flags="lt">1000</MAX>
</SIZE>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
Files designated by the rule's SELECT statement are relocated from tier1 volumes
to tier2 volumes if they are between 1 MB and 1000 MB in size and have not been
accessed for 30 days. VxFS relocates qualifying files in the order in which it
encounters them as it scans the file system's directory tree. VxFS stops scheduling
qualifying files for relocation when it calculates that already-scheduled
relocations would result in tier2 volumes being fully occupied.
The following example illustrates a possible companion rule that relocates files from
tier2 volumes to tier1 ones based on their I/O temperatures. This rule might be
used to return files that had been relocated to tier2 volumes due to inactivity to
tier1 volumes when application activity against them increases. Using I/O
temperature rather than access age as the relocation criterion reduces the chance
of relocating files that are not actually being used frequently by applications. This
rule does not cause files to be relocated unless there is sustained activity against
them over the most recent two-day period.
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MIN Flags="gt">5</MIN>
<PERIOD>2</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This rule relocates files that reside on tier2 volumes to tier1 volumes if their I/O
temperatures are above 5 for the two day period immediately preceding the issuing
of the fsppadm enforce command. VxFS relocates qualifying files in the order in
which it encounters them during its file system directory tree scan. When tier1
volumes are fully occupied, VxFS stops scheduling qualifying files for relocation.
VxFS file placement policies are able to control file placement across any number
of placement classes. The following example illustrates a rule for relocating files
with low I/O temperatures from tier1 volumes to tier2 volumes, and to tier3
volumes when tier2 volumes are fully occupied:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="lt">4</MAX>
<PERIOD>3</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This rule relocates files whose 3-day I/O temperatures are less than 4 and which
reside on tier1 volumes. When VxFS calculates that already-relocated files would
result in tier2 volumes being fully occupied, VxFS relocates qualifying files to
tier3 volumes instead. VxFS relocates qualifying files as it encounters them in its
scan of the file system directory tree.
The <FROM> clause in the RELOCATE statement is optional. If the clause is not present,
VxFS evaluates files designated by the rule's SELECT statement for relocation no
matter which volumes they reside on when the fsppadm enforce command is
issued. The following example illustrates a fragment of a policy rule that relocates
files according to their sizes, no matter where they reside when the fsppadm
enforce command is issued:
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MAX Flags="lt">10</MAX>
</SIZE>
</WHEN>
</RELOCATE>
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gteq">10</MIN>
<MAX Flags="lt">100</MAX>
</SIZE>
</WHEN>
</RELOCATE>
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</TO>
<WHEN>
<SIZE Units="MB">
<MIN Flags="gteq">100</MIN>
</SIZE>
</WHEN>
</RELOCATE>
This rule relocates files smaller than 10 megabytes to tier1 volumes, files between
10 and 100 megabytes to tier2 volumes, and files larger than 100 megabytes to
tier3 volumes. VxFS relocates all qualifying files that do not already reside on
volumes in their DESTINATION placement classes when the fsppadm enforce
command is issued.
The following example compresses while relocating all of the files from tier2 with
the extension dbf to tier4 if the file was accessed over 30 days ago:
<SELECT Flags="Data">
<PATTERN> *.dbf </PATTERN>
</SELECT>
<RELOCATE>
<FROM>
<SOURCE>
<CLASS> tier2 </CLASS>
</SOURCE>
</FROM>
<TO Flags="compress">
<DESTINATION>
<CLASS> tier4 </CLASS>
</DESTINATION>
</TO>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
The following example uncompresses while relocating all of the files from tier3
with the extension dbf to tier1 if the file was accessed over 1 hour ago:
<SELECT Flags="Data">
<PATTERN> *.dbf </PATTERN>
</SELECT>
<RELOCATE>
<FROM>
<SOURCE>
<CLASS> tier3 </CLASS>
</SOURCE>
</FROM>
<TO Flags="uncompress">
<DESTINATION>
<CLASS> tier1 </CLASS>
</DESTINATION>
</TO>
<WHEN>
<ACCAGE Units="hours">
<MIN Flags="gt">1</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
DELETE statement
The DELETE file placement policy rule statement is very similar to the RELOCATE
statement in both form and function, lacking only the <TO> clause. File placement
policy-based deletion may be thought of as relocation with a fixed destination.
Note: Use DELETE statements with caution.
The following XML snippet illustrates the general form of the DELETE statement:
<DELETE>
<FROM>
<SOURCE>
<CLASS> placement_class_name </CLASS>
</SOURCE>
<SOURCE>
additional_placement_class_specifications
</SOURCE>
</FROM>
<WHEN> deletion_conditions </WHEN>
</DELETE>
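The next paragraphs refer to a pair of example DELETE statements. A sketch reconstructing them from that description (the tier names and the 120-day threshold are taken from the text below) is:

<DELETE>
<FROM>
<SOURCE>
<CLASS>tier3</CLASS>
</SOURCE>
</FROM>
</DELETE>
<DELETE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">120</MIN>
</ACCAGE>
</WHEN>
</DELETE>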
The first DELETE statement unconditionally deletes files designated by the rule's
SELECT statement that reside on tier3 volumes when the fsppadm enforce
command is issued. The absence of a <WHEN> clause in the DELETE statement
indicates that deletion of designated files is unconditional.
The second DELETE statement deletes files to which the rule applies that reside on
tier2 volumes when the fsppadm enforce command is issued and that have not
been accessed for the past 120 days.
COMPRESS statement
The COMPRESS statement in a file placement policy rule specifies in-place file
compression on multi-volume or single-volume file systems. The placement policy
becomes assigned to the selected file, and allocation for the compressed extents
is done from the same tier specified in the <SOURCE> element of the <FROM> clause.
SmartTier performs in-place compression of the entire file, even if the file spans
across multiple tiers.
Note: SmartTier does not schedule compression activity. If you did not integrate
your Symantec Storage Foundation product with the Veritas Operations Manager
(VOM), then you must automate compression activity by using techniques such as
scheduling through cron jobs.
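For example, a root crontab entry along the following lines could run enforcement nightly; the 2:00 a.m. schedule and the /opt/VRTS/bin path are illustrative assumptions, so adjust them for your installation:

0 2 * * * /opt/VRTS/bin/fsppadm enforce /mnt1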
The following XML snippet illustrates the general form of the COMPRESS statement:
<COMPRESS>
<FROM>
<SOURCE>
<CLASS> placement_class_name </CLASS>
</SOURCE>
<SOURCE> additional_placement_class_specifications
</SOURCE>
</FROM>
<WHEN> compression_conditions </WHEN>
</COMPRESS>
<FROM>
An optional clause that contains a list of placement classes from whose
volumes designated files should be compressed if the files meet the
conditions specified in the <WHEN> clause.
<WHEN>
An optional clause that indicates the conditions under which files to
which the rule applies should be compressed.
The following are the criteria that can be specified for the <WHEN> clause:
<ACCAGE>
This criterion is met when files are inactive for a designated period
or during a designated period relative to the time at which the
fsppadm enforce command was issued.
<MODAGE>
<SIZE>
<IOTEMP>
<ACCESSTEMP>
Note: The use of <IOTEMP> and <ACCESSTEMP> for data placement on VxFS servers
that are used as NFS servers may not be very effective due to NFS caching. NFS
client side caching and the way that NFS works can result in I/O initiated from an
NFS client not producing NFS server side I/O. As such, any temperature
measurements in place on the server side will not correctly reflect the I/O behavior
that is specified by the placement policy.
If the server is solely used as an NFS server, this problem can potentially be
mitigated by suitably adjusting or lowering the temperature thresholds. However,
adjusting the thresholds may not always create the desired effect. In addition, if the
same mount point is used both as an NFS export as well as a local mount, the
temperature-based placement decisions will not be very effective due to the NFS
cache skew.
The following XML snippet illustrates the general form of the <WHEN> clause in a
COMPRESS statement:
<WHEN>
<ACCAGE Units="units_value">
<MIN Flags="comparison_operator">
min_access_age</MIN>
<MAX Flags="comparison_operator">
max_access_age</MAX>
</ACCAGE>
<MODAGE Units="units_value">
<MIN Flags="comparison_operator">
min_modification_age</MIN>
<MAX Flags="comparison_operator">
max_modification_age</MAX>
</MODAGE>
<SIZE " Units="units_value">
<MIN Flags="comparison_operator">
min_size</MIN>
<MAX Flags="comparison_operator">
max_size</MAX>
</SIZE>
507
Administering SmartTier
File placement policy rules
508
The access age (<ACCAGE>) element refers to the amount of time since a file was
last accessed. VxFS computes access age by subtracting a file's time of last access,
atime, from the time when the fsppadm enforce command was issued. The <MIN>
and <MAX> XML elements in an <ACCAGE> clause denote the minimum and maximum
access age thresholds for compression, respectively. These elements are optional,
but at least one must be included. Using the Units XML attribute, the <MIN> and
<MAX> elements may be specified in the following units:
hours    Hours
days     Days
Both the <MIN> and <MAX> elements require Flags attributes to direct their operation.
For <MIN>, the following Flags attributes values may be specified:
gt      The time of last access must be greater than the specified interval.
eq      The time of last access must be equal to the specified interval.
gteq    The time of last access must be greater than or equal to the specified
        interval.
For <MAX>, the following Flags attributes values may be specified:
lt      The time of last access must be less than the specified interval.
lteq    The time of last access must be less than or equal to the specified
        interval.
Using the Units attribute, threshold file sizes for the <SIZE> criterion may be
specified in the following units:
bytes    Bytes
KB       Kilobytes
MB       Megabytes
GB       Gigabytes
If you instead want VxFS to look at file I/O activity between 3 hours prior to running
the fsppadm enforce command and the time that you ran the command, you specify
the following <PERIOD> element:
<PERIOD Units="hours"> 3 </PERIOD>
The amount of time specified in the <PERIOD> element should not exceed one or
two weeks due to the disk space used by the File Change Log (FCL) file.
See About the Veritas File System File Change Log file on page 710.
I/O temperature is a softer measure of I/O activity than access age. With access
age, a single access to a file resets the file's atime to the current time. In contrast,
a file's I/O temperature decreases gradually as time passes without the file being
accessed, and increases gradually as the file is accessed periodically. For example,
if a new 10 megabyte file is read completely five times on Monday and fsppadm
enforce runs at midnight, the file's two-day I/O temperature will be five and its
access age in days will be zero. If the file is read once on Tuesday, the file's access
age in days at midnight will be zero, and its two-day I/O temperature will have
dropped to three. If the file is read once on Wednesday, the file's access age at
midnight will still be zero, but its two-day I/O temperature will have dropped to one,
as the influence of Monday's I/O will have disappeared.
If the intention of a file placement policy is to keep files in place, such as on top-tier
storage devices, as long as the files are being accessed at all, then access age is
the more appropriate criterion. However, if the intention is to compress files as
the I/O load on them decreases, then I/O temperature is more appropriate.
Prefer attribute
You can specify a value for the Prefer attribute for the <IOTEMP> and <ACCESSTEMP>
criteria, which gives preference to compressing files. The Prefer attribute can take
two values: low or high. If you specify low, Veritas File System (VxFS) compresses
the files with the lower I/O temperature before compressing the files with the higher
I/O temperature. If you specify high, VxFS compresses the files with the higher I/O
temperature before compressing the files with the lower I/O temperature. Symantec
recommends that you specify a Prefer attribute value only if you are using solid
state disks (SSDs).
See Prefer mechanism with solid state disks on page 535.
Different <PERIOD> elements may be used in the <IOTEMP> and <ACCESSTEMP>
criteria of different COMPRESS statements within the same policy.
The following placement policy snippet gives an example of the Prefer criteria:
<COMPRESS>
...
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high">
<MIN Flags="gteq"> 3.4 </MIN>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</COMPRESS>
If there are a number of files whose I/O temperature is greater than the given
minimum value, the files with the higher temperature are first subject to the COMPRESS
operation before the files with the lower temperature.
In the snippet, VxFS compresses any file whose read IOTEMP over the last 6 hours
is 1.5 times that of all the active files in the whole file system over the last 24 hours.
This Average criteria is more intuitive and easier to specify than the absolute values.
The following formula computes the read IOTEMP of a given file:
IOTEMP = (bytes of the file that are read in the PERIOD) /
(PERIOD in hours * size of the file in bytes)
The averaging period, h, over which the file system-wide average IOTEMP is
computed is 24 hours by default. The average write and read/write IOTEMP are also
computed accordingly.
In the example snippet, the value 1.5 is the multiple of average read IOTEMP over
the last 24 hours across the whole file system, or rather across all of the active
inodes whose activity is still available in the File Change Log (FCL) file at the time
of the scan. Thus, the file's read IOTEMP activity over the last 6 hours is compared
against 1.5 times that of the last 24 hours average activity to make the compression
decision. Using this method eliminates the need to give a specific number for the
<IOTEMP> or <ACCESSTEMP> criteria, and instead lets you specify a multiple of the
Average temperature. Keeping this averaging period longer than the specified
<PERIOD> value normalizes the effects of any spikes and lulls in the file activity.
You can also use the Average criteria with the <ACCESSTEMP> criteria. The purpose
and usage are the same.
You determine the type of the average by whether you specify the Average criteria
with the <IOTEMP> or with the <ACCESSTEMP> criteria. The Average criteria can be
any of the following types, depending on the criteria used:
rw Average IOTEMP
rw Average ACCESSTEMP
The default Average is a 24 hour average temperature, which is the total of all of
the temperatures available up to the last 24 hours in the FCL file, divided by the
number of files for which such I/O statistics still exist in the FCL file. You can override
the number of hours by specifying the AveragePeriod attribute in the
<PLACEMENT_POLICY> element. Symantec recommends that you specify an
AveragePeriod attribute value only if you are using solid state disks (SSDs).
The following example statement causes the average file system activity to be collected
and computed over a period of 30 hours instead of the default 24 hours:
<PLACEMENT_POLICY Name="Policy1" Version="5.1" AveragePeriod="30">
The following example statement compresses in place the files on tier2 volumes
that have not been accessed for more than 30 days:
<COMPRESS>
<FROM>
<SOURCE>
<CLASS> tier2 </CLASS>
</SOURCE>
</FROM>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</COMPRESS>
The files designated by the rule's SELECT statement that reside on volumes in
placement class tier2 at the time the fsppadm enforce command executes are
compressed in place. Each instantiation of fsppadm enforce compresses files
created since the last run on the tier2 volumes.
The following example compresses all of the files with the extension dbf on a single
volume if the file was not accessed for one minute.
<SELECT Flags="Data">
<PATTERN> *.dbf </PATTERN>
</SELECT>
<COMPRESS>
<WHEN>
<ACCAGE Units="minutes">
<MIN Flags="gt">1</MIN>
</ACCAGE>
</WHEN>
</COMPRESS>
No <FROM> clause is required for a single-volume file system. The files designated by the rule's
SELECT statement at the time the fsppadm enforce command executes are
compressed in place. Each instantiation of fsppadm enforce compresses files
created since the last run on the volume.
The following example compresses all of the files on tier3:
<SELECT Flags="Data">
<PATTERN> * </PATTERN>
</SELECT>
<COMPRESS>
<FROM>
<SOURCE>
<CLASS> tier3 </CLASS>
</SOURCE>
</FROM>
</COMPRESS>
This rule compresses in place all files that reside on tier3 at the time the fsppadm
enforce command executes.
UNCOMPRESS statement
The UNCOMPRESS statement in a file placement policy rule specifies in-place file
uncompression on multi-volume and single-volume file systems. The placement
policy becomes assigned to the selected file, and allocation for the uncompressed
extents is done from the tier specified in the <SOURCE> element of the <FROM> clause.
If a file is partially compressed, then the file can be picked only for in-place
compression. After being compressed, the file will be uncompressed before being
relocated in the next policy enforcement.
Note: SmartTier does not schedule uncompression activity. If you did not integrate
your Symantec Storage Foundation product with the Veritas Operations Manager
(VOM), then you must automate uncompression activity by using techniques such
as scheduling through cron jobs.
The following XML snippet illustrates the general form of the UNCOMPRESS statement:
<UNCOMPRESS>
<FROM>
<SOURCE>
<CLASS> placement_class_name </CLASS>
</SOURCE>
<SOURCE> additional_placement_class_specifications
</SOURCE>
</FROM>
<WHEN> uncompression_conditions </WHEN>
</UNCOMPRESS>
<FROM>
An optional clause that contains a list of placement classes from whose
volumes designated files should be uncompressed if the files meet the
conditions specified in the <WHEN> clause.
<WHEN>
An optional clause that indicates the conditions under which files to
which the rule applies should be uncompressed.
The following are the criteria that can be specified for the <WHEN> clause:
<ACCAGE>
This criterion is met when files are inactive for a designated period
or during a designated period relative to the time at which the
fsppadm enforce command was issued.
<MODAGE>
<SIZE>
<IOTEMP>
<ACCESSTEMP>
Note: The use of <IOTEMP> and <ACCESSTEMP> for data placement on VxFS servers
that are used as NFS servers may not be very effective due to NFS caching. NFS
client side caching and the way that NFS works can result in I/O initiated from an
NFS client not producing NFS server side I/O. As such, any temperature
measurements in place on the server side will not correctly reflect the I/O behavior
that is specified by the placement policy.
If the server is solely used as an NFS server, this problem can potentially be
mitigated by suitably adjusting or lowering the temperature thresholds. However,
adjusting the thresholds may not always create the desired effect. In addition, if the
same mount point is used both as an NFS export as well as a local mount, the
temperature-based placement decisions will not be very effective due to the NFS
cache skew.
The following XML snippet illustrates the general form of the <WHEN> clause in a
UNCOMPRESS statement:
<WHEN>
<ACCAGE Units="units_value">
<MIN Flags="comparison_operator">
min_access_age</MIN>
<MAX Flags="comparison_operator">
max_access_age</MAX>
</ACCAGE>
<MODAGE Units="units_value">
<MIN Flags="comparison_operator">
min_modification_age</MIN>
<MAX Flags="comparison_operator">
max_modification_age</MAX>
</MODAGE>
<SIZE " Units="units_value">
<MIN Flags="comparison_operator">
min_size</MIN>
<MAX Flags="comparison_operator">
max_size</MAX>
</SIZE>
517
Administering SmartTier
File placement policy rules
518
The access age (<ACCAGE>) element refers to the amount of time since a file was
last accessed. VxFS computes access age by subtracting a file's time of last access,
atime, from the time when the fsppadm enforce command was issued. The <MIN>
and <MAX> XML elements in an <ACCAGE> clause denote the minimum and maximum
access age thresholds for uncompression, respectively. These elements are optional,
but at least one must be included. Using the Units XML attribute, the <MIN> and
<MAX> elements may be specified in the following units:
hours    Hours
days     Days
Both the <MIN> and <MAX> elements require Flags attributes to direct their operation.
For <MIN>, the following Flags attributes values may be specified:
gt      The time of last access must be greater than the specified interval.
eq      The time of last access must be equal to the specified interval.
gteq    The time of last access must be greater than or equal to the specified
        interval.
For <MAX>, the following Flags attributes values may be specified:
lt      The time of last access must be less than the specified interval.
lteq    The time of last access must be less than or equal to the specified
        interval.
Using the Units attribute, threshold file sizes for the <SIZE> criterion may be
specified in the following units:
bytes    Bytes
KB       Kilobytes
MB       Megabytes
GB       Gigabytes
If you instead want VxFS to look at file I/O activity between 3 hours prior to running
the fsppadm enforce command and the time that you ran the command, you specify
the following <PERIOD> element:
<PERIOD Units="hours"> 3 </PERIOD>
The amount of time specified in the <PERIOD> element should not exceed one or
two weeks due to the disk space used by the File Change Log (FCL) file.
See About the Veritas File System File Change Log file on page 710.
I/O temperature is a softer measure of I/O activity than access age. With access
age, a single access to a file resets the file's atime to the current time. In contrast,
a file's I/O temperature decreases gradually as time passes without the file being
accessed, and increases gradually as the file is accessed periodically. For example,
if a new 10 megabyte file is read completely five times on Monday and fsppadm
enforce runs at midnight, the file's two-day I/O temperature will be five and its
access age in days will be zero. If the file is read once on Tuesday, the file's access
age in days at midnight will be zero, and its two-day I/O temperature will have
dropped to three. If the file is read once on Wednesday, the file's access age at
midnight will still be zero, but its two-day I/O temperature will have dropped to one,
as the influence of Monday's I/O will have disappeared.
If the intention of a file placement policy is to keep files in place, such as on top-tier
storage devices, as long as the files are being accessed at all, then access age is
the more appropriate criterion. However, if the intention is to uncompress files as
the I/O load on them increases, then I/O temperature is more appropriate.
Prefer attribute
You can specify a value for the Prefer attribute for the <IOTEMP> and <ACCESSTEMP>
criteria, which gives preference to uncompressing files. The Prefer attribute can
take two values: low or high. If you specify low, Veritas File System (VxFS)
uncompresses the files with the lower I/O temperature before uncompressing the
files with the higher I/O temperature. If you specify high, VxFS uncompresses the
files with the higher I/O temperature before uncompressing the files with the lower
I/O temperature. Symantec recommends that you specify a Prefer attribute value
only if you are using solid state disks (SSDs).
See Prefer mechanism with solid state disks on page 535.
Different <PERIOD> elements may be used in the <IOTEMP> and <ACCESSTEMP>
criteria of different UNCOMPRESS statements within the same policy.
The following placement policy snippet gives an example of the Prefer criteria:
<UNCOMPRESS>
...
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high">
<MIN Flags="gteq"> 3.4 </MIN>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</UNCOMPRESS>
If there are a number of files whose I/O temperature is greater than the given
minimum value, the files with the higher temperature are first subject to the
UNCOMPRESS operation before the files with the lower temperature.
In the snippet, VxFS uncompresses any file whose read IOTEMP over the last 6
hours is 1.5 times that of all the active files in the whole file system over the last 24
hours. This Average criteria is more intuitive and easier to specify than the absolute
values.
The following formula computes the read IOTEMP of a given file:
IOTEMP = (bytes of the file that are read in the PERIOD) /
(PERIOD in hours * size of the file in bytes)
The averaging period, h, over which the file system-wide average IOTEMP is
computed is 24 hours by default. The average write and read/write IOTEMP are also
computed accordingly.
In the example snippet, the value 1.5 is the multiple of average read IOTEMP over
the last 24 hours across the whole file system, or rather across all of the active
inodes whose activity is still available in the File Change Log (FCL) file at the time
of the scan. Thus, the file's read IOTEMP activity over the last 6 hours is compared
against 1.5 times that of the last 24 hours average activity to make the
uncompression decision. Using this method eliminates the need to give a specific
number for the <IOTEMP> or <ACCESSTEMP> criteria, and instead lets you specify a
multiple of the Average temperature. Keeping this averaging period longer than the
specified <PERIOD> value normalizes the effects of any spikes and lulls in the file
activity.
You can also use the Average criteria with the <ACCESSTEMP> criteria. The purpose
and usage are the same.
You determine the type of the average by whether you specify the Average criteria
with the <IOTEMP> or with the <ACCESSTEMP> criteria. The Average criteria can be
any of the following types, depending on the criteria used:
rw Average IOTEMP
rw Average ACCESSTEMP
The default Average is a 24 hour average temperature, which is the total of all of
the temperatures available up to the last 24 hours in the FCL file, divided by the
number of files for which such I/O statistics still exist in the FCL file. You can override
the number of hours by specifying the AveragePeriod attribute in the
<PLACEMENT_POLICY> element. Symantec recommends that you specify an
AveragePeriod attribute value only if you are using solid state disks (SSDs).
The following example statement causes the average file system activity to be collected
and computed over a period of 30 hours instead of the default 24 hours:
<PLACEMENT_POLICY Name="Policy1" Version="5.1" AveragePeriod="30">
The following example uncompresses in place the files with the extension dbf that
reside on tier3 volumes and were last accessed more than 60 minutes ago:
<SELECT Flags="Data">
<PATTERN> *.dbf </PATTERN>
</SELECT>
<UNCOMPRESS>
<FROM>
<SOURCE>
<CLASS> tier3 </CLASS>
</SOURCE>
</FROM>
<WHEN>
<ACCAGE Units="minutes">
<MIN Flags="gt">60</MIN>
</ACCAGE>
</WHEN>
</UNCOMPRESS>
The following example uncompresses in place all of the files with the extension dbf
on a single volume that have been accessed over 1 minute ago:
<SELECT Flags="Data">
<PATTERN> *.dbf </PATTERN>
</SELECT>
<UNCOMPRESS>
<WHEN>
<ACCAGE Units="minutes">
<MIN Flags="gt">1</MIN>
</ACCAGE>
</WHEN>
</UNCOMPRESS>
Calculating I/O temperature and access temperature
Access age is a binary measure. The time since last access of a file is computed
by subtracting the POSIX atime in the file's metadata from the time at which the
fsppadm enforce command is issued. If a file is opened the day before the
fsppadm enforce command, its time since last access is one day, even though
it may have been inactive for the month preceding. If the intent of a policy rule
is to relocate inactive files to lower tier volumes, it will perform badly against
files that happen to be accessed, however casually, within the interval defined
by the value of the <ACCAGE> parameter.
Note: If FCL is turned off, I/O temperature-based relocation will not be accurate.
When you invoke the fsppadm enforce command, the command displays a warning
if the FCL is turned off.
As its name implies, the File Change Log records information about changes made
to files in a VxFS file system. In addition to recording creations, deletions, and extensions,
the FCL periodically captures the cumulative amount of I/O activity (number of bytes
read and written) on a file-by-file basis. File I/O activity is recorded in the FCL each
time a file is opened or closed, as well as at timed intervals to capture information
about files that remain open for long periods.
If a file system's active file placement policy contains <IOTEMP> clauses, execution
of the fsppadm enforce command begins with a scan of the FCL to extract I/O
activity information over the period of interest for the policy. The period of interest
is the interval between the time at which the fsppadm enforce command was
issued and that time minus the largest interval value specified in any <PERIOD>
element in the active policy.
For files with I/O activity during the largest interval, VxFS computes an approximation
of the amount of read, write, and total data transfer (the sum of the two) activity by
subtracting the I/O levels in the oldest FCL record that pertains to the file from those
in the newest. It then computes each file's I/O temperature by dividing its I/O activity
by its size at the time of the scan. Dividing by file size is an implicit acknowledgement that
relocating larger files consumes more I/O resources than relocating smaller ones.
Using this algorithm means that larger files must have more activity against them
in order to reach a given I/O temperature, and thereby justify the resource cost of
relocation.
While this computation is an approximation in several ways, it represents an easy
to compute, and more importantly, unbiased estimate of relative recent I/O activity
upon which reasonable relocation decisions can be based.
File relocation and deletion decisions can be based on read, write, or total I/O
activity.
The following XML snippet illustrates the use of IOTEMP in a policy rule to specify
relocation of low activity files from tier1 volumes to tier2 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier1</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrwbytes}">
<MAX Flags="lt">3</MAX>
<PERIOD Units="days">4</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
This snippet specifies that files to which the rule applies should be relocated from
tier1 volumes to tier2 volumes if their I/O temperatures fall below 3 over a period
of 4 days. The Type="nrwbytes}" XML attribute specifies that total data transfer
activity, which is the the sum of bytes read and bytes written, should be used in the
computation. For example, a 50 megabyte file that experienced less than 150
megabytes of data transfer over the 4-day period immediately preceding the fsppadm
enforce scan would be a candidate for relocation. VxFS considers files that
experience no activity over the period of interest to have an I/O temperature of zero.
VxFS relocates qualifying files in the order in which it encounters the files in its scan
of the file system directory tree.
Using I/O temperature or access temperature rather than a binary indication of
activity, such as the POSIX atime or mtime, minimizes the chance of not relocating
files that were only accessed occasionally during the period of interest. A large file
that has had only a few bytes transferred to or from it would have a low I/O
temperature, and would therefore be a candidate for relocation to tier2 volumes,
even if the activity was very recent.
But, the greater value of I/O temperature or access temperature as a file relocation
criterion lies in upward relocation: detecting increasing levels of I/O activity against
files that had previously been relocated to lower tiers in a storage hierarchy due to
inactivity or low temperatures, and relocating them to higher tiers in the storage
hierarchy.
The following XML snippet illustrates relocating files from tier2 volumes to tier1
when the activity level against them increases.
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MAX Flags="gt">5</MAX>
<PERIOD Units="days">2</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
The <RELOCATE> statement specifies that files on tier2 volumes whose I/O
temperature as calculated using the number of bytes read is above 5 over a 2-day
period are to be relocated to tier1 volumes. Bytes written to the file during the
period of interest are not part of this calculation.
Using I/O temperature rather than a binary indicator of activity as a criterion for file
relocation gives administrators a granular level of control over automated file
relocation that can be used to attune policies to application requirements. For
example, specifying a large value in the <PERIOD> element of an upward relocation
statement prevents files from being relocated unless I/O activity against them is
sustained. Alternatively, specifying a high temperature and a short period tends to
relocate files based on short-term intensity of I/O activity against them.
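For example, the following variation of the upward-relocation rule shown above (the
values are illustrative, not a recommendation) requires read activity to be sustained
over a 30-day period before files are promoted from tier2 to tier1 volumes:
<RELOCATE>
<FROM>
<SOURCE>
<CLASS>tier2</CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes">
<MIN Flags="gt">5</MIN>
<PERIOD Units="days">30</PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>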
I/O temperature and access temperature utilize the sqlite3 database for building
a temporary table indexed on an inode. This temporary table is used to filter files
based on I/O temperature and access temperature. The temporary table is stored
in the database file .__fsppadm_fcliotemp.db, which resides in the lost+found
directory of the mount point.
If a rule includes multiple SELECT statements, a file need only satisfy one of them
to be selected for action. This property can be used to specify alternative conditions
for file selection.
In the following example, a file need only reside in one of db/datafiles,
db/indexes, or db/logs or be owned by one of DBA_Manager, MFG_DBA, or HR_DBA
to be designated for possible action:
<SELECT>
<DIRECTORY Flags="nonrecursive">db/datafiles</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/indexes</DIRECTORY>
<DIRECTORY Flags="nonrecursive">db/logs</DIRECTORY>
</SELECT>
<SELECT>
<USER>DBA_Manager</USER>
<USER>MFG_DBA</USER>
<USER>HR_DBA</USER>
</SELECT>
In this statement, VxFS would allocate space for newly created files designated by
the rule's SELECT statement on tier1 volumes if space was available. If no tier1
volume had sufficient free space, VxFS would attempt to allocate space on a tier2
volume. If no tier2 volume had sufficient free space, VxFS would attempt allocation
on a tier3 volume. If sufficient space could not be allocated on a volume in any
of the three specified placement classes, allocation would fail with an ENOSPC error,
even if the file system's volume set included volumes in other placement classes
that did have sufficient space.
The <TO> clause in the RELOCATE statement behaves similarly. VxFS relocates
qualifying files to volumes in the first placement class specified if possible, to volumes
in the second specified class if not, and so forth. If none of the destination criteria
can be met, such as if all specified classes are fully occupied, qualifying files are
not relocated, but no error is signaled in this case.
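For example, a <TO> clause such as the following sketch (the tier3 class name is
assumed for illustration) relocates qualifying files to tier2 volumes when space
allows, and otherwise to tier3 volumes:
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
<DESTINATION>
<CLASS>tier3</CLASS>
</DESTINATION>
</TO>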
You cannot write rules to relocate or delete a single designated set of files if the
files meet one of two or more relocation or deletion criteria.
editor to create XML policy documents directly. The GUI places policy rule
statements in the correct order to achieve the desired behavior. If you use a text
editor, it is your responsibility to order policy rules and the statements in them so
that the desired behavior results.
The rules that comprise a placement policy may occur in any order, but during both
file allocation and fsppadm enforce relocation scans, the first rule in which a file
is designated by a SELECT statement is the only rule against which that file is
evaluated. Thus, rules whose purpose is to supersede a generally applicable
behavior for a special class of files should precede the general rules in a file
placement policy document.
The following XML snippet illustrates faulty rule placement with potentially unintended
consequences:
<?xml version="1.0"?>
<!DOCTYPE FILE_PLACEMENT_POLICY SYSTEM "placement.dtd">
<FILE_PLACEMENT_POLICY Version="5.0">
<RULE Name="GeneralRule">
<SELECT>
<PATTERN>*</PATTERN>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</ON>
</CREATE>
other_statements
</RULE>
<RULE Name="DatabaseRule">
<SELECT>
<PATTERN>*.db</PATTERN>
</SELECT>
<CREATE>
<ON>
<DESTINATION>
<CLASS>tier1</CLASS>
</DESTINATION>
</ON>
</CREATE>
other_statements
</RULE>
</FILE_PLACEMENT_POLICY>
The GeneralRule rule specifies that all files created in the file system, designated
by <PATTERN>*</PATTERN>, should be created on tier2 volumes. The DatabaseRule
rule specifies that files whose names include an extension of .db should be created
on tier1 volumes. The GeneralRule rule applies to any file created in the file
system, including those with a naming pattern of *.db, so the DatabaseRule rule
will never apply to any file. This fault can be remedied by exchanging the order of
the two rules. If the DatabaseRule rule occurs first in the policy document, VxFS
encounters it first when determining where to place new files whose names follow
the pattern *.db, and correctly allocates space for them on tier1 volumes. For
files to which the DatabaseRule rule does not apply, VxFS continues scanning the
policy and allocates space according to the specification in the CREATE statement
of the GeneralRule rule.
A similar consideration applies to statements within a placement policy rule. VxFS
processes these statements in order, and stops processing on behalf of a file when
it encounters a statement that pertains to the file. This can result in unintended
behavior.
The following XML snippet illustrates a RELOCATE statement and a DELETE statement
in a rule that is intended to relocate if the files have not been accessed in 30 days,
and delete the files if they have not been accessed in 90 days:
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>
<DELETE>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">90</MIN>
</ACCAGE>
</WHEN>
</DELETE>
As written with the RELOCATE statement preceding the DELETE statement, files will
never be deleted, because the <WHEN> clause in the RELOCATE statement applies
to all selected files that have not been accessed for at least 30 days. This includes
those that have not been accessed for 90 days. VxFS ceases to process a file
against a placement policy when it identifies a statement that applies to that file,
so the DELETE statement would never occur. This example illustrates the general
point that RELOCATE and DELETE statements that specify less inclusive criteria should
precede statements that specify more inclusive criteria in a file placement policy
document. The GUI automatically produces the correct statement order for the
policies it creates.
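One corrected ordering, sketched from the snippet above, places the more restrictive
DELETE criterion first, so that files idle for more than 90 days are deleted and files
idle for between 30 and 90 days are relocated:
<DELETE>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">90</MIN>
</ACCAGE>
</WHEN>
</DELETE>
<RELOCATE>
<TO>
<DESTINATION>
<CLASS>tier2</CLASS>
</DESTINATION>
</TO>
<WHEN>
<ACCAGE Units="days">
<MIN Flags="gt">30</MIN>
</ACCAGE>
</WHEN>
</RELOCATE>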
Support for fine-grained temperatures, such as allowing hours as units for the
<IOTEMP> and <ACCESSTEMP> criteria
See Fine grain temperatures with solid state disks on page 535.
Support of the Prefer attribute for the <IOTEMP> and <ACCESSTEMP> criteria
See Prefer mechanism with solid state disks on page 535.
See Quick identification of cold files with solid state disks on page 537.
To gain these benefits, you must modify the existing placement policy as per the
latest version of the DTD and assign the policy again. However, existing placement
policies continue to function as before. You do not need to update the placement
policies if you do not use the new features.
The following placement policy snippet gives an example of the Prefer criteria:
<RELOCATE>
...
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high">
<MIN Flags="gteq"> 3.4 </MIN>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
If there are a number of files whose I/O temperature is greater than the given
minimum value, the files with the higher temperature are first subject to the RELOCATE
operation before the files with the lower temperature. This is particularly useful in
case of SSDs, which are limited in size and are expensive. As such, you generally
want to use SSDs for the most active files.
Reduce the impact of more frequent scans on resources, such as CPU, I/O,
and memory.
The following scheme is an example of one way to reduce the impact of frequent
scans:
Confine the scan to files that were active during the PERIOD, that is, to the
files that showed any activity in the File Change Log (FCL), by running the
fsppadm command with the -C option.
See Quick identification of cold files with solid state disks on page 537.
Scan frequently, such as every few hours. Frequent scans potentially reduce
the number of inodes that VxFS touches and logs in the File Change Log (FCL)
file, thereby limiting the duration of each scan. As such, the changes that VxFS
collects in the FCL file since the last scan provide details on fewer active files.
Use the <IOTEMP> and <ACCESSTEMP> criteria to promote files to SSDs more
aggressively, which leaves cold files sitting in SSDs.
On examination of the file system's File Change Log (FCL) for changes that are
made outside of SmartTier's scope.
Both of these updates occur during SmartTier's relocation scans, which are typically
scheduled to occur periodically. But, you can also update the file location map
anytime by running the fsppadm command with the -T option.
The -C option is useful to process active files before any other files. For best results,
specify the -T option in conjunction with the -C option. Specifying both the -T option
and -C option causes the fsppadm command to evacuate any cold files first to create
room in the SSD tier to accommodate any active files that will be moved into the
SSD tier via the -C option. Specifying -C in conjunction with -T confines the scope
of the scan, which consumes less time and resources, and thus allows frequent
scans to meet the dynamic needs of data placement.
See Enforcing a placement policy on page 480.
See the fsppadm(1M) manual page.
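For example, assuming a file system mounted at /mnt1, a scan that processes only
the active files recorded in the FCL and first evacuates cold files from the SSD tier
could be invoked as follows (confirm the exact option usage in the fsppadm(1M)
manual page):
# fsppadm enforce -C -T /mnt1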
With the help of the map, instead of scanning the full file system, you can confine
the scan to only the files on the SSD tiers in addition to the active files that VxFS
recorded in the FCL. This scheme potentially achieves the dual purpose of reducing
the temperature time granularity and at the same time reducing the scan load.
<RELOCATE>
<COMMENT>
Move the files out of SSD if their last 3 hour
write IOTEMP is more than 1.5 times the last
24 hour average write IOTEMP. The PERIOD is
purposely shorter than the other RELOCATEs
because we want to move it out as soon as
write activity starts peaking. This criteria
could be used to reduce SSD wear outs.
</COMMENT>
<FROM>
<SOURCE>
<CLASS> ssdtier </CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS> nonssd_tier </CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nwbytes" Average="*">
<MIN Flags="gt"> 1.5 </MIN>
<PERIOD Units="hours"> 3 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
<RELOCATE>
<COMMENT>
OR move the files out of SSD if their last 6 hour
read IOTEMP is less than half the last 24 hour
average read IOTEMP. The PERIOD is longer,
we may want to observe longer periods
having brought the file in. This avoids quickly
sending the file out of SSDs once in.
</COMMENT>
<FROM>
<SOURCE>
<CLASS> ssdtier </CLASS>
</SOURCE>
</FROM>
<TO>
<DESTINATION>
<CLASS> nonssd_tier </CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes" Average="*">
<MAX Flags="lt"> 0.5 </MAX>
<PERIOD Units="hours"> 6 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
<RELOCATE>
<COMMENT>
OR move the files into SSD if their last 3 hour
read IOTEMP is more than or equal to 1.5 times
the last 24 hour average read IOTEMP AND
their last 6 hour write IOTEMP is less than
half of the last 24 hour average write IOTEMP
</COMMENT>
<TO>
<DESTINATION>
<CLASS> ssd_tier </CLASS>
</DESTINATION>
</TO>
<WHEN>
<IOTEMP Type="nrbytes" Prefer="high" Average="*">
<MIN Flags="gteq"> 1.5 </MIN>
<PERIOD Units="hours"> 3 </PERIOD>
</IOTEMP>
<IOTEMP Type="nwbytes" Average="*">
<MAX Flags="lt"> 0.5 </MAX>
<PERIOD Units="hours"> 3 </PERIOD>
</IOTEMP>
</WHEN>
</RELOCATE>
</RULE>
</PLACEMENT_POLICY>
In this placement policy, new files are created on the SSD tiers if space is available,
or elsewhere if space is not available. When enforce is performed, the files that are
currently in SSDs whose write activity increased above a threshold, or whose read
activity fell below a threshold, over a given period are moved out of the SSDs. The
first two RELOCATE statements capture this intent. Conversely, files whose read
activity intensified above a threshold and whose write activity did not exceed a
threshold over the given period are moved into SSDs, while giving preference to
files with higher read activity.
The following figure illustrates the behavior of the example placement policy:
The files whose I/O activity falls in the light gray area are good candidates for moving
into SSD storage. These files have less write activity, so they have less impact on
wear leveling, and the slower write times of SSDs are less of a factor.
These files have intense read activity, which also makes the files ideal for placement
on SSDs since read activity does not cause any wear leveling side effects, and
reads are faster from SSDs. In contrast, the files whose I/O activity falls in the dark
gray area are good candidates to be moved out of SSD storage, since they have
more write activity or less read activity. Greater write activity leads to greater wear
leveling of the SSDs, and your file system's performance suffers from the slower
write times of SSDs. Lesser read activity means that you are not benefitting from
the faster read times of SSDs with these files.
Sub-file relocation
The sub-file relocation functionality relocates the data ranges of the specified files
to the specified target tier. Only one instance is allowed at a time on a given node
for a given mount.
You can move sub-file data by using the fsppadm subfilemove command. The
application using this framework calls the fsppadm subfilemove command
periodically via some external scheduling mechanism at desired intervals, to effect
relocations. The application might need to call subfilemove on each node of a
cluster, in case of a cluster file system, if you want to distribute the load. The
application also must arrange for initiating this relocation for new mounts and
reboots, if the application needs subfile relocations on those nodes or mounts.
In a cluster, enforcement can happen from multiple nodes. Even if each node is
scheduled to collect statistics at the same intervals, each node's persistence of
statistics into the database can be slightly out of sync with that of the other nodes.
Since enforcement should follow statistics collection, Symantec recommends that
you schedule enforcement on each node with a few minutes of lag so that all
nodes can finish synchronizing their statistics by that time. A lag time of 5 minutes
suffices in most cases.
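For example, on a two-node cluster you could stagger the runs by five minutes with
cron. The wrapper script named below is hypothetical; it would invoke the fsppadm
subfilemove command with the file list and target tier that your application requires:
# Node 1 crontab entry: run sub-file relocation every 4 hours
0 */4 * * * /usr/local/bin/run_subfilemove.sh
# Node 2 crontab entry: same interval, offset by 5 minutes
5 */4 * * * /usr/local/bin/run_subfilemove.sh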
Note: You cannot use SmartTier to compress files while using the sub-file relocation
functionality.
Chapter 27
Administering hot-relocation
This chapter includes the following topics:
About hot-relocation
About hot-relocation
If a volume has a disk I/O failure (for example, the disk has an uncorrectable error),
Veritas Volume Manager (VxVM) can detach the plex involved in the failure. I/O
stops on that plex but continues on the remaining plexes of the volume.
If a disk fails completely, VxVM can detach the disk from its disk group. All plexes
on the disk are disabled. If there are any unmirrored volumes on a disk when it is
detached, those volumes are also disabled.
Apparent disk failure may not be due to a fault in the physical disk media or the
disk controller, but may instead be caused by a fault in an intermediate or ancillary
component such as a cable, host bus adapter, or power supply.
The hot-relocation feature in VxVM automatically detects disk failures, and notifies
the system administrator and other nominated users of the failures by electronic
mail. Hot-relocation also attempts to use spare disks and free disk space to restore
redundancy and to preserve access to mirrored and RAID-5 volumes.
See How hot-relocation works on page 544.
If hot-relocation is disabled or you miss the electronic mail, you can use the vxprint
command or the graphical user interface to examine the status of the disks. You
may also see driver error messages on the console or in the system messages file.
Failed disks must be removed and replaced manually.
See Removing and replacing disks on page 667.
For more information about recovering volumes and their data after hardware failure,
see the Symantec Storage Foundation and High Availability Solutions
Troubleshooting Guide.
Hot-relocation is triggered by the following types of failure:
Disk failure
Plex failure
RAID-5 subdisk failure
When such a failure is detected, vxrelocd notifies the system administrator and other
nominated users by electronic mail of the failure and which VxVM objects are affected.
See Partial disk failure mail messages on page 547.
See Complete disk failure mail messages on page 548.
See Modifying the behavior of hot-relocation on page 558.
vxrelocd next determines if any subdisks can be relocated. vxrelocd looks for
suitable space on disks that have been reserved as hot-relocation spares, or for
free disk space, in the disk group where the failure occurred.
If relocation is not possible, vxrelocd notifies the system administrator and takes
no further action.
Warning: Hot-relocation does not guarantee the same layout of data or the same
performance after relocation. An administrator should check whether any
configuration changes are required after hot-relocation occurs.
Relocation of failing subdisks is not possible in the following cases:
The failing subdisks are on non-redundant volumes (that is, volumes of types
other than mirrored or RAID-5).
There are insufficient spare disks or free disk space in the disk group.
The only available space is on a disk that already contains a mirror of the failing
plex.
The only available space is on a disk that already contains the RAID-5 log plex
or one of its healthy subdisks. Failing subdisks in the RAID-5 plex cannot be
relocated.
If a mirrored volume has a dirty region logging (DRL) log subdisk as part of its
data plex, failing subdisks belonging to that plex cannot be relocated.
If a RAID-5 volume log plex or a mirrored volume DRL log plex fails, a new log
plex is created elsewhere. There is no need to relocate the failed subdisks of
the log plex.
Figure 27-1
A disk group contains five disks, mydg01 through mydg05. Two RAID-5 volumes are
configured across four of the disks, and one spare disk (mydg05) is available for
hot-relocation. When disk mydg01 fails, its subdisk mydg01-01 is relocated to the
spare disk as subdisk mydg05-01.
The -s option asks for information about individual subdisks, and the -ff option
displays the number of failed read and write operations. The following output display
is typical:
TYP NAME        FAILED
                READS   WRITES
sd  mydg01-04   0       0
sd  mydg01-06   0       0
sd  mydg02-03   1       0
sd  mydg02-04   1       0
This example shows failures on reading from subdisks mydg02-03 and mydg02-04
of disk mydg02.
Hot-relocation automatically relocates the affected subdisks and initiates any
necessary recovery procedures. However, if relocation is not possible or the
hot-relocation feature is disabled, you must investigate the problem and attempt to
recover the plexes. Errors can be caused by cabling failures, so check the cables
connecting your disks to your system. If there are obvious problems, correct them
and recover the plexes using the following command:
# vxrecover -b -g mydg home src
This starts recovery of the failed plexes in the background (the command prompt
reappears before the operation completes). If an error message appears later, or
if the plexes become detached again and there are no obvious cabling failures,
replace the disk.
See Removing and replacing disks on page 667.
This message shows that mydg02 was detached by a failure. When a disk is
detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01
were also detached (probably because of the failure of the disk).
One possible cause of the problem could be a cabling error.
See Partial disk failure mail messages on page 547.
If the problem is not a cabling error, replace the disk.
See Removing and replacing disks on page 667.
Hot-relocation tries to move all subdisks from a failing drive to the same destination
disk, if possible.
When hot-relocation takes place, the failed subdisk is removed from the configuration
database, and VxVM ensures that the disk space used by the failed subdisk is not
recycled as free space.
GROUP   DISK     DEVICE  TAG   OFFSET  LENGTH   FLAGS
mydg    mydg02   sdc     sdc   0       658007   s
Here mydg02 is the only disk designated as a spare in the mydg disk group. The
LENGTH field indicates how much spare space is currently available on mydg02 for
relocation.
The following commands can also be used to display information about disks that
are currently designated as spares:
vxdisk list lists disk information and displays spare disks with a spare flag.
vxprint lists disk and other information and displays spare disks with a SPARE
flag.
The list menu item on the vxdiskadm main menu lists all disks including spare
disks.
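From the command line, a disk is typically designated as a spare by setting its spare
flag with the vxedit command; the following sketch reuses the mydg disk group and
mydg01 disk from these examples:
# vxedit -g mydg set spare=on mydg01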
You can use the vxdisk list command to confirm that this disk is now a spare;
mydg01 should be listed with a spare flag.
Any VxVM disk in this disk group can now use this disk as a spare in the event of
a failure. If a disk fails, hot-relocation automatically occurs (if possible). You are
notified of the failure and relocation through electronic mail. After successful
relocation, you may want to replace the failed disk.
To use vxdiskadm to designate a disk as a hot-relocation spare
Select Mark a disk as a spare for a disk group from the vxdiskadm
main menu.
The following notice is displayed when the disk has been marked as spare:
VxVM NOTICE V-5-2-219 Marking of mydg01 in mydg as a spare disk
is complete.
At the following prompt, indicate whether you want to add more disks as spares
(y) or return to the vxdiskadm main menu (n):
Mark another disk as a spare? [y,n,q,?] (default: n)
Any VxVM disk in this disk group can now use this disk as a spare in the event
of a failure. If a disk fails, hot-relocation should automatically occur (if possible).
You should be notified of the failure and relocation through electronic mail.
After successful relocation, you may want to replace the failed disk.
Select Turn off the spare flag on a disk from the vxdiskadm main menu.
At the following prompt, enter the disk media name of a spare disk (such as
mydg01):
Enter disk name [<disk>,list,q,?] mydg01
At the following prompt, indicate whether you want to disable more spare disks
(y) or return to the vxdiskadm main menu (n):
Turn off spare flag on another disk? [y,n,q,?] (default: n)
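The equivalent command-line operation clears the spare flag with vxedit, sketched
here with the same example names:
# vxedit -g mydg set spare=off mydg01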
At the following prompt, enter the disk media name (such as mydg01):
Enter disk name [<disk>,list,q,?] mydg01
At the following prompt, indicate whether you want to add more disks to be
excluded from hot-relocation (y) or return to the vxdiskadm main menu (n):
Exclude another disk from hot-relocation use? [y,n,q,?]
(default: n)
Select Make a disk available for hot-relocation use from the vxdiskadm
main menu.
At the following prompt, enter the disk media name (such as mydg01):
Enter disk name [<disk>,list,q,?] mydg01
At the following prompt, indicate whether you want to make more disks available
for hot-relocation use (y) or return to the vxdiskadm main menu (n):
Make another disk available for hot-relocation use? [y,n,q,?]
(default: n)
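From the command line, excluding a disk from hot-relocation use and making it
available again are typically done by setting and clearing the nohotuse flag with
vxedit; the following sketch reuses the example names above:
# vxedit -g mydg set nohotuse=on mydg01
# vxedit -g mydg set nohotuse=off mydg01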
If not enough storage can be located on disks marked as spare, the relocation fails.
Any free space on non-spare disks is not used.
This message has information about the subdisk before relocation and can be used
to decide where to move the subdisk after relocation.
Here is an example message that shows the new location for the relocated subdisk:
To: root
Subject: Attempting VxVM relocation on host teal
Volume home Subdisk mydg02-03 relocated to mydg05-01,
but not yet recovered.
Before you move any relocated subdisks, fix or replace the disk that failed.
See Removing and replacing disks on page 667.
Once this is done, you can move a relocated subdisk back to the original disk as
described in the following sections.
Warning: During subdisk move operations, RAID-5 volumes are not redundant.
the object available again. This mechanism detects I/O failures in a subdisk,
relocates the subdisk, and recovers the plex associated with the subdisk. After the
disk has been replaced, vxunreloc allows you to restore the system back to the
configuration that existed before the disk failure. vxunreloc allows you to move
the hot-relocated subdisks back onto a disk that was replaced due to a failure.
When vxunreloc is invoked, you must specify the disk media name where the
hot-relocated subdisks originally resided. When vxunreloc moves the subdisks, it
moves them to the original offsets. If you try to unrelocate to a disk that is smaller
than the original disk that failed, vxunreloc does nothing except return an error.
vxunreloc provides an option to move the subdisks to a different disk from where
they were originally relocated. It also provides an option to unrelocate subdisks to
a different offset as long as the destination disk is large enough to accommodate
all the subdisks.
If vxunreloc cannot replace the subdisks back to the same original offsets, a force
option is available that allows you to move the subdisks to a specified disk without
using the original offsets.
See the vxunreloc(1M) manual page.
The examples in the following sections demonstrate the use of vxunreloc.
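For instance, to move the hot-relocated subdisks back onto their original disk mydg01
in the mydg disk group, at their original offsets, the basic invocation is:
# vxunreloc -g mydg mydg01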
The destination disk should have at least as much storage capacity as was in use
on the original disk. If there is not enough space, the unrelocate operation will fail
and none of the subdisks will be moved.
Move the existing subdisks somewhere else, and then re-run vxunreloc.
Use the -f option provided by vxunreloc to move the subdisks to the destination
disk, but leave it to vxunreloc to find the space on the disk. As long as the
destination disk is large enough so that the region of the disk for storing subdisks
can accommodate all subdisks, all the hot-relocated subdisks will be unrelocated
without using the original offsets.
Assume that mydg01 failed and the subdisks were relocated and that you want to
move the hot-relocated subdisks to mydg05 where some subdisks already reside.
You can use the force option to move the hot-relocated subdisks to mydg05, but not
to the exact offsets:
# vxunreloc -g mydg -f -n mydg05 mydg01
vxunreloc moves the data from each subdisk to the corresponding newly created
subdisk on the destination disk.
When all subdisk data moves have been completed successfully, vxunreloc
sets the comment field to the null string for each subdisk on the destination disk
whose comment field is currently set to UNRELOC.
The comment fields of all the subdisks on the destination disk remain marked as
UNRELOC until phase 3 completes. If its execution is interrupted, vxunreloc can
subsequently re-use subdisks that it created on the destination disk during a previous
execution, but it does not use any data that was moved to the destination disk.
If a subdisk data move fails, vxunreloc displays an error message and exits.
Determine the problem that caused the move to fail, and fix it before re-executing
vxunreloc.
If the system goes down after the new subdisks are created on the destination disk,
but before all the data has been moved, re-execute vxunreloc when the system
has been rebooted.
Warning: Do not modify the string UNRELOC in the comment field of a subdisk record.
To prevent vxrelocd from starting, comment out the entry that invokes it in the
startup file:
# nohup vxrelocd root &
By default, vxrelocd sends electronic mail to root when failures are detected
and relocation actions are performed. You can instruct vxrelocd to notify
additional users by adding the appropriate user names as shown here:
# nohup vxrelocd root user1 user2 &
where the optional IOdelay value indicates the desired delay in milliseconds.
The default value for the delay is 250 milliseconds.
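For example, assuming the -o slow=IOdelay form described in the vxrelocd(1M)
manual page, the following startup entry doubles the default delay:
# nohup vxrelocd -o slow=500 root &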
Chapter 28
Deduplicating data
This chapter includes the following topics:
Deduplicating data
Deduplication results
Deduplication supportability
Deduplication limitations
Identifies duplicates
Deduplicating data
About deduplicating data
The amount of space savings that you get from deduplicating depends on your
data. Deduplicating different data gives different space savings.
You deduplicate data using the fsdedupadm command.
See the fsdedupadm(1M) manual page.
Deduplication requires an Enterprise license.
Symantec recommends that you schedule deduplication when the system activity
is low. This ensures that the scheduler does not interfere with the regular workload
of the system.
Deduplicating data
You deduplicate data using the fsdedupadm command. The fsdedupadm command
performs the following functions:
Enabling and disabling deduplication on a file system
Scheduling deduplication runs
Performing a deduplication dry run
Querying the status of deduplication jobs
Removing the deduplication configuration
For the command syntax of these operations, see the fsdedupadm(1M) manual page.
The following example creates a file system, creates duplicate data on the file
system, and deduplicates the file system.
Example of deduplicating a file system
Make a temporary directory, temp1, on /mnt1 and copy the file1 file into the
directory:
# mkdir /mnt1/temp1
# cd /mnt1/temp1
# cp /root/file1 .
# /opt/VRTS/bin/fsadm -S shared /mnt1
Mountpoint
/mnt1
The file1 file is approximately 250 MB, as shown by the output of the fsadm
command.
Make another temporary directory, temp2, and copy the same file, file1, into
the new directory:
# mkdir /mnt1/temp2
# cd /mnt1/temp2
# cp /root/file1 .
# /opt/VRTS/bin/fsadm -S shared /mnt1
Mountpoint
/mnt1
By copying the same file into temp2, you now have duplicate data. The output
of the fsadm command shows that you are now using twice the amount of space.
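Deduplication is then run on the file system; a typical invocation, assuming the
start subcommand described in the fsdedupadm(1M) manual page, would be:
# /opt/VRTS/bin/fsdedupadm start /mnt1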
Verify that the file system was deduplicated by checking how much space you
are using:
# /opt/VRTS/bin/fsadm -S shared /mnt1
Mountpoint
/mnt1
The output shows that the used space is nearly identical to when you had only
one copy of the file1 file on the file system.
You can disable deduplication on a file system by using the fsdedupadm disable
command.
The following example disables deduplication on the file system mounted at /mnt1:
# /opt/VRTS/bin/fsdedupadm disable /mnt1
You must enable deduplication on the file system before you can set a schedule.
See Enabling and disabling deduplication on a file system on page 566.
You can schedule the deduplication run every hour or every specified number of
hours, and every day or every specified number of days. You can also schedule
the actual deduplication run to occur each time, or every specified number of times
that the scheduled time elapses. During times that deduplication does not occur,
the deduplication run only updates the fingerprints in the database.
The schedule commands are not cumulative. If a deduplication schedule comes
up while the previous deduplication process is running for any reason, the upcoming
deduplication is discarded and a warning message is displayed.
You can remove a schedule by specifying an empty string enclosed by double
quotes ("") for the schedule.
See the fsdedupadm(1M) manual page.
You must start the fsdedupschd daemon before scheduling the task:
# chkconfig --add fsdedupschd
# service fsdedupschd start
In the following example, deduplication for the file system /vx/fs1 will be done at
midnight, every other day:
# fsdedupadm setschedule "0 */2" /vx/fs1
In the following example, deduplication for the file system /vx/fs1 will be done
twice every day, once at midnight and once at noon:
# fsdedupadm setschedule "0,12 *" /vx/fs1
In the following example, deduplication for the file system /vx/fs1 will be done four
times every day, but only the fourth deduplication run will actually deduplicate the
file system. The other runs will do the scanning and processing. This option achieves
load distribution not only in a system, but also across the cluster.
# fsdedupadm setschedule "0,6,12,18 * 4" /vx/fs1
The following example removes the deduplication schedule from the file system
/vx/fs1:
# fsdedupadm setschedule "" /vx/fs1
You can specify fsdedupadm to perform the actual deduplication by specifying the
-o threshold option. In this case, the fsdedupadm command performs an actual
deduplication run if the expected space savings meets the specified threshold.
The following command initiates a deduplication dry run on the file system /mnt1,
and performs the actual deduplication if the expected space savings crosses the
threshold of 60 percent:
# fsdedupadm dryrun -o threshold=60 /mnt1
The following command queries the deduplication status of all running deduplication
jobs:
# fsdedupadm status all
If the fsdedupschd daemon was running before a reboot, the daemon
is automatically restarted after the reboot. If you stopped the fsdedupschd daemon
prior to a reboot, it remains stopped after the reboot. The default fsdedupschd
daemon state is stopped.
You must enable deduplication on the file system before you can start or stop the
scheduler daemon.
See Enabling and disabling deduplication on a file system on page 566.
The following command starts the fsdedupschd daemon:
# chkconfig --add fsdedupschd
# service fsdedupschd start
Deduplication results
The nature of the data is very important for deciding whether to enable deduplication.
Databases or media files, such as JPEG, MP3, and MOV, might not be the best
candidates for deduplication, as they have very little or no duplicate data. Virtual
machine boot image files (vmdk files), user home directories, and file systems with
multiple copies of files are good candidates for deduplication. While a smaller
deduplication chunk size normally results in higher storage savings, it takes longer
to deduplicate and requires a larger deduplication database.
Deduplication supportability
Veritas File System (VxFS) supports deduplication in the 6.0 release and later, and
on file system disk layout version 9 and later.
Deduplication limitations
The deduplication feature has the following limitations:
A full backup of a deduplicated Veritas File System (VxFS) file system can
require as much space in the target as a file system that has not been
deduplicated. For example, if you have 2 TB of data that occupies 1 TB worth
of disk space in the file system after deduplication, this data requires 2 TB of
space on the target to back up the file system, assuming that the backup target
does not do any deduplication. Similarly, when you restore such a file system,
you must have 2 TB on the file system to restore the complete data. However,
this freshly restored file system can be deduplicated again to regain the space
savings. After a full file system restore, Symantec recommends that you remove
any existing deduplication configuration using the fsdedupadm remove command
and that you reconfigure deduplication using the fsdedupadm enable command.
Deduplication does not support mounted clones or snapshot-mounted file
systems.
After you restore data from a backup, you must deduplicate the restored data
to regain any space savings provided by deduplication.
If you use the cross-platform data sharing feature to convert data from one
platform to another, you must remove the deduplication configuration file and
database, re-enable deduplication, and restart deduplication after the conversion.
The following example shows the commands that you must run, and you must
run the commands in the order shown:
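A sketch of that sequence, assuming a file system mounted at /mnt1 and the
subcommands described in the fsdedupadm(1M) manual page (enable may require
additional options, such as a chunk size, on your release):
# /opt/VRTS/bin/fsdedupadm remove /mnt1
# /opt/VRTS/bin/fsdedupadm enable /mnt1
# /opt/VRTS/bin/fsdedupadm start /mnt1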
You cannot use the FlashBackup feature of NetBackup in conjunction with the
data deduplication feature, because FlashBackup does not support disk layout
Versions 8 and 9.
Chapter 29
Compressing files
This chapter includes the following topics:
Compressing files
About compressing files
Compression algorithm
Report the compression savings in a file or directory tree:
vxcompress {-l|-L} [-r] file_or_dir ...
List the supported compression algorithms:
vxcompress -a
When reporting the compression details for a file, the vxcompress -l command or
vxcompress -L command displays the following information:
Compression algorithm
Strength
If you attempt to compress a file with the vxcompress command and the extents
have data that cannot be compressed, the command still marks the file as
compressed and replaces the extents with compressed extent descriptors.
If you recompress a file, you do not need to specify any options with the vxcompress
command. The command automatically uses the options that you used to compress
the file previously.
The following command compresses the file1 file, using the default algorithm and
strength of gzip-6:
$ vxcompress file1
The following command recursively compresses all files below the dir1 directory,
using the gzip algorithm at the highest strength (9):
$ vxcompress -r -t gzip-9 dir1
The following command compresses the file2 file and all files below the dir2
directory, using the gzip algorithm at strength 3, while limiting the vxcompress
command to a single thread and reducing the scheduling priority:
$ vxcompress -r -t gzip-3 file2 dir2
The following command displays the results of compressing the file1 file in
human-friendly units:
$ vxcompress -L file1
%Comp  Physical  Logical  %Exts  Alg-Str  BSize  Filename
99%    1 KB      159 KB   100%   gzip-6   1024k  file1
Interaction of compressed files and other commands
df
The df command shows the actual blocks in use by the file system.
This number includes the compression savings, but the command
does not display the savings explicitly.
See the df(1) manual page.
du
The du command usually uses the block count and thus implicitly
shows the results of compression, but the GNU du command has an
option to use the file size instead, which is not changed by
compression.
See the du(1) manual page.
fsadm -S
fsmap -p
ls -l
ls -s
The inode size reported by a stat call is the logical size, as shown by
the ls -l command. This size is not affected by compression. On
the other hand, the block count reflects the actual blocks used. As
such, the ls -s command shows the result of compression.
See the ls(1) manual page.
vxdump
Table 29-1
(continued)
Command
vxquota
Cross-Platform Data Sharing If you convert a disk or file system from one platform that
supports compression to a platform that does not support
compression and the file system contains compressed files,
the fscdsconv command displays a message that some
files violate the CDS limits and prompts you to confirm if you
want to continue the conversion. If you continue, the
conversion completes successfully, but the compressed files
will not be accessible on the new platform.
File Change Log
Table 29-2
(continued)
Feature
Storage Checkpoints
As an Oracle DBA, run the following query and get the archive log location:
SQL> select destination from v$archive_dest where status = 'VALID'
and valid_now = 'YES';
Compress all of the archive logs that are older than a day:
$ find /oraarch/MYDB -mtime +1 -exec /opt/VRTS/bin/vxcompress {} \;
You can run this step daily via a scheduler, such as cron.
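For example, a crontab entry along the following lines (the schedule is illustrative)
runs the compression daily at 2 a.m.:
0 2 * * * find /oraarch/MYDB -mtime +1 -exec /opt/VRTS/bin/vxcompress {} \;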
updates on all tables and objects residing in the tablespace, regardless of a user's
update privilege level. These kinds of read-only tablespaces are excellent candidates
for compression. In some cases such as month end reports, there may be large
queries executed against these read-only tablespaces. To make the report run
faster, you can uncompress the tablespace on demand before running the monthly
reports.
In the following example, a sporting goods company has its inventory divided into
two tablespaces: winter_items and summer_items. At the end of the Spring season,
you can compress the winter_items tablespace and uncompress the summer_items
tablespace. You can do the reverse actions at the end of the Summer season. The
following example procedure performs these tasks.
To compress and uncompress tablespaces depending on the season
Using SQL, get a list of files in each tablespace and store the result in the files
summer_files and winter_files:
SQL> select file_name from dba_data_files where
tablespace_name = 'WINTER_ITEM';
SQL> select file_name from dba_data_files where
tablespace_name = 'SUMMER_ITEM';
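Assuming the query output is spooled to the winter_files and summer_files lists
mentioned above, the corresponding data files can then be compressed and
uncompressed as follows:
$ /opt/VRTS/bin/vxcompress `/bin/cat winter_files`
$ /opt/VRTS/bin/vxcompress -u `/bin/cat summer_files`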
partition, and previous quarter records do not get updated. Since telecommunications
databases are generally very large, compressing last year's data provides great
savings.
In the following example, assume that the table CALL_DETAIL is partitioned by
quarters, and the partition names are CALL_2010_Q1, CALL_2010_Q2, and
CALL_2011_Q1, and so on. In the first Quarter of 2011, you can compress the
CALL_2010_Q1 data.
To compress the CALL_2010_Q1 partition
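A sketch of the procedure, assuming that the partition's data resides in a dedicated
tablespace of the same name (the data file names below are placeholders), is to list
the data files for that tablespace and then compress them:
SQL> select file_name from dba_data_files where
tablespace_name = 'CALL_2010_Q1';
$ /opt/VRTS/bin/vxcompress <data_file_1> <data_file_2> ...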
Select files that have the least I/O activity from the report and compress those
files:
$ /opt/VRTS/bin/vxcompress file1 file2 file3 ...
Periodically run the query again to ensure that the compressed files do not
have increased I/O activity. If I/O activity increases, uncompress the files:
$ /opt/VRTS/bin/vxcompress -u file1 file2 file3 ...
Monitor the I/O activity on compressed files periodically and uncompress the
files if I/O activity increases.
Section
Administering storage
Chapter 30
Configuring SmartMove
Removing a mirror
Decommissioning storage
Use the default disk group name that is specified by the environment variable
VXVM_DEFAULTDG. This variable can also be set to one of the reserved
system-wide disk group names: bootdg, defaultdg, or nodg.
See Displaying the system-wide boot disk group on page 586.
If the variable is undefined, the following rule is applied.
Use the disk group that has been assigned to the system-wide default disk group
alias, defaultdg.
See Displaying and specifying the system-wide default disk group on page 586.
If this alias is undefined, the following rule is applied.
If the operation can be performed without requiring a disk group name (for
example, an edit operation on disk access records), do so.
vxdg defaultdg
bootdg
Sets the default disk group to be the same as the currently defined system-wide
boot disk group.
nodg
From the vxdiskadm main menu, select Move volumes from a disk.
At the following prompt, enter the disk name of the disk whose volumes you
want to move, as follows:
Enter disk name [<disk>,list,q,?] mydg01
You can now optionally specify a list of disks to which the volume(s) should be
moved. At the prompt, do one of the following:
Press Enter to move the volumes onto available space in the disk group.
Specify the disks in the disk group that should be used, as follows:
As the volumes are moved from the disk, the vxdiskadm program displays the
status of the operation:
VxVM vxevac INFO V-5-2-24 Move volume voltest ...
When the volumes have all been moved, the vxdiskadm program displays the
following success message:
VxVM INFO V-5-2-188 Evacuation of disk mydg02 is complete.
At the following prompt, indicate whether you want to move volumes from
another disk (y) or return to the vxdiskadm main menu (n):
Move volumes from another disk? [y,n,q,?] (default: n)
Warning: This procedure does not save the configuration or data on the disks.
You can also move a disk by using the vxdiskadm command. Select Remove a
disk from the main menu, and then select Add or initialize a disk.
To move disks and preserve the data on these disks, along with VxVM objects,
such as volumes:
See Moving objects between disk groups on page 596.
To isolate volumes or disks from a disk group, and process them independently
on the same host or on a different host. This allows you to implement off-host
processing solutions for the purposes of backup or decision support.
To reduce the size of a disk group's configuration database in the event that its
private region is nearly full. This is a much simpler solution than the alternative
of trying to grow the private region.
Figure 30-1 illustrates the disk group move operation, showing the source and target
disk groups before and after the move.
Figure 30-2 illustrates the disk group split operation, showing the source disk group
before the split and the resulting disk groups after it.
The join operation removes all VxVM objects from an imported disk group and
moves them to an imported target disk group. The source disk group is removed
when the join is complete.
Figure 30-3 shows the join operation.
Warning: Before moving volumes between disk groups, stop all applications that
are accessing the volumes, and unmount all file systems that are configured on
these volumes.
If the system crashes or a hardware subsystem fails, VxVM attempts to complete
or reverse an incomplete disk group reconfiguration when the system is restarted
or the hardware subsystem is repaired, depending on how far the reconfiguration
had progressed. If one of the disk groups is no longer available because it has been
imported by another host or because it no longer exists, you must recover the disk
group manually.
See the Symantec Storage Foundation and High Availability Solutions
Troubleshooting Guide.
Disks cannot be moved between CDS and non-CDS compatible disk groups.
By default, VxVM automatically recovers and starts the volumes following a disk
group move, split or join. If you have turned off the automatic recovery feature,
volumes are disabled after a move, split, or join. Use the vxrecover -m and
vxvol startall commands to recover and restart the volumes.
See Setting the automatic recovery of volumes on page 629.
Data change objects (DCOs) and snap objects that have been dissociated by
Persistent FastResync cannot be moved between disk groups.
Veritas Volume Replicator (VVR) objects cannot be moved between disk groups.
For a disk group move to succeed, the source disk group must contain at least
one disk that can store copies of the configuration database after the move.
For a disk group split to succeed, both the source and target disk groups must
contain at least one disk that can store copies of the configuration database
after the split.
For a disk group move or join to succeed, the configuration database in the
target disk group must be able to accommodate information about all the objects
in the enlarged disk group.
Splitting or moving a volume into a different disk group changes the volume's
record ID.
The operation can only be performed on the master node of a cluster if either
the source disk group or the target disk group is shared.
If a cache object or volume set that is to be split or moved uses ISP volumes,
the storage pool that contains these volumes must also be specified.
The following example lists the objects that would be affected by moving volume
vol1 from disk group mydg to newdg:
# vxdg listmove mydg newdg vol1
mydg01 sda mydg05 sde vol1 vol1-01 vol1-02 mydg01-01 mydg05-01
However, the following command produces an error because only a part of the
volume vol1 is configured on the disk mydg01:
# vxdg listmove mydg newdg mydg01
VxVM vxdg ERROR V-5-2-4597 vxdg listmove mydg newdg failed
VxVM vxdg ERROR V-5-2-3091 mydg05 : Disk not moving, but
subdisks on it are
Specifying the -o expand option, as shown below, ensures that the list of objects
to be moved includes the other disks (in this case, mydg05) that are configured in
vol1:
# vxdg -o expand listmove mydg newdg mydg01
mydg01 sda mydg05 sde vol1 vol1-01 vol1-02 mydg01-01
mydg05-01
the move. You can use the vxprint command on a volume to examine the
configuration of its associated DCO volume.
If you use the vxassist command to create both a volume and its DCO, or the
vxsnap prepare command to add a DCO to a volume, the DCO plexes are
automatically placed on different disks from the data plexes of the parent volume.
In previous releases, version 0 DCO plexes were placed on the same disks as the
data plexes for convenience when performing disk group split and move operations.
As version 20 DCOs support dirty region logging (DRL) in addition to Persistent
FastResync, it is preferable for the DCO plexes to be separated from the data
plexes. This improves the performance of I/O from/to the volume, and provides
resilience for the DRL logs.
Figure 30-4 shows some instances in which it is not possible to split a disk group
because of the location of the DCO plexes on the disks of the disk group.
See Volume snapshots on page 81.
Figure 30-4 (examples of disk groups that can and cannot be split, depending on
where the volume data plexes, snapshot plexes, and their DCO plexes reside)
The -o expand option ensures that the objects that are actually moved include all
other disks containing subdisks that are associated with the specified objects or
with objects that they contain.
The default behavior of vxdg when moving licensed disks in an EMC array is to
perform an EMC disk compatibility check for each disk involved in the move. If the
compatibility checks succeed, the move takes place. vxdg then checks again to
ensure that the configuration has not changed since it performed the compatibility
check. If the configuration has changed, vxdg attempts to perform the entire move
again.
Note: You should only use the -o override and -o verify options if you are using
an EMC array with a valid timefinder license. If you specify one of these options
and do not meet the array and license requirements, a warning message is displayed
and the operation is ignored.
The -o override option enables the move to take place without any EMC checking.
The -o verify option returns the access names of the disks that would be moved
but does not perform the move.
The following output from vxprint shows the contents of disk groups rootdg and
mydg.
The output includes two utility fields, TUTIL0 and PUTIL0. VxVM creates these fields
to manage objects and communications between different commands and Symantec
products. The TUTIL0 values are temporary; they are not maintained on reboot.
The PUTIL0 values are persistent; they are maintained on reboot.
# vxprint
Disk group: rootdg

TY NAME       ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg     rootdg   -        -         -       -       -       -
dm rootdg02   sdb      -        17678493  -       -       -       -
dm rootdg03   sdc      -        17678493  -       -       -       -
dm rootdg04   sdd      -        17678493  -       -       -       -
dm rootdg06   sdf      -        17678493  -       -       -       -

Disk group: mydg

TY NAME       ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg mydg       mydg     -        -         -       -       -       -
dm mydg01     sda      -        17678493  -       -       -       -
dm mydg05     sde      -        17678493  -       -       -       -
dm mydg07     sdg      -        17678493  -       -       -       -
dm mydg08     sdh      -        17678493  -       -       -       -
v  vol1       fsgen    ENABLED  2048      -       ACTIVE  -       -
pl vol1-01    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg01-01  vol1-01  ENABLED  3591      0       -       -       -
pl vol1-02    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg05-01  vol1-02  ENABLED  3591      0       -       -       -
The following command moves the self-contained set of objects implied by specifying
disk mydg01 from disk group mydg to rootdg:
# vxdg -o expand move mydg rootdg mydg01
By default, VxVM automatically recovers and starts the volumes following a disk
group move. If you have turned off the automatic recovery feature, volumes are
disabled after a move. Use the following commands to recover and restart the
volumes in the target disk group:
# vxrecover -g targetdg -m [volume ...]
# vxvol -g targetdg startall
The output from vxprint after the move shows that not only mydg01 but also volume
vol1 and mydg05 have moved to rootdg, leaving only mydg07 and mydg08 in disk
group mydg:
# vxprint
Disk group: rootdg

TY NAME       ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg     rootdg   -        -         -       -       -       -
dm mydg01     sda      -        17678493  -       -       -       -
dm rootdg02   sdb      -        17678493  -       -       -       -
dm rootdg03   sdc      -        17678493  -       -       -       -
dm rootdg04   sdd      -        17678493  -       -       -       -
dm mydg05     sde      -        17678493  -       -       -       -
dm rootdg06   sdf      -        17678493  -       -       -       -
v  vol1       fsgen    ENABLED  2048      -       ACTIVE  -       -
pl vol1-01    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg01-01  vol1-01  ENABLED  3591      0       -       -       -
pl vol1-02    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg05-01  vol1-02  ENABLED  3591      0       -       -       -

Disk group: mydg

TY NAME       ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg mydg       mydg     -        -         -       -       -       -
dm mydg07     sdg      -        17678493  -       -       -       -
dm mydg08     sdh      -        17678493  -       -       -       -
The following command removes disks rootdg07 and rootdg08 from rootdg to
form a new disk group, mydg:
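A typical invocation uses the vxdg split operation (see the vxdg(1M) manual page
for the full syntax):
# vxdg split rootdg mydg rootdg07 rootdg08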
By default, VxVM automatically recovers and starts the volumes following a disk
group split. If you have turned off the automatic recovery feature, volumes are
disabled after a split. Use the following commands to recover and restart the volumes
in the target disk group:
# vxrecover -g targetdg -m [volume ...]
# vxvol -g targetdg startall
The output from vxprint after the split shows the new disk group, mydg:
# vxprint
Disk group: rootdg

TY NAME         ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg       rootdg   -        -         -       -       -       -
dm rootdg01     sda      -        17678493  -       -       -       -
dm rootdg02     sdb      -        17678493  -       -       -       -
dm rootdg03     sdc      -        17678493  -       -       -       -
dm rootdg04     sdd      -        17678493  -       -       -       -
dm rootdg05     sde      -        17678493  -       -       -       -
dm rootdg06     sdf      -        17678493  -       -       -       -
v  vol1         fsgen    ENABLED  2048      -       ACTIVE  -       -
pl vol1-01      vol1     ENABLED  3591      -       ACTIVE  -       -
sd rootdg01-01  vol1-01  ENABLED  3591      0       -       -       -
pl vol1-02      vol1     ENABLED  3591      -       ACTIVE  -       -
sd rootdg05-01  vol1-02  ENABLED  3591      0       -       -       -

Disk group: mydg

TY NAME         ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg mydg         mydg     -        -         -       -       -       -
dm rootdg07     sdg      -        17678493  -       -       -       -
dm rootdg08     sdh      -        17678493  -       -       -       -
The following output from vxprint shows the contents of the disk groups rootdg
and mydg.
The output includes two utility fields, TUTIL0 and PUTIL0. VxVM creates these
fields to manage objects and communications between different commands and
Symantec products. The TUTIL0 values are temporary; they are not maintained on
reboot. The PUTIL0 values are persistent; they are maintained on reboot.
# vxprint
Disk group: rootdg

TY NAME        ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg      rootdg   -        -         -       -       -       -
dm rootdg01    sda      -        17678493  -       -       -       -
dm rootdg02    sdb      -        17678493  -       -       -       -
dm rootdg03    sdc      -        17678493  -       -       -       -
dm rootdg04    sdd      -        17678493  -       -       -       -
dm rootdg07    sdg      -        17678493  -       -       -       -
dm rootdg08    sdh      -        17678493  -       -       -       -

Disk group: mydg

TY NAME        ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg mydg        mydg     -        -         -       -       -       -
dm mydg05      sde      -        17678493  -       -       -       -
dm mydg06      sdf      -        17678493  -       -       -       -
v  vol1        fsgen    ENABLED  2048      -       ACTIVE  -       -
pl vol1-01     vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg05-01   vol1-01  ENABLED  3591      0       -       -       -
pl vol1-02     vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg06-01   vol1-02  ENABLED  3591      0       -       -       -
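The join itself names the source disk group followed by the target disk group; a
typical invocation (see the vxdg(1M) manual page) is:
# vxdg join mydg rootdg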
By default, VxVM automatically recovers and starts the volumes following a disk
group join. If you have turned off the automatic recovery feature, volumes are
disabled after a join. Use the following commands to recover and restart the volumes
in the target disk group:
# vxrecover -g targetdg -m [volume ...]
# vxvol -g targetdg startall
The output from vxprint after the join shows that disk group mydg has been
removed:
# vxprint
Disk group: rootdg

TY NAME       ASSOC    KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg     rootdg   -        -         -       -       -       -
dm mydg01     sda      -        17678493  -       -       -       -
dm rootdg02   sdb      -        17678493  -       -       -       -
dm rootdg03   sdc      -        17678493  -       -       -       -
dm rootdg04   sdd      -        17678493  -       -       -       -
dm mydg05     sde      -        17678493  -       -       -       -
dm rootdg06   sdf      -        17678493  -       -       -       -
dm rootdg07   sdg      -        17678493  -       -       -       -
dm rootdg08   sdh      -        17678493  -       -       -       -
v  vol1       fsgen    ENABLED  2048      -       ACTIVE  -       -
pl vol1-01    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg01-01  vol1-01  ENABLED  3591      0       -       -       -
pl vol1-02    vol1     ENABLED  3591      -       ACTIVE  -       -
sd mydg05-01  vol1-02  ENABLED  3591      0       -       -       -
vxassist
vxevac
vxmirror
vxplex
vxrecover
vxrelayout
vxresize
vxsd
vxvol
For example, to execute a vxrecover command and track the resulting tasks as a
group with the task tag myrecovery, use the following command:
# vxrecover -g mydg -t myrecovery -b mydg05
Any tasks started by the utilities invoked by vxrecover also inherit its task ID and
task tag, establishing a parent-child task relationship.
For more information about the utilities that support task tagging, see their respective
manual pages.
vxtask operations
The vxtask command supports the following operations:
abort
Stops the specified task. In most cases, the operations back out as if
an I/O error occurred, reversing what has been done so far to the largest
extent possible.
list
Displays a one-line summary for each task running on the system. The
-l option prints tasks in long format. The -h option prints tasks
hierarchically, with child tasks following the parent tasks. By default, all
tasks running on the system are printed. If you include a taskid
argument, the output is limited to those tasks whose taskid or task
tag match taskid. The remaining arguments filter tasks and limit which
ones are listed.
In this release, the vxtask list command supports SmartMove and
thin reclamation operations.
monitor
Prints information continuously about a task or group of tasks as the task
information changes. This lets you track the progress of long-running operations.
pause
Puts a running task in the paused state, causing it to suspend operation.
resume
Causes a paused task to continue operation.
set
Changes modifiable parameters of a task, such as the rate at which the operation
progresses.
To list all tasks that are currently running on the system, use the following command:
# vxtask list
To print tasks hierarchically, with child tasks following the parent tasks, specify the
-h option, as follows:
# vxtask -h list
To trace all paused tasks in the disk group mydg, as well as any tasks with the tag
sysstart, use the following command:
# vxtask -g mydg -p -i sysstart list
To list all paused tasks, use the vxtask -p list command. To continue execution
(the task may be specified by its ID or by its tag), use vxtask resume:
# vxtask -p list
# vxtask resume 167
To monitor all tasks with the tag myoperation, use the following command:
# vxtask monitor myoperation
To cause all tasks tagged with recovall to exit, use the following command:
# vxtask abort recovall
This command causes VxVM to try to reverse the progress of the operation so far.
For example, aborting an Online Relayout results in VxVM returning the volume to
its original layout.
See Controlling the progress of a relayout on page 611.
For example, the following vxnotify command displays information about all disk,
plex, and volume detachments as they occur:
# vxnotify -f
The following destination layouts are supported for an online relayout operation:

concat-mirror    Concatenated-mirror
concat           Concatenated
nomirror         Concatenated
nostripe         Concatenated
raid5            RAID-5
span             Concatenated
stripe           Striped
Sometimes, you may need to perform a relayout on a plex rather than on a volume.
See Specifying a plex for relayout on page 610.
Relayout to      From concat

concat           No.
concat-mirror
mirror-concat
mirror-stripe
raid5
stripe
stripe-mirror
Relayout to      From concat-mirror

concat           No. Use vxassist convert, and then remove the unwanted mirrors
                 from the resulting mirrored-concatenated volume instead.
concat-mirror    No.
mirror-concat
mirror-stripe
raid5            Yes.
stripe           Yes. This relayout removes a mirror and adds striping. The stripe
                 width and number of columns may be defined.
stripe-mirror
Table 30-3 shows the supported relayout transformations for RAID-5 volumes.
Table 30-3       Supported relayout transformations for RAID-5 volumes

Relayout to      From RAID-5

concat           Yes.
concat-mirror    Yes.
mirror-concat
mirror-stripe
raid5
stripe
stripe-mirror

Relayout to      From mirror-concat

concat
concat-mirror
mirror-concat    No.
mirror-stripe
raid5            Yes. The stripe width and number of columns may be defined. Choose
                 a plex in the existing mirrored volume on which to perform the
                 relayout. The other plexes are removed at the end of the relayout
                 operation.
stripe           Yes.
stripe-mirror    Yes.
Table 30-5 shows the supported relayout transformations for mirrored-stripe volumes.
Table 30-5       Supported relayout transformations for mirrored-stripe volumes

Relayout to      From mirror-stripe

concat           Yes.
concat-mirror    Yes.
mirror-concat
mirror-stripe
raid5
stripe
stripe-mirror
Table 30-6 shows the supported relayout transformations for unmirrored stripe and
layered striped-mirror volumes.
Table 30-6       Supported relayout transformations for unmirrored stripe and
                 layered striped-mirror volumes

Relayout to      From stripe or stripe-mirror

concat           Yes.
concat-mirror    Yes.
mirror-concat
mirror-stripe
raid5
stripe
stripe-mirror
ncol=number      Specifies the number of columns in the destination layout.
ncol=+number     Specifies the number of columns to add.
ncol=-number     Specifies the number of columns to remove.
stripeunit=size  Specifies the stripe width.
The following examples use vxassist to change the stripe width and number of
columns for a striped volume in the disk group dbasedg:
# vxassist -g dbasedg relayout vol03 stripeunit=64k ncol=6
# vxassist -g dbasedg relayout vol03 ncol=+2
# vxassist -g dbasedg relayout vol03 stripeunit=128k
For relayout operations that have not been stopped using the vxtask pause
command (for example, the vxtask abort command was used to stop the task,
the transformation process died, or there was an I/O failure), resume the relayout
by specifying the start keyword to vxrelayout, as follows:
# vxrelayout -g mydg -o bg start vol04
If you use the vxrelayout start command to restart a relayout that you previously
suspended using the vxtask pause command, a new untagged task is created to
complete the operation. You cannot then use the original task tag to control the
relayout.
The -o bg option restarts the relayout in the background. You can also specify the
slow and iosize option modifiers to control the speed of the relayout and the size
of each region that is copied. For example, the following command inserts a delay
of 1000 milliseconds (1 second) between copying each 10 MB region:
# vxrelayout -g mydg -o bg,slow=1000,iosize=10m start vol04
The default delay and region size values are 250 milliseconds and 1 MB respectively.
To reverse the direction of relayout operation that is stopped, specify the reverse
keyword to vxrelayout as follows:
# vxrelayout -g mydg -o bg reverse vol04
This undoes changes made to the volume so far, and returns it to its original layout.
If you cancel a relayout using vxtask abort, the direction of the conversion is also
reversed, and the volume is returned to its original configuration.
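To check how far a relayout has progressed, you can use the status keyword of
vxrelayout (see the vxrelayout(1M) manual page); for example, for the volume used
in the previous examples:
# vxrelayout -g mydg status vol04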
See Managing tasks with vxtask on page 603.
See the vxrelayout(1M) manual page.
See the vxtask(1M) manual page.
Specifying the -b option makes synchronizing the new mirror a background task.
For example, to create a mirror of the volume voltest in the disk group, mydg, use
the following command:
# vxassist -b -g mydg mirror voltest
You can also mirror a volume by creating a plex and then attaching it to a volume
using the following commands:
# vxmake [-g diskgroup] plex plex sd=subdisk ...
# vxplex [-g diskgroup] att volume plex
If you make this change, you can still make unmirrored volumes by specifying
nmirror=1 as an attribute to the vxassist command. For example, to create an
unmirrored 20-gigabyte volume named nomirror in the disk group mydg, use the
following command:
# vxassist -g mydg make nomirror 20g nmirror=1
Make sure that the target disk has at least as much space as the source disk.
At the prompt, enter the disk name of the disk that you wish to mirror:
Enter disk name [<disk>,list,q,?] mydg02
At the prompt, enter the target disk name (this disk must be the same size or
larger than the originating disk):
Enter destination disk [<disk>,list,q,?] (default: any) mydg01
At the prompt, indicate whether you want to mirror volumes on another disk
(y) or return to the vxdiskadm main menu (n):
Mirror volumes on another disk? [y,n,q,?] (default: n)
Configuring SmartMove
By default, the SmartMove utility is enabled for all volumes. Configuring the
SmartMove feature is only required if you want to change the default behavior, or
if you have modified the behavior previously.
The SmartMove setting has three possible values that determine when SmartMove
is applied. The three values are:

Value       Meaning
none        SmartMove is not used for any disks.
thinonly    SmartMove is used only for thin provisioned disks.
all         SmartMove is used for all disks. This is the default value.
To display the current and default SmartMove values, type the following
command:
# vxdefault list
KEYWORD            CURRENT-VALUE   DEFAULT-VALUE
usefssmartmove     all             all
...
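To change the SmartMove value, a command of the following form can be used
with the vxdefault command; the value shown is an example:
# vxdefault set usefssmartmove thinonly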
Removing a mirror
When you no longer need a mirror, you can remove it to free disk space.
Note: VxVM will not allow you to remove the last valid plex associated with a volume.
To remove a mirror from a volume, use the following command:
# vxassist [-g diskgroup] remove mirror volume
You can also use storage attributes to specify the storage to be removed. For
example, to remove a mirror on disk mydg01 from volume vol01, enter the following.
Note: The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
# vxassist -g mydg remove mirror vol01 \!mydg01
For example, to dissociate and remove a mirror named vol01-02 from the disk
group mydg, use the following command:
# vxplex -g mydg -o rm dis vol01-02
This command removes the mirror vol01-02 and all associated subdisks. This is
equivalent to entering the following commands separately:
# vxplex -g mydg dis vol01-02
# vxedit -g mydg -r rm vol01-02
To list the tags that are associated with a volume, use the following command:
# vxassist [-g diskgroup] listtag [volume|vset]
If you do not specify a volume name, all the volumes and vsets in the disk group
are displayed. The acronym vt in the TY field indicates a vset.
The following is a sample listtag command:
# vxassist -g dg1 listtag vol
To list the volumes that have a specified tag name, use the following command:
# vxassist [-g diskgroup] list tag=tagname volume
Tag names and tag values are case-sensitive character strings of up to 256
characters. Tag names can consist of the following ASCII characters:
Letters (A through Z and a through z)
Numbers (0 through 9)
Dashes (-)
Underscores (_)
Periods (.)
A tag name must start with either a letter or an underscore. A tag name must not
be the same as the name of a disk in the disk group.
The tag names site, udid, and vdid are reserved. Do not use them. To avoid
possible clashes with future product features, do not start tag names with any of
the following strings: asl, be, nbu, sf, symc, or vx.
Tag values can consist of any ASCII character that has a decimal value from 32
through 127. If a tag value includes spaces, quote the specification to protect it from
the shell, as follows:
# vxassist -g mydg settag myvol "dbvol=table space 1"
The list operation understands dotted tag hierarchies. For example, the listing for
tag=a.b includes all volumes that have tag names starting with a.b.
You must explicitly upgrade the disk group to the appropriate disk group version to
use the feature.
See Upgrading the disk group version on page 622.
Table 30-7 summarizes the Veritas Volume Manager releases that introduce and
support specific disk group versions. It also summarizes the features that are
supported by each disk group version.
Table 30-7      Disk group version assignments

VxVM release  Introduces disk  New features supported             Supports disk
              group version                                       group versions *

6.1           190              SmartIO caching
                               Flexible storage sharing
6.0.1         180
6.0           170
5.1SP1        160
5.1           150
5.0           140              Data migration, Remote Mirror,
                               coordinator disk groups (used by
                               VCS), linked volumes, snapshot
                               LUN import.
5.0           130              VVR Enhancements
4.1           120              Automatic Cluster-wide Failback
                               for A/P arrays
                               Persistent DMP Policies
                               Shared Disk Group Failure Policy
4.0           110              Cross-platform Data Sharing        20, 30, 40, 50, 60, 70,
                               (CDS)                              80, 90, 110
                               Device Discovery Layer (DDL) 2.0
                               Disk Group Configuration Backup
                               and Restore
                               Elimination of rootdg as a
                               Special Disk Group
                               Full-Sized and Space-Optimized
                               Instant Snapshots
                               Intelligent Storage Provisioning
                               (ISP)
                               Serial Split Brain Detection
                               Volume Sets (Multiple Device
                               Support for VxFS)
3.2, 3.5      90               Cluster Support for Oracle         20, 30, 40, 50, 60, 70,
                               Resilvering                        80, 90
                               Disk Group Move, Split and Join
                               Device Discovery Layer (DDL) 1.0
                               Layered Volume Support in
                               Clusters
                               Ordered Allocation
                               OS Independent Naming Support
                               Persistent FastResync
3.1.1         80               VVR Enhancements
3.1           70               Non-Persistent FastResync
                               Sequential DRL
                               Unrelocate
                               VVR Enhancements
3.0           60               Online Relayout
                               Safe RAID-5 Subdisk Moves
2.5           50               SRVM (now known as Veritas         20, 30, 40, 50
                               Volume Replicator or VVR)
2.3           40               Hot-Relocation                     20, 30, 40
2.2           30               VxSmartSync Recovery               20, 30
                               Accelerator
2.0           20               Dirty Region Logging (DRL)         20
                               Disk Group Configuration Copy
                               Limiting
                               Mirrored Volumes Logging
                               New-Style Stripes
                               RAID-5 Volumes
                               Recovery Checkpointing
1.3           15                                                  15
1.2           10                                                  10
* To support new features, the disk group must be at least the disk group version
of the release when the feature was introduced.
If you need to import a disk group on a system running an older version of Veritas
Volume Manager, you can create a disk group with an earlier disk group version.
See Creating a disk group with an earlier disk group version on page 623.
You can also determine the disk group version by using the vxprint command
with the -l format option.
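For example, a command of the following form displays the version line from the
long-format listing for the disk group mydg:
# vxprint -l mydg | grep version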
To upgrade a disk group to the highest version supported by the release of VxVM
that is currently running, use this command:
# vxdg upgrade dgname
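For example, to upgrade the disk group mydg:
# vxdg upgrade mydg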
STATE      ID
enabled    730344554.1025.tweety
enabled    731118794.1213.tweety
To display more detailed information on a specific disk group, use the following
command:
# vxdg list diskgroup
When you apply this command to a disk group named mydg, the output is similar
to the following:
# vxdg list mydg
Group: mydg
dgid: 962910960.1025.bass
import-id: 0.1
flags:
version: 160
local-activation: read-write
alignment: 512 (bytes)
ssb: on
detach-policy: local
copies: nconfig=default nlog=default
To verify the disk group ID and name that is associated with a specific disk (for
example, to import the disk group), use the following command:
# vxdisk -s list devicename
This command provides output that includes the following information for the
specified disk. For example, the output for disk sdc is as follows:
Disk:     sdc
type:     simple
flags:    online ready private autoconfig autoimport imported
diskid:   963504891.1070.bass
dgname:   newdg
dgid:     963504895.1075.bass
hostid:   bass
info:     privoffset=128
DISK      DEVICE   TAG   OFFSET   LENGTH    FLAGS
mydg01    sda      sda   0        4444228   -
mydg02    sdb      sdb   0        4443310   -
newdg01   sdc      sdc   0        4443310   -
newdg02   sdd      sdd   0        4443310   -
oradg01   sde      sde   0        4443310   -
To display free space for a disk group, use the following command:
# vxdg -g diskgroup free
For example, to display the free space in the disk group, mydg, use the following
command:
# vxdg -g mydg free
The following example output shows the amount of free space in sectors:
DISK      DEVICE   TAG   OFFSET   LENGTH    FLAGS
mydg01    sda      sda   0        4444228   -
mydg02    sdb      sdb   0        4443310   -
For example, to create a disk group named mktdg on device sdc, enter the following:
# vxdg init mktdg mktdg01=sdc
The disk that is specified by the device name, sdc, must have been previously
initialized with vxdiskadd or vxdiskadm. The disk must not currently belong to a
disk group.
You can use the cds attribute with the vxdg init command to specify whether a
new disk group is compatible with the Cross-platform Data Sharing (CDS) feature.
Newly created disk groups are compatible with CDS by default (equivalent to
specifying cds=on). If you want to change this behavior, edit the file
/etc/default/vxdg and set the attribute-value pair cds=off in this file before
you create a new disk group.
For example, to remove mydg02 from the disk group mydg, enter the following:
# vxdg -g mydg rmdisk mydg02
If the disk has subdisks on it when you try to remove it, the following error message
is displayed:
VxVM vxdg ERROR V-5-1-552 Disk diskname is used by one or more
subdisks
Use -k to remove device assignment.
Using the -k option lets you remove the disk even if it has subdisks.
See the vxdg(1M) manual page.
Warning: Use of the -k option to vxdg can result in data loss.
After you remove the disk from its disk group, you can (optionally) remove it from
VxVM control completely. Enter the following:
# vxdiskunsetup devicename
For example, to remove the disk sdc from VxVM control, enter the following:
# vxdiskunsetup sdc
You can remove a disk on which some subdisks of volumes are defined. For
example, you can consolidate all the volumes onto one disk. If you use vxdiskadm
to remove a disk, you can choose to move volumes off that disk. To do this, run
vxdiskadm and select Remove a disk from the main menu.
If the disk is used by some volumes, this message is displayed:
VxVM ERROR V-5-2-369 The following volumes currently use part of
disk mydg02:
home usrvol
Volumes must be moved from mydg02 before it can be removed.
Move volumes to other disks? [y,n,q,?] (default: n)
If you choose y, all volumes are moved off the disk, if possible. Some volumes may
not be movable. The most common reasons why a volume may not be movable
are as follows:
If vxdiskadm cannot move some volumes, you may need to remove some plexes
from some disks to free more space before proceeding with the disk removal
operation.
Stop all activity by applications to volumes that are configured in the disk group
that is to be deported. Unmount file systems and shut down databases that
are configured on the volumes.
If the disk group contains volumes that are in use (for example, by mounted
file systems or databases), deportation fails.
To stop the volumes in the disk group, use the following command:
# vxvol -g diskgroup stopall
From the vxdiskadm main menu, select Remove access to (deport) a disk
group.
At the following prompt, enter the name of the disk group to be deported (in this
example, newdg):
Enter name of disk group [<group>,list,q,?] (default: list)
newdg
At the following prompt, enter y if you intend to remove the disks in this disk
group:
Disable (offline) the indicated disks? [y,n,q,?]
(default: n) y
After the disk group is deported, the vxdiskadm utility displays the following
message:
VxVM INFO V-5-2-269 Removal of disk group newdg was
successful.
At the following prompt, indicate whether you want to disable another disk
group (y) or return to the vxdiskadm main menu (n):
Disable another disk group? [y,n,q,?] (default: n)
You can use the following vxdg command to deport a disk group:
# vxdg deport diskgroup
To ensure that the disks in the deported disk group are online, use the following
command:
# vxdisk -s list
From the vxdiskadm main menu, select Enable access to (import) a disk
group.
At the following prompt, enter the name of the disk group to import (in this
example, newdg):
Select disk group to import [<group>,list,q,?] (default: list)
newdg
When the import finishes, the vxdiskadm utility displays the following success
message:
VxVM INFO V-5-2-374 The import of newdg was successful.
At the following prompt, indicate whether you want to import another disk group
(y) or return to the vxdiskadm main menu (n):
Select another disk group? [y,n,q,?] (default: n)
You can also use the following vxdg command to import a disk group:
# vxdg import diskgroup
You can also import the disk group as a shared disk group.
To turn off automatic volume recovery for the entire system, use the following
command.
# vxtune autostartvolumes off
OR
To turn off automatic volume recovery for a specific disk group import operation,
use the noautostart option.
# vxdg -o noautostart import diskgroup
Note: To make the new division take effect, you must run vxdctl enable or restart
vxconfigd after the tunable is changed in the defaults file. The division on all the
cluster nodes must be exactly the same, to prevent failures during node join,
volume creation, or disk group import operations.
To change the division between shared and private minor numbers
You cannot set the shared minor numbers to start at less than 1000. If
sharedminorstart is set to values between 0 to 999, the division of private
minor numbers and shared disk group minor numbers is set to 1000. The value
of 0 disables dynamic renumbering.
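Assuming that the sharedminorstart tunable is set with the vxtune command, as
for the other tunables in this guide, a command of the following form changes the
division point; the value shown is illustrative:
# vxtune sharedminorstart 20000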
In certain scenarios, you may need to disable the division between shared minor
numbers and private minor numbers. For example, to prevent the device minor
numbers from being changed when you upgrade from a previous release. In this
case, disable the dynamic reminoring before you install the new VxVM rpm.
To disable the division between shared and private minor numbers
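Assuming that the sharedminorstart tunable is set with the vxtune command, a
value of 0 disables the division, as described above:
# vxtune sharedminorstart 0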
Confirm that all disks in the diskgroup are visible on the target system. This
may require masking and zoning changes.
On the source system, stop all volumes in the disk group, then deport (disable
local access to) the disk group with the following command:
# vxdg deport diskgroup
Move all the disks to the target system and perform the steps necessary
(system-dependent) for the target system and VxVM to recognize the new
disks.
This can require a reboot, in which case the vxconfigd daemon is restarted
and recognizes the new disks. If you do not reboot, use the command vxdctl
enable to restart the vxconfigd program so VxVM also recognizes the disks.
Import (enable local access to) the disk group on the target system with this
command:
# vxdg import diskgroup
Warning: All disks in the disk group must be moved to the other system. If they
are not moved, the import fails.
By default, VxVM enables and starts any disabled volumes after the disk group
is imported.
See Setting the automatic recovery of volumes on page 629.
If the automatic volume recovery feature is turned off, start all volumes with
the following command:
# vxrecover -g diskgroup -sb
You can also move disks from a system that has crashed. In this case, you
cannot deport the disk group from the source system. When a disk group is
created or imported on a system, that system writes a lock on all disks in the
disk group.
Warning: The purpose of the lock is to ensure that SAN-accessed disks are
not used by both systems at the same time. If two systems try to access the
same disks at the same time, this must be managed using software such as
the clustering functionality of VxVM. Otherwise, data and configuration
information stored on the disk may be corrupted, and may become unusable.
The next message indicates that the disk group does not contain any valid disks
(not that it does not contain any disks):
VxVM vxdg ERROR V-5-1-587 Disk group groupname: import failed:
No valid disk found containing disk group
The disks may be considered invalid due to a mismatch between the host ID in
their configuration copies and that stored in the /etc/vx/volboot file.
To clear locks on a specific set of devices, use the following command:
# vxdisk clearimport devicename ...
If some of the disks in the disk group have failed, you can force the disk group to
be imported by specifying the -f option to the vxdg import command:
# vxdg -f import diskgroup
Warning: Be careful when using the -f option. It can cause the same disk group to
be imported twice from different sets of disks. This can cause the disk group
configuration to become inconsistent.
See Handling conflicting configuration copies on page 646.
Because using the -f option to force the import of an incomplete disk group counts
as a successful import, an incomplete disk group may be imported subsequently
without this option being specified. This may not be what you expect.
You can also import the disk group as a shared disk group.
These operations can also be performed using the vxdiskadm utility. To deport a
disk group using vxdiskadm, select Remove access to (deport) a disk group
from the main menu. To import a disk group, select Enable access to (import)
a disk group. The vxdiskadm import operation checks for host import locks and
prompts to see if you want to clear any that are found. It also starts volumes in the
disk group.
unallocated minor numbers near the top of this range to allow for temporary device
number remapping in the event that a device minor number collision may still occur.
VxVM reserves the range of minor numbers from 0 to 999 for use with volumes in
the boot disk group. For example, the rootvol volume is always assigned minor
number 0.
If you do not specify the base of the minor number range for a disk group, VxVM
chooses one at random. The number chosen is at least 1000, is a multiple of 1000,
and yields a usable range of 1000 device numbers. The chosen number also does
not overlap within a range of 1000 of any currently imported disk groups, and it
does not overlap any currently allocated volume device numbers.
Note: The default policy ensures that a small number of disk groups can be merged
successfully between a set of machines. However, where disk groups are merged
automatically using failover mechanisms, select ranges that avoid overlap.
To view the base minor number for an existing disk group, use the vxprint
command as shown in the following examples for the disk group, mydg:
# vxprint -l mydg | grep minors
minors: >=45000
# vxprint -g mydg -m | egrep base_minor
base_minor=45000
To set a base volume device minor number for a disk group that is being created,
use the following command:
# vxdg init diskgroup minor=base_minor disk_access_name ...
For example, the following command creates the disk group, newdg, that includes
the specified disks, and has a base minor number of 30000:
# vxdg init newdg minor=30000 sdc sdd
If a disk group already exists, you can use the vxdg reminor command to change
its base minor number:
# vxdg -g diskgroup reminor new_base_minor
For example, the following command changes the base minor number to 30000 for
the disk group, mydg:
# vxdg -g mydg reminor 30000
If a volume is open, its old device number remains in effect until the system is
rebooted or until the disk group is deported and re-imported. If you close the open
volume, you can run vxdg reminor again to allow the renumbering to take effect
without rebooting or re-importing.
An example of where it is necessary to change the base minor number is for a
cluster-shareable disk group. The volumes in a shared disk group must have the
same minor number on all the nodes. If there is a conflict between the minor numbers
when a node attempts to join the cluster, the join fails. You can use the reminor
operation on the nodes that are in the cluster to resolve the conflict. In a cluster
where more than one node is joined, use a base minor number which does not
conflict on any node.
See the vxdg(1M) manual page.
See Handling of minor number conflicts on page 630.
Note: Such a disk group may still not be importable by VxVM 4.0 on Linux with a
pre-2.6 kernel if it would increase the number of minor numbers on the system that
are assigned to volumes to more than 4079, or if the number of available extended
major numbers is smaller than 15.
You can use the following command to discover the maximum number of volumes
that are supported by VxVM on a Linux host:
# cat /proc/sys/vxvm/vxio/vol_max_volumes
4079
See Importing the existing disk group with only the cloned disks on page 639.
After DDL recognizes the LUN, turn on name persistence using the following
command:
# vxddladm set namingscheme=ebn persistence=yes use_avid=yes
Use the following command to update the unique disk identifier (UDID) for one
or more disks.
# vxdisk [-cf] [-g diskgroup ] updateudid disk ...
For example, the following command updates the UDIDs for the disks sdg and
sdh:
# vxdisk updateudid sdg sdh
The -f option must be specified if VxVM has not set the udid_mismatch flag
for a disk.
If VxVM has set the udid_mismatch flag on a disk that is not a clone, specify
the -c option to remove the udid_mismatch flag and the clone flag.
Importing the existing disk group with only the cloned disks
If the standard (non-clone) disks in a disk group are not imported, you can import
the existing disk group with only the cloned disks. By default, the clone_disk flag
is set on the disks so that you can continue to distinguish between the original disks
and the cloned disks.
This procedure is useful for temporary scenarios. For example, if you want to import
only the clone disks to verify the point-in-time copy. After you have verified the clone
disks, you can deport the clone disks and import the standard disks again.
Be sure to import a consistent set of cloned disks, which represent a single
point-in-time copy of the original disks. Each of the disks must have either the
udid_mismatch flag or the clone_disk flag or both. No two of the disks should
have the same UDID. That is, there must not be two copies of the same original
disk.
You must use disk tags if multiple copies of disks in a disk group are present on
the system.
See Importing a set of cloned disks with tags on page 641.
VxVM does not support a disk group with both clone and non-clone disks. If you
want to import both clone disks and standard disks simultaneously, you must specify
a new disk group name for the clone disk group.
See Importing the cloned disks as a new standard disk group on page 640.
Make sure that at least one of the cloned disks has a copy of the current
configuration database in its private region.
See Setting up configuration database copies (metadata) for a disk group
on page 643.
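A command of the following form, using the useclonedev option that is shown in
the tagged-import example later in this section, imports only the cloned disks:
# vxdg -o useclonedev=on [-o updateid] import mydg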
This form of the command allows only cloned disks to be imported. All
non-cloned disks remain unimported.
Specify the -o updateid option to write new identification attributes to the
disks, and to set the clone_disk flag on the disks. (The vxdisk set clone=on
command can also be used to set the flag.)
Make sure that at least one of the cloned disks has a copy of the current
configuration database in its private region.
See Setting up configuration database copies (metadata) for a disk group
on page 643.
TYPE           DISK     GROUP    STATUS
auto:cdsdisk   mydg01   mydg     online
auto:cdsdisk   mydg02   mydg     online
auto:cdsdisk   -        (mydg)   online udid_mismatch
auto:cdsdisk   -        (mydg)   online udid_mismatch
auto:cdsdisk   -        (mydg)   online udid_mismatch
auto:cdsdisk   -        (mydg)   online udid_mismatch
If the disks are not already tagged, use the following command to tag all the
disks in the disk group that are to be imported:
# vxdisk [-g diskgroup] settag tagname disk ...
To check which disks are tagged, use the vxdisk listtag command:
# vxdisk listtag
DEVICE    NAME       VALUE
EMC0_8    snaptag1   snap1
EMC0_15   snaptag1   snap1
EMC0_18   snaptag2   snap2
EMC0_24   snaptag2   snap2
To import the cloned disks that are tagged as snaptag1, update the UDIDs.
You must assign a disk group name other than mydg, because the mydg disk
group is already imported.
# vxdg -n bcvdg -o useclonedev=on -o tag=snaptag1 -o updateid \
import mydg
# vxdisk -o alldgs list
DEVICE    TYPE           DISK     GROUP    STATUS
EMC0_4    auto:cdsdisk   mydg01   mydg     online
EMC0_6    auto:cdsdisk   mydg02   mydg     online
EMC0_8    auto:cdsdisk   mydg01   bcvdg    online
EMC0_15   auto:cdsdisk   mydg02   bcvdg    online
EMC0_18   auto:cdsdisk   -        (mydg)   online udid_mismatch
EMC0_24   auto:cdsdisk   -        (mydg)   online udid_mismatch
The cloned disks EMC0_18 and EMC0_24 are not imported, since they do not
have the snaptag1 tag.
The state of the imported cloned disks has changed from online
udid_mismatch to online. The disks are now in a new disk group, so VxVM
removes the clone_disk flag.
See the vxdg(1M) manual page.
If you import only a partial set of disks in a disk group, you must ensure that at least
one of the imported disks contains a copy of the current configuration database.
To set up the configuration copies on a set of disks
Use the following command to place a copy of the configuration database and
kernel log on all disks in a disk group that share the specified tag:
# vxdg [-g diskgroup] set tagmeta=on tag=tagname nconfig=all \
nlog=all
If you have set tagmeta=on for a disk group, use the following command to
view the disk tags and the value set for the number of configuration copies. A
value of -1 indicates that all tagged disks maintain configuration or log copies.
# vxdg listmeta diskgroup
Use the following command to place a copy of the configuration copy (metadata)
on the specified disk, regardless of the placement policy for the disk group.
You can set this attribute before or after the disk is added to a disk group.
# vxdisk [-g diskgroup] set disk keepmeta=always
If the -t option is included, the import is temporary and does not persist across
reboots. In this case, the stored name of the disk group remains unchanged on its
original host, but the disk group is known by the name specified by newdg to the
importing host. If the -t option is not used, the name change is permanent.
For example, this command temporarily renames the disk group, mydg, as mytempdg
on import:
# vxdg -t -n mytempdg import mydg
When renaming on deport, you can specify the -h hostname option to assign a
lock to an alternate host. This ensures that the disk group is automatically imported
when the alternate host reboots.
For example, this command renames the disk group, mydg, as myexdg, and deports
it to the host, jingo:
# vxdg -h jingo -n myexdg deport mydg
You cannot use this method to rename the boot disk group because it contains
volumes that are in use by mounted file systems (such as /). To rename the boot
disk group, you must first unmirror and unencapsulate the root disk, and then
re-encapsulate and remirror the root disk in a different disk group. This disk group
becomes the new boot disk group.
See Rootability on page 679.
To temporarily move the boot disk group, bootdg, from one host to another (for
repair work on the root volume, for example) and then move it back
On the original host, identify the disk group ID of the bootdg disk group to be
imported with the following command:
# vxdisk -g bootdg -s list
dgname: rootdg
dgid: 774226267.1025.tweety
In this example, the administrator has chosen to name the boot disk group as
rootdg. The ID of this disk group is 774226267.1025.tweety.
This procedure assumes that all the disks in the boot disk group are accessible
by both hosts.
On the importing host, import and rename the rootdg disk group with this
command:
# vxdg -tC -n newdg import diskgroup
The -t option indicates a temporary import name, and the -C option clears
import locks. The -n option specifies an alternate name for the rootdg being
imported so that it does not conflict with the existing rootdg. diskgroup is the
disk group ID of the disk group being imported (for example,
774226267.1025.tweety).
If a reboot or crash occurs at this point, the temporarily imported disk group
becomes unimported and requires a reimport.
After the necessary work has been done on the imported disk group, deport it
back to its original host with this command:
# vxdg -h hostname deport diskgroup
Here hostname is the name of the system whose rootdg is being returned
(the system name can be confirmed with the command uname -n).
This command removes the imported disk group from the importing host and
returns locks to its original host. The original host can then automatically import
its boot disk group at the next reboot.
Figure 30-5 shows a 2-node cluster with node 0, a Fibre Channel switch and disk
enclosure enc0 in building A, and node 1, another switch and enclosure enc1 in
building B.
Figure 30-5      Typical arrangement of a 2-node campus cluster
VxVM increments the serial ID in the disk media record of each imported disk in all
the disk group configuration databases on those disks, and also in the private region
of each imported disk. The value that is stored in the configuration database
represents the serial ID that the disk group expects a disk to have. The serial ID
that is stored in a disk's private region is considered to be its actual value. VxVM
detects a serial split brain condition when the actual serial IDs of the disks that are
being attached do not match the serial IDs in the disk group configuration database
of the imported disk group.
If some disks went missing from the disk group (due to physical disconnection or
power failure) and those disks were imported by another host, the serial IDs for the
disks in their copies of the configuration database, and also in each disk's private
region, are updated separately on that host. When the disks are subsequently
re-imported into the original shared disk group, the actual serial IDs on the disks
do not agree with the expected values from the configuration copies on other disks
in the disk group.
Depending on what happened to the different portions of the split disk group, there
are two possibilities for resolving inconsistencies between the configuration
databases:
If the other disks in the disk group were not imported on another host, VxVM
resolves the conflicting values of the serial IDs by using the version of the
configuration database from the disk with the greatest value for the updated ID
(shown as update_id in the output from the vxdg list diskgroup command).
Figure 30-6 shows an example of a serial split brain condition that can be
resolved automatically by VxVM.
Figure 30-6      Example of a serial split brain condition that can be resolved
                 automatically
If the other disks were also imported on another host, no disk can be considered
to have a definitive copy of the configuration database.
Figure 30-7 shows an example of a true serial split brain condition that cannot
be resolved automatically by VxVM.
Figure 30-7      Example of a true serial split brain condition that cannot be resolved
                 automatically
In this case, the disk group import fails, and the vxdg utility outputs error messages
similar to the following before exiting:
The import does not succeed even if you specify the -f flag to vxdg.
Although it is usually possible to resolve this conflict by choosing the version of the
configuration database with the highest valued configuration ID (shown as the value
of seqno in the output from the vxdg list diskgroup | grep config command),
this may not be the correct thing to do in all circumstances.
See Correcting conflicting configuration information on page 651.
To see the configuration copy from a disk, enter the following command:
# /etc/vx/diag.d/vxprivutil dumpconfig private path
To import the disk group with the configuration copy from a disk, enter the following
command:
# /usr/sbin/vxdg (-s) -o selectcp=diskid import newdg
Pool 0
DEVICE DISK DISK ID DISK PRIVATE PATH
newdg1 sdp 1215378871.300.vm2850lx13 /dev/vx/rdmp/sdp5
newdg2 sdq 1215378871.300.vm2850lx13 /dev/vx/rdmp/sdp5
Pool 1
DEVICE DISK DISK ID DISK PRIVATE PATH
newdg3 sdo 1215378871.294.vm2850lx13 /dev/vx/rdmp/sdo5
If you do not specify the -v option, the command has the following output:
# vxsplitlines -g mydg listssbinfo
VxVM vxdg listssbinfo NOTICE V-0-0-0 There are 2 pools
All the disks in the first pool have the same config copies
All the disks in the second pool may not have the same config copies
To import the disk group with the configuration copy from the first pool, enter the
following command:
# /usr/sbin/vxdg (-s) -o selectcp=1221451925.395.vm2850lx13 import mydg
To import the disk group with the configuration copy from the second pool, enter
the following command:
# /usr/sbin/vxdg (-s) -o selectcp=1221451927.401.vm2850lx13 import mydg
In this example, the disk group has four disks, and is split so that two disks appear
to be on each side of the split.
You can specify the -c option to vxsplitlines to print detailed information about
each of the disk IDs from the configuration copy on a disk specified by its disk
access name:
# vxsplitlines -g newdg -c sde

Expected SSB
0.0 ssb ids dont match
0.1 ssb ids match
0.1 ssb ids match
0.0 ssb ids dont match
Please note that even though some disks ssb ids might match
that does not necessarily mean that those disks config copies
have all the changes. From some other configuration copies,
those disks ssb ids might not match. To see the configuration
from this disk, run
/etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/dmp/sde
Based on your knowledge of how the serial split brain condition came about, you
must choose one disk's configuration to be used to import the disk group. For
example, the following command imports the disk group using the configuration
copy that is on side 0 of the split:
# /usr/sbin/vxdg -o selectcp=1045852127.32.olancha import newdg
When you have selected a preferred configuration copy, and the disk group has
been imported, VxVM resets the serial IDs to 0 for the imported disks. The actual
and expected serial IDs for any disks in the disk group that are not imported at this
time remain unaltered.
Deporting a disk group does not actually remove the disk group. It disables use of
the disk group by the system. Disks in a deported disk group can be reused,
reinitialized, added to other disk groups, or imported for use on other systems. Use
the vxdg import command to re-enable access to the disk group.
Enter the following command to find out the disk group ID (dgid) of one of the
disks that was in the disk group:
# vxdisk -s list disk_access_name
The disk must be specified by its disk access name, such as sdc. Examine the
output from the command for a line similar to the following that specifies the
disk group ID.
dgid: 963504895.1075.bass
To back up FSS disk group configuration data on all cluster nodes that have
connectivity to at least one disk in the disk group, type the following command:
# /etc/vx/bin/vxconfigbackup -T diskgroup
Check if the primary node has connectivity to at least one disk in the disk group.
The disk can be a direct attached storage (DAS) disk, partially shared disk, or
fully shared disk.
If the primary node does not have connectivity to any disk in the disk group,
switch the primary node to a node that has connectivity to at least one DAS or
partially shared disk, using the following command:
# vxclustadm setmaster node_name
Note: You must restore the configuration data on all secondary nodes that have
connectivity to at least one disk in the disk group.
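The restoration itself is performed with the vxconfigrestore command. Based on
the options described in the vxconfigrestore(1M) manual page, a typical sequence
is to precommit the restoration, verify it, and then commit it, or decommit it to
abandon the changes:
# /etc/vx/bin/vxconfigrestore -p diskgroup
# /etc/vx/bin/vxconfigrestore -c diskgroup
# /etc/vx/bin/vxconfigrestore -d diskgroup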
Note: You must abort or decommit the configuration data on all secondary
nodes that have connectivity to at least one disk in the disk group, and all
secondary nodes from which you triggered the precommit.
See the Symantec Storage Foundation and High Availability Solutions
Troubleshooting Guide.
See the vxconfigbackup(1M) manual page.
See the vxconfigrestore(1M) manual page.
Check for the presence of storage pools, using the following command:
# vxprint
Sample output:
Disk group: mydg
TY NAME        ASSOC         KSTATE   LENGTH   PLOFFS  STATE      TUTIL0  PUTIL0
dg mydg        mydg          -        -        -       ALLOC_SUP  -       -
dm mydg2       ams_wms0_359  -        4120320  -       -          -       -
dm mydg3       ams_wms0_360  -        4120320  -       -          -       -
st mypool      -             -        -        -       DATA       -       -
dm mydg1       ams_wms0_358  -        4120320  -       -          -       -
v  myvol0      fsgen         ENABLED  20480    -       ACTIVE     -       -
pl myvol0-01   myvol0        ENABLED  20480    -       ACTIVE     -       -
sd mydg1-01    myvol0-01     ENABLED  20480    0       -          -       -
v  myvol1      fsgen         ENABLED  20480    -       ACTIVE     -       -
pl myvol1-01   myvol1        ENABLED  20480    -       ACTIVE     -       -
sd mydg1-02    myvol1-01     ENABLED  20480    0       -          -       -
In the sample output, st mypool indicates that mydg is an ISP disk group.
To upgrade an ISP disk group
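A minimal sketch of the upgrade step, using the vxdg upgrade command that is
described earlier in this chapter:
# vxdg upgrade mydg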
You cannot make any configuration changes to the ISP volumes in the disk group
until the disk group is upgraded. Attempting any operation such as grow, shrink,
add mirror, or disk group split and join on ISP volumes gives the following
error:
Note: Non-ISP or VxVM volumes in the ISP disk group are not affected.
The following operations still work on an ISP disk group without upgrading:
Reattaching plexes
When a mirror plex encounters irrecoverable errors, Veritas Volume Manager
(VxVM) detaches the plex from the mirrored volume. An administrator may also
detach a plex manually using a utility such as vxplex or vxassist. In order to use a
plex that was previously attached to a volume, the plex must be reattached to the
volume. The reattach operation also ensures that the plex mirror is resynchronized
to the other plexes in the volume.
See Plex synchronization on page 661.
The following methods are available for reattaching plexes:
By default, VxVM automatically reattaches the affected mirror plexes when the
underlying failed disk or LUN becomes visible. When VxVM detects that the device
is online, the vxattachd daemon triggers the reattach operation for the plex.
If the automatic reattachment feature is disabled, you need to reattach the plexes
manually. You may also need to manually reattach the plexes for devices that
are not automatically reattached. For example, VxVM does not automatically
reattach plexes on site-consistent volumes.
See Reattaching a plex manually on page 660.
If the global detach policy is set, a storage failure from any node causes all
plexes on that storage to be detached globally. When the storage is connected
back to any node, the vxattachd daemon triggers reattaching the plexes on
the master node only.
The vxattachd daemon listens for "dmpnode online" events using vxnotify to
trigger its operation. Therefore, an automatic reattachment is not triggered if the
dmpnode online event is not generated when vxattachd is running. The following
are typical examples:
If the volume is currently ENABLED, use the following command to reattach the
plex:
# vxplex [-g diskgroup] att volume plex ...
For example, for a plex named vol01-02 on a volume named vol01 in the disk
group, mydg, use the following command:
# vxplex -g mydg att vol01 vol01-02
If the volume is not in use (not ENABLED), use the following command to re-enable
the plex for use:
# vxmend [-g diskgroup] on plex
For example, to re-enable a plex named vol01-02 in the disk group, mydg, enter:
# vxmend -g mydg on vol01-02
In this case, the state of vol01-02 is set to STALE. When the volume is next
started, the data on the plex is revived from another plex, and incorporated into
the volume with its state set to ACTIVE.
If the vxinfo command shows that the volume is unstartable, set one of the
plexes to CLEAN using the following command:
# vxmend [-g diskgroup] fix clean plex
Plex synchronization
Each plex or mirror of a volume is a complete copy of the data. When a plex is
attached to a volume, the data in the plex must be synchronized with the data in
the other plexes in the volume. The plex that is attached may be a new mirror or a
formerly attached plex. A new mirror must be fully synchronized. A formerly attached
plex only requires the changes that were applied since the plex was detached.
The following operations trigger a plex synchronization:
Moving or copying a subdisk with the vxsd command. The operation creates a
temporary plex that is synchronized with the original subdisk.
FastResync
If the FastResync feature is enabled, VxVM maintains a FastResync map on
the volume. VxVM uses the FastResync map to apply only the updates that the
mirror has missed. This behavior provides an efficient way to resynchronize the
plexes.
SmartMove
The SmartMove feature reduces the time and I/O required to attach or reattach
a plex to a VxVM volume with a mounted VxFS file system. The SmartMove
feature uses the VxFS information to detect free extents and avoid copying
them.
When the SmartMove feature is on, less I/O is sent through the host, through
the storage network and to the disks or LUNs. The SmartMove feature can be
used for faster plex creation and faster array migrations.
Decommissioning storage
This section describes how you remove disks and volumes from VxVM.
Removing a volume
If a volume is inactive or its contents have been archived, you may no longer need
it. In that case, you can remove the volume and free up the disk space for other
uses.
To remove a volume
If the volume is listed in the /etc/fstab file, edit this file and remove its entry.
For more information about the format of this file and how you can modify it,
see your operating system documentation.
Stop all activity by VxVM on the volume with the following command:
# vxvol [-g diskgroup] stop volume
You can also use the vxedit command to remove the volume as follows:
# vxedit [-g diskgroup] [-r] [-f] rm volume
overwrites all of the addressable blocks with a digital pattern in one, three, or seven
passes.
Caution: All data in the volume will be lost when you shred it. Make sure that the
information has been backed up onto another storage medium and verified, or that
it is no longer needed.
VxVM provides the ability to shred the data on the disk to minimize the chance that
the data is recoverable. When you specify the disk shred operation, VxVM shreds
the entire disk, including any existing disk labels. After the shred operation, VxVM
writes a new empty label on the disk to prevent the disk from going to the error
state. The VxVM shred operation provides the following methods of overwriting a
disk:
One-pass algorithm
VxVM overwrites the disk with a randomly-selected digital pattern. This option
takes the least amount of time. The default type is the one-pass algorithm.
Three-pass algorithm
VxVM overwrites the disk a total of three times. In the first pass, VxVM overwrites
the data with a pre-selected digital pattern. The second time, VxVM overwrites
the data with the binary complement of the pattern. In the last pass, VxVM
overwrites the disk with a randomly-selected digital pattern.
Seven-pass algorithm
VxVM overwrites the disk a total of seven times. In each pass, VxVM overwrites
the data with a randomly-selected digital pattern or with the binary complement
of the previous pattern.
VxVM does not currently support shredding of thin-reclaimable LUNs. If you attempt
to start the shred operation on a thin-reclaimable disk, VxVM displays a warning
message and skips the disk.
VxVM does not shred a disk that is in use by VxVM on this system or in a shared
disk group.
Symantec does not recommend shredding solid state drives (SSDs). To shred
SSD devices, use the shred operation with the force (-f) option.
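The shred operation is performed with the vxdiskunsetup command, whose full
form is shown later in this section:
# /etc/vx/bin/vxdiskunsetup [-Cf] -o shred[=1|3|7] disk...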
Where:
The force option (-f) permits you to shred Solid State Drives (SSDs).
1, 3 and 7 are the shred options corresponding to the number of passes. The
default number of passes is 1.
disk... represents one or more disk names. If you specify multiple disk names,
the vxdiskunsetup command processes them sequentially, one at a time.
For example:
# /etc/vx/bin/vxdiskunsetup -o shred=3 hds9970v0_14
disk_shred: Shredding disk hds9970v0_14 with type 3
disk_shred: Disk raw size 2097807360 bytes
disk_shred: Writing 32010 (65536 byte size) pages and 0 bytes
to disk
disk_shred: Wipe Pass 0: Pattern 0x3e
disk_shred: Wipe Pass 1: Pattern 0xca
disk_shred: Wipe Pass 2: Pattern 0xe2
disk_shred: Shred passed random verify of 131072 bytes at
offset 160903168
You can monitor the progress of the shred operation with the vxtask command.
For example:
# vxtask list
TASKID  PTID  TYPE/STATE    PCT     PROGRESS
203     -     DISKSHRED/R   90.16%  0/12291840/11081728 DISKSHRED nodg nodg
You can pause, abort, or resume the shred task. You cannot throttle the shred
task.
See the vxtask(1M) manual page.
If the disk shred operation fails, the disk may go into an error state with no
label.
See Failed disk shred operation results in a disk with no label on page 667.
Create a new label manually or reinitialize the disk under VxVM using the
following command:
# /etc/vx/bin/vxdisksetup -i disk
Start the shred operation. If the disk shows as a non-VxVM disk, reinitialize
the disk with the vxdisksetup command in step 1, then restart the shred
operation.
# /etc/vx/bin/vxdiskunsetup [-Cf] -o shred[=1|3|7] disk...
Select Remove a disk for replacement from the vxdiskadm main menu.
At the following prompt, enter the name of the disk to be replaced (or enter
list for a list of disks):
Enter disk name [<disk>,list,q,?] mydg02
When you select a disk to remove for replacement, all volumes that are affected
by the operation are displayed, for example:
VxVM NOTICE V-5-2-371 The following volumes will lose mirrors
as a result of this operation:
home src
No data on these volumes will be lost.
The following volumes are in use, and will be disabled as a
result of this operation:
mkting
Any applications using these volumes will fail future
accesses. These volumes will require restoration from backup.
Are you sure you want do this? [y,n,q,?] (default: n)
To remove the disk, causing the named volumes to be disabled and data to
be lost when the disk is replaced, enter y or press Return.
To abandon removal of the disk, and back up or move the data associated with
the volumes that would otherwise be disabled, enter n or q and press Return.
For example, to move the volume mkting to a disk other than mydg02, use the
following command.
The ! character is a special character in some shells. The following example
shows how to escape it in a bash shell.
# vxassist move mkting \!mydg02
After backing up or moving the data in the volumes, start again from step 1.
At the following prompt, either select the device name of the replacement disk
(from the list provided), press Return to choose the default disk, or enter none
if you are going to replace the physical disk:
The following devices are available as replacements:
sdb
You can choose one of these disks now, to replace mydg02.
Select none if you do not wish to select a replacement disk.
Choose a device, or select none
[<device>,none,q,?] (default: sdb)
Do not choose the old disk drive as a replacement even though it appears in
the selection list. If necessary, you can choose to initialize a new disk.
You can enter none if you intend to replace the physical disk.
See Replacing a failed or removed disk on page 670.
If you chose to replace the disk in step 4, press Return at the following prompt
to confirm this:
VxVM NOTICE V-5-2-285 Requested operation is to remove mydg02
from group mydg. The removed disk will be replaced with disk device
sdb. Continue with operation? [y,n,q,?] (default: y)
vxdiskadm displays the following messages to indicate that the original disk is
being removed:
VxVM NOTICE V-5-2-265 Removal of disk mydg02 completed
successfully.
VxVM NOTICE V-5-2-260 Proceeding to replace mydg02 with device
sdb.
You can now choose whether the disk is to be formatted as a CDS disk that
is portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the desired format [cdsdisk,sliced,simple,q,?]
(default: cdsdisk)
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32 MB). Press Return to confirm that you want to
use the default value, or enter a different value. (The maximum value that you
can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?]
(default: 65536)
If one or more mirror plexes were moved from the disk, you are now prompted
whether FastResync should be used to resynchronize the plexes:
Use FMR for plex resync? [y,n,q,?] (default: n) y
vxdiskadm displays the following success message:
VxVM NOTICE V-5-2-158 Disk replacement completed successfully.
At the following prompt, indicate whether you want to remove another disk (y)
or return to the vxdiskadm main menu (n):
Remove another disk? [y,n,q,?] (default: n)
Select Replace a failed or removed disk from the vxdiskadm main menu.
At the following prompt, enter the name of the disk to be replaced (or enter
list for a list of disks):
Select a removed or failed disk [<disk>,list,q,?] mydg02
The vxdiskadm program displays the device names of the disk devices available
for use as replacement disks. Your system may use a device name that differs
from the examples. Enter the device name of the disk or press Return to select
the default device:
The following devices are available as replacements:
sdb sdk
You can choose one of these disks to replace mydg02.
Choose "none" to initialize another disk to replace mydg02.
Choose a device, or select "none"
[<device>,none,q,?] (default: sdb)
If the disk has not previously been initialized, press Return at the following
prompt to replace the disk:
VxVM INFO V-5-2-378 The requested operation is to initialize
disk device sdb and to then use that device to
replace the removed or failed disk mydg02 in disk group mydg.
Continue with operation? [y,n,q,?] (default: y)
If the disk has already been initialized, press Return at the following prompt
to replace the disk:
VxVM INFO V-5-2-382 The requested operation is to use the
initialized device sdb to replace the removed or
failed disk mydg02 in disk group mydg.
Continue with operation? [y,n,q,?] (default: y)
You can now choose whether the disk is to be formatted as a CDS disk that
is portable between different operating systems, or as a non-portable sliced or
simple disk:
Enter the desired format [cdsdisk,sliced,simple,q,?]
(default: cdsdisk)
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk.
At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32 MB). Press Return to confirm that you want to
use the default value, or enter a different value. (The maximum value that you
can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?]
(default: 65536)
The vxdiskadm program then proceeds to replace the disk, and returns the
following message on success:
VxVM NOTICE V-5-2-158 Disk replacement completed successfully.
At the following prompt, indicate whether you want to replace another disk (y)
or return to the vxdiskadm main menu (n):
Replace another disk? [y,n,q,?] (default: n)
Chapter 31
Rootability
This chapter includes the following topics:
Encapsulating a disk
Rootability
Encapsulating a disk
Warning: Encapsulating a disk requires that the system be rebooted several times.
Schedule this procedure for a time when it does not inconvenience users.
This section describes how to encapsulate a disk for use in VxVM. Encapsulation
preserves any existing data on the disk when the disk is placed under VxVM control.
A root disk can be encapsulated and brought under VxVM control. However, there
are restrictions on the layout and configuration of root disks that can be
encapsulated.
See Restrictions on using rootability with Linux on page 680.
See Rootability on page 679.
Use the format or fdisk commands to obtain a printout of the root disk partition
table before you encapsulate a root disk. For more information, see the appropriate
manual pages. You may need this information should you subsequently need to
recreate the original root disk.
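For example, to save a copy of the partition table for a root disk on /dev/sda (a placeholder device name; substitute your own), you could run:
# fdisk -l /dev/sda > /root/sda-partition-table.txt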
You cannot grow or shrink any volume (rootvol, usrvol, varvol, optvol, swapvol,
and so on) that is associated with an encapsulated root disk. This is because these
volumes map to physical partitions on the disk, and these partitions must be
contiguous.
Disks with msdos disk labels can be encapsulated as auto:sliced disks provided
that they have at least one spare primary partition that can be allocated to the public
region, and one spare primary or logical partition that can be allocated to the private
region.
Disks with sun disk labels can be encapsulated as auto:sliced disks provided
that they have at least two spare slices that can be allocated to the public and
private regions.
Extensible Firmware Interface (EFI) disks with gpt (GUID Partition Table) labels
can be encapsulated as auto:sliced disks provided that they have at least two
spare slices that can be allocated to the public and private regions.
The entry in the partition table for the public region does not require any additional
space on the disk. Instead it is used to represent (or encapsulate) the disk space
that is used by the existing partitions.
Unlike the public region, the partition for the private region requires a small amount
of space at the beginning or end of the disk that does not belong to any existing
partition or slice. By default, the space required for the private region is 32MB, which
is rounded up to the nearest whole number of cylinders. On most modern disks,
one cylinder is usually sufficient.
Before encapsulating a root disk, set the device naming scheme used by VxVM
to be persistent.
# vxddladm set namingscheme={osn|ebn} persistence=yes
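For example, to use persistent enclosure-based names (adjust the scheme to suit your configuration):
# vxddladm set namingscheme=ebn persistence=yes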
Select Encapsulate one or more disks from the vxdiskadm main menu.
Your system may use device names that differ from the examples shown here.
At the following prompt, enter the disk device name for the disks to be
encapsulated:
Select disk devices to encapsulate:
[<pattern-list>,all,list,q,?] device name
To continue the operation, enter y (or press Return) at the following prompt:
Here is the disk selected. Output format: [Device]
device name
Continue operation? [y,n,q,?] (default: y) y
Select the disk group to which the disk is to be added at the following prompt:
You can choose to add this disk to an existing disk group or to
a new disk group. To create a new disk group, select a disk
group name that does not yet exist.
Which disk group [<group>,list,q,?]
At the following prompt, either press Return to accept the default disk name
or enter a disk name:
Use a default disk name for the disk? [y,n,q,?] (default: y)
To continue with the operation, enter y (or press Return) at the following prompt:
The selected disks will be encapsulated and added to the
disk group name disk group with default disk names.
device name
Continue with operation? [y,n,q,?] (default: y) y
To confirm that encapsulation should proceed, enter y (or press Return) at the
following prompt:
The following disk has been selected for encapsulation.
Output format: [Device]
device name
Continue with encapsulation? [y,n,q,?] (default: y) y
A message similar to the following confirms that the disk is being encapsulated
for use in VxVM:
The disk device device name will be encapsulated and added to
the disk group diskgroup with the disk name diskgroup01.
For non-root disks, you can now choose whether the disk is to be formatted
as a CDS disk that is portable between different operating systems, or as a
non-portable sliced disk:
Enter the desired format [cdsdisk,sliced,simple,q,?]
(default: cdsdisk)
Enter the format that is appropriate for your needs. In most cases, this is the
default format, cdsdisk. Note that only the sliced format is suitable for use
with root, boot or swap disks.
At the following prompt, vxdiskadm asks if you want to use the default private
region size of 65536 blocks (32MB). Press Return to confirm that you want to
use the default value, or enter a different value. (The maximum value that you
can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?]
(default: 65536)
10 If you entered cdsdisk as the format in step 8, you are prompted for the action
to be taken if the disk cannot be converted to this format:
Do you want to use sliced as the format should cdsdisk
fail? [y,n,q,?] (default: y)
If you enter y, and it is not possible to encapsulate the disk as a CDS disk, it
is encapsulated as a sliced disk. Otherwise, the encapsulation fails.
11 vxdiskadm then proceeds to encapsulate the disks. You should now reboot
your system at the earliest possible opportunity, for example by running this
command:
# shutdown -r now
The /etc/fstab file is updated to include the volume devices that are used to
mount any encapsulated file systems. You may need to update any other
references in backup scripts, databases, or manually created swap devices.
The original /etc/fstab file is saved as /etc/fstab.b4vxvm.
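For example, after encapsulation the entry for the root file system in /etc/fstab may appear similar to the following (shown for a boot disk group named rootdg with an ext3 root file system; your entries may differ):
/dev/vx/dsk/rootdg/rootvol    /    ext3    defaults    1 1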
12 At the following prompt, indicate whether you want to encapsulate more disks
(y) or return to the vxdiskadm main menu (n):
Encapsulate other disks? [y,n,q,?] (default: n) n
If it does not exist already, set up a partition on the disk for the area that you
want to access using VxVM.
Rootability
VxVM can place various files from the root file system, swap device, and other file
systems on the root disk under VxVM control. This is called rootability. The root
disk (that is, the disk containing the root file system) can be put under VxVM control
through the process of encapsulation.
Encapsulation converts existing partitions on that disk to volumes. Once under
VxVM control, the root and swap devices appear as volumes and provide the same
characteristics as other VxVM volumes. A volume that is configured for use as a
swap area is referred to as a swap volume, and a volume that contains the root file
system is referred to as a root volume.
Note: Only encapsulate your root disk if you also intend to mirror it. There is no
benefit in root-disk encapsulation for its own sake.
You can mirror the rootvol, and swapvol volumes, as well as other parts of the
root disk that are required for a successful boot of the system (for example, /usr).
This provides complete redundancy and recovery capability in the event of disk
failure. Without mirroring, the loss of the root, swap, or usr partition prevents the
system from being booted from surviving disks.
Mirroring disk drives that are critical to booting ensures that no single disk failure
renders the system unusable. A suggested configuration is to mirror the critical disk
onto another available disk (using the vxdiskadm command). If the disk containing
root and swap partitions fails, the system can be rebooted from a disk containing
mirrors of these partitions.
Recovering a system after the failure of an encapsulated root disk requires the
application of special procedures.
See the Symantec Storage Foundation and High Availability Solutions
Troubleshooting Guide.
Free disk space or a swap partition, from which space can be allocated to the
private region. If the free space or swap partition is not located within an extended
partition, one unused primary partition entry is required for the private region.
Otherwise, one unused logical partition entry is required.
The following sections show examples of root disk layouts for which encapsulation
is either supported or not supported.
See Sample supported root disk layouts for encapsulation on page 682.
See Sample unsupported root disk layouts for encapsulation on page 685.
Note the following additional important restrictions on using rootability with Linux:
Root disk encapsulation is only supported for devices with standard SCSI or
IDE interfaces. It is not supported for most devices with vendor-proprietary
interfaces, except the COMPAQ SMART and SMARTII controllers, which use
device names of the form /dev/ida/cXdXpX and /dev/cciss/cXdXpX.
Root disk encapsulation is only supported for disks with msdos or gpt labels. It
is not supported for disks with sun labels.
The root, boot, and swap partitions must be on the same disk.
Either the GRUB or the LILO boot loader must be used as the boot loader for
SCSI and IDE disks.
The menu entries in the boot loader configuration file must be valid.
The boot loader configuration file must not be edited during the root encapsulation
process.
The /boot partition must be on the first disk as seen by the BIOS, and this
partition must be a primary partition.
Some systems cannot be configured to ignore local disks. The local disk needs
to be removed when encapsulating. Multi-pathing configuration changes (for
multiple HBA systems) can have the same effect. VxVM supports only those
systems where the initial bootstrap installation configuration has not been
changed for root encapsulation.
The boot loader must be located in the master boot record (MBR) on the root
disk or any root disk mirror.
If the GRUB boot loader is used, the root device location of the /boot directory
must be set to the first disk drive, sd0 or hd0, to allow encapsulation of the root
disk.
If the LILO or ELILO boot loader is used, do not use the FALLBACK, LOCK or -R
options after encapsulating the root disk.
Warning: Using the FALLBACK, LOCK or -R options with LILO may render your
system unbootable because LILO does not understand the layout of VxVM
volumes.
Booting from an encapsulated root disk which is connected only to the secondary
controller in an A/P (Active/Passive) array is not supported.
The default Red Hat installation layout is not valid for implementing rootability.
If you change the layout of your root disk, ensure that the root disk is still bootable
before attempting to encapsulate it.
See Example 1: unsupported root disk layouts for encapsulation on page 685.
Do not allocate volumes from the root disk after it has been encapsulated. Doing
so may destroy partition information that is stored on the disk.
Figure 31-1 Root and swap configured on two primary partitions, and free space on the disk
Two primary partitions are in use by / and swap. There are two unused primary
partitions, and free space exists on the disk that can be assigned to a primary
partition for the private region.
Two primary partitions are in use by / and swap. There are two unused primary
partitions, and the private region can be allocated to a new primary partition by
taking space from the end of the swap partition.
Boot and swap configured on two primary partitions, and free space in the extended partition
Three primary partitions are in use by /boot, swap and an extended partition that
contains four file systems including root. There is free space at the end of the
extended primary partition that can be used to create a new logical partition for the
private region.
Figure 31-4
Two primary partitions are in use by /boot and an extended partition that contains
the root file system and swap area. A new logical partition can be created for the
private region by taking space from the end of the swap partition.
(Diagram: /boot and swap on primary partitions; free space elsewhere on the disk.)
This layout, which is similar to the default Red Hat layout, cannot be encapsulated
because only one spare primary partition is available, and neither the swap partition
nor the free space lie within an extended partition.
Figure 31-6 shows a workaround by configuring the swap partition or free space
as an extended partition, and moving the swap area to a logical partition (leaving
enough space for a logical partition to hold the private region).
Figure 31-6
The original swap partition should be deleted. After reconfiguration, this root disk
can be encapsulated.
See Example 3: supported root disk layouts for encapsulation on page 684.
Figure 31-7 shows another possible workaround by recreating /boot as a directory
under /, deleting the /boot partition, and reconfiguring LILO or GRUB to use the
new /boot location.
Figure 31-7
Warning: If the start of the root file system does not lie within the first 1024 cylinders,
moving /boot may render your system unbootable.
After reconfiguration, this root disk can be encapsulated.
See Example 1: supported root disk layouts for encapsulation on page 682.
Figure 31-8
This layout cannot be encapsulated because only one spare primary partition is
available, and neither the swap partition nor the free space lie within the extended
partition.
Figure 31-9 shows a simple workaround that uses a partition configuration tool to
grow the extended partition into the free space on the disk.
Figure 31-9
Care should be taken to preserve the boundaries of the logical partition that contains
the root file system. After reconfiguration, this root disk can be encapsulated.
See Example 3: supported root disk layouts for encapsulation on page 684.
(Diagram: /boot and swap on primary partitions; /var and /home on logical partitions in the extended partition.)
This layout cannot be encapsulated because only one spare primary partition is
available, the swap partition does not lie in the extended partition, and there is no
free space in the extended partition for an additional logical partition.
Figure 31-11 shows a possible workaround by shrinking one or more of the existing
file systems and the corresponding logical partitions.
Figure 31-11
Shrinking existing logical partitions frees up space in the extended partition for the
private region. After reconfiguration, this root disk can be encapsulated.
See Example 3: supported root disk layouts for encapsulation on page 684.
(Diagram: /boot and / (root) on primary partitions; swap is one of 11 logical partitions in the extended partition; free space remains in the extended partition.)
If this layout exists on a SCSI disk, it cannot be encapsulated because only one
spare primary partition is available, and even though swap is configured on a logical
partition and there is free space in the extended partition, no more logical partitions
can be created. The same problem arises with IDE disks when 12 logical partitions
have been created.
A suggested workaround is to evacuate any data from one of the existing logical
partitions, and then delete this logical partition. This makes one logical partition
available for use by the private region. The root disk can then be encapsulated.
See Example 3: supported root disk layouts for encapsulation on page 684.
See Example 4: supported root disk layouts for encapsulation on page 684.
The root volume (rootvol) must exist in the default disk group, bootdg. Although
other volumes named rootvol can be created in disk groups other than bootdg,
only the volume rootvol in bootdg can be used to boot the system.
The rootvol and swapvol volumes always have minor device numbers 0 and
1 respectively. Other volumes on the root disk do not have specific minor device
numbers.
Restricted mirrors of volumes on the root disk device have overlay partitions
created for them. An overlay partition is one that exactly includes the disk space
occupied by the restricted mirror. During boot, before the rootvol, varvol,
usrvol and swapvol volumes are fully configured, the default volume
configuration uses the overlay partition to access the data on the disk.
rootvol and swapvol cannot be spanned or contain a primary plex with multiple
noncontiguous subdisks. You cannot grow or shrink any volume associated with
an encapsulated boot disk (rootvol, usrvol, varvol, optvol, swapvol, and
so on) because these map to a physical underlying partition on the disk and
must be contiguous. A workaround is to unencapsulate the boot disk, repartition
the boot disk as desired (growing or shrinking partitions as needed), and then
re-encapsulate it.
When mirroring parts of the boot disk, the disk being mirrored to must be large
enough to hold the data on the original plex, or mirroring may not work.
The volumes on the root disk cannot use dirty region logging (DRL).
Encapsulating and mirroring the root disk
Before you encapsulate the root disk, use the fdisk or format commands to obtain a printout of the root disk partition table. For more information, see the appropriate manual pages. You may need this information should you subsequently need to recreate the original root disk.
See the Symantec Storage Foundation and High Availability Solutions
Troubleshooting Guide.
See Restrictions on using rootability with Linux on page 680.
You can use the vxdiskadm command to encapsulate the root disk.
See Encapsulating a disk on page 673.
You can also use the vxencap command, as shown in this example where the root
disk is sda:
# vxencap -c -g diskgroup rootdisk=sda
where diskgroup must be the name of the current boot disk group. If no boot disk
group currently exists, one is created with the specified name. The name bootdg
is reserved as an alias for the name of the boot disk group, and cannot be used.
You must reboot the system for the changes to take effect.
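To confirm the name of the current boot disk group, you can run the following command, which prints the disk group to which the bootdg alias currently points:
# vxdg bootdg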
Both the vxdiskadm and vxencap procedures for encapsulating the root disk also
update the /etc/fstab file and the boot loader configuration file
(/boot/grub/menu.lst or /etc/grub.conf (as appropriate for the platform) for
GRUB or /etc/lilo.conf for LILO):
Entries are changed in /etc/fstab for the rootvol, swapvol and other volumes
on the encapsulated root disk.
A special entry, vxvm_root, is added to the boot loader configuration file to allow
the system to boot from an encapsulated root disk.
The contents of the original /etc/fstab and boot loader configuration files are
saved in the files /etc/fstab.b4vxvm, /boot/grub/menu.lst.b4vxvm or
/etc/grub.conf.b4vxvm for GRUB, and /etc/lilo.conf.b4vxvm for LILO.
Warning: When modifying the /etc/fstab and the boot loader configuration files,
take care not to corrupt the entries that have been added by VxVM. This can prevent
your system from booting correctly.
Choose a disk to use for the mirror that is at least as large as the existing root
disk, whose geometry is seen by Linux to be the same as the existing root disk,
and which is not already in use by VxVM or any other subsystem (such as a
mounted partition or swap area). The disk should be visible to the Basic Input
Output System (BIOS) and to the bootloader of the operating system.
Select Mirror Volumes on a Disk from the vxdiskadm main menu to create
a mirror of the root disk. (This option automatically invokes the vxrootmir command
if the mirroring operation is performed on the root disk.)
The disk that is used for the root mirror must not be under Volume Manager
control already.
Alternatively, to mirror all file systems on the root disk, run the following
command:
# vxrootmir mirror_da_name mirror_dm_name
mirror_da_name is the disk access name of the disk that is to mirror the root
disk, and mirror_dm_name is the disk media name that you want to assign to
the mirror disk. The alternate root disk is configured to allow the system to be
booted from it in the event that the primary root disk fails. For example, to mirror
the root disk, sda, onto disk sdb, and give this the disk name rootmir, you
would use the following command:
# vxrootmir sdb rootmir
The operations to set up the root disk mirror take some time to complete.
The following is example output from the vxprint command after the root disk
has been encapsulated and its mirror has been created (the TUTIL0 and PUTIL0
fields and the subdisk records are omitted for clarity):
Disk group: rootdg

TY NAME           ASSOC     KSTATE   LENGTH    PLOFFS  STATE ...
dg rootdg         rootdg    -        -         -       -
dm rootdisk       sda       -        16450497  -       -
dm rootmir        sdb       -        16450497  -       -

v  rootvol        root      ENABLED  12337857  -       ACTIVE
pl mirrootvol-01  rootvol   ENABLED  12337857  -       ACTIVE
pl rootvol-01     rootvol   ENABLED  12337857  -       ACTIVE

v  swapvol        swap      ENABLED  4112640   -       ACTIVE
pl mirswapvol-01  swapvol   ENABLED  4112640   -       ACTIVE
pl swapvol-01     swapvol   ENABLED  4112640   -       ACTIVE
The following output shows the partition table of the root disk (in this example, /dev/hda) before encapsulation:

Device Boot     Start        End     Blocks     Id  System
/dev/hda1          63    2104514    1052226     83  Linux
/dev/hda2     2104515    6297479    2096482+    83  Linux
/dev/hda3     6329610   39054014   16362202+     5  Extended
/dev/hda5     6329673   10522574    2096451     83  Linux
/dev/hda6    10522638   14715539    2096451     83  Linux
/dev/hda7    14715603   18908504    2096451     83  Linux
/dev/hda8    18908568   23101469    2096451     83  Linux
/dev/hda9    23101533   25205984    1052226     82  Linux swap
Notice that there is a gap between start of the extended partition (hda3) and the
start of the first logical partition (hda5). For the logical partitions (hda5 through hda9),
there are also gaps between the end of one logical partition and the start of the
next logical partition. These gaps contain metadata for partition information. Because
these metadata regions lie inside the public region, VxVM allocates subdisks over
them to prevent accidental allocation of this space to volumes.
After the root disk has been encapsulated, the output from the vxprint command
appears similar to the following:
Disk group: rootdg

TY NAME             ASSOC       KSTATE   LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg rootdg           rootdg      -        -         -       -         -       -
dm disk01           sdh         -        17765181  -       -         -       -
dm rootdisk         hda         -        39053952  -       -         -       -

sd meta-rootdisk05  -           ENABLED  63        -       METADATA  -       -
sd meta-rootdisk06  -           ENABLED  63        -       METADATA  -       -
sd meta-rootdisk07  -           ENABLED  63        -       METADATA  -       -
sd meta-rootdisk08  -           ENABLED  63        -       METADATA  -       -
sd meta-rootdisk09  -           ENABLED  63        -       METADATA  -       -
sd meta-rootdisk10  -           ENABLED  63        -       METADATA  -       -
sd rootdiskPriv     -           ENABLED  2049      -       PRIVATE   -       -

v  bootvol          fsgen       ENABLED  2104452   -       ACTIVE    -       -
pl bootvol-01       bootvol     ENABLED  2104452   -       ACTIVE    -       -
sd rootdisk-07      bootvol-01  ENABLED  2104452   0       -         -       -

v  homevol          fsgen       ENABLED  4192902   -       ACTIVE    -       -
pl homevol-01       homevol     ENABLED  4192902   -       ACTIVE    -       -
sd rootdisk-05      homevol-01  ENABLED  4192902   0       -         -       -

v  optvol           fsgen       ENABLED  4192902   -       ACTIVE    -       -
pl optvol-01        optvol      ENABLED  4192902   -       ACTIVE    -       -
sd rootdisk-04      optvol-01   ENABLED  4192902   0       -         -       -

v  rootvol          root        ENABLED  4192902   -       ACTIVE    -       -
pl rootvol-01       rootvol     ENABLED  4192902   -       ACTIVE    -       -
sd rootdisk-02      rootvol-01  ENABLED  4192902   0       -         -       -

v  swapvol          swap        ENABLED  2104452   -       ACTIVE    -       -
pl swapvol-01       swapvol     ENABLED  2104452   -       ACTIVE    -       -
sd rootdisk-01      swapvol-01  ENABLED  2104452   0       -         -       -

v  usrvol           fsgen       ENABLED  4192965   -       ACTIVE    -       -
pl usrvol-01        usrvol      ENABLED  4192965   -       ACTIVE    -       -
sd rootdisk-06      usrvol-01   ENABLED  4192965   0       -         -       -

v  varvol           fsgen       ENABLED  4192902   -       ACTIVE    -       -
pl varvol-01        varvol      ENABLED  4192902   -       ACTIVE    -       -
sd rootdisk-03      varvol-01   ENABLED  4192902   0       -         -       -
The new partition table for the root disk appears similar to the following:
# fdisk -ul /dev/hda

Disk /dev/hda: 255 heads, 63 sectors, 2431 cylinders
Units = sectors of 1 * 512 bytes

Device Boot     Start        End     Blocks     Id  System
/dev/hda1          63    2104514    1052226     83  Linux
/dev/hda2     2104515    6297479    2096482+    83  Linux
/dev/hda3     6329610   39054014   16362202+     5  Extended
/dev/hda4          63   39054014   19526976     7e  Unknown
/dev/hda5     6329673   10522574    2096451     83  Linux
/dev/hda6    10522638   14715539    2096451     83  Linux
/dev/hda7    14715603   18908504    2096451     83  Linux
/dev/hda8    18908568   23101469    2096451     83  Linux
/dev/hda9    23101533   25205984    1052226     82  Linux swap
/dev/hda10   39051966   39054014       1024+    7f  Unknown
In this example, primary partition hda4 and logical partition hda10 have been created
to represent the VxVM public and private regions respectively.
The upgrade_encapped_root script determines whether the kernel upgrade can be
applied to the encapsulated system. If the upgrade is successful, the script displays the
following message:
# upgrade_encapped_root
The VxVM root encapsulation upgrade has succeeded.
Please reboot the machine to load the new kernel.
After the next reboot, the system restarts with the patched kernel and a VxVM
encapsulated root volume.
Some patches may be completely incompatible with the installed version of VxVM.
In this case the script fails, with the following message:
# upgrade_encapped_root
FATAL ERROR: Unencapsulate the root disk manually.
VxVM cannot re-encapsulate the upgraded system.
The upgrade script saves a system configuration file that can be used to boot the
system with the previous configuration. If the upgrade fails, follow the steps to
restore the previous configuration.
Note: The exact steps may vary depending on the operating system.
Interrupt the GRUB boot loader at bootstrap time by pressing the space bar.
The system displays a series of potential boot configurations, named after the
various installed kernel versions and VxVM root encapsulation versions.
For example:
Red Hat Enterprise Linux Server (2.6.18-53.el5)
Red Hat Enterprise Linux Server (2.6.18-8.el5)
vxvm_root_backup
vxvm_root
Select the vxvm_root_backup option to boot the previous kernel version with
the VxVM encapsulated root disk.
If the upgrade script fails, you can manually unencapsulate the root disk to
allow it to boot.
See Unencapsulating the root disk on page 699.
If the reboot succeeds, you can re-encapsulate and remirror the root disk.
See Encapsulating and mirroring the root disk on page 690.
However, after the next reboot, VxVM may not be able to run correctly, making
all VxVM volumes unavailable. To restore the VxVM volumes, you must remove
the kernel upgrade, as follows:
# rpm -e upgrade_kernel_package_name
For example:
# rpm -e kernel-2.6.18-53.el5
To create a snapshot of an encapsulated boot disk, use the vxrootadm command:
# vxrootadm -s srcdisk -g dg mksnap destdisk newdg
For example:
# vxrootadm -s disk_0 -g rootdg mksnap disk_1 snapdg
In this example, disk_0 is the encapsulated boot disk, and rootdg is the
associated boot disk group. disk_1 is the target disk, and snapdg is the new
disk group name.
Unencapsulating the root disk
Do not remove the plexes on the root disk that correspond to the original disk
partitions.
Warning: This procedure requires a reboot of the system.
To remove rootability from a system
Use the vxplex command to remove all the plexes of the volumes rootvol,
swapvol, usr, var, opt and home on the disks other than the root disk.
For example, the following command removes the plexes mirrootvol-01 and
mirswapvol-01 that are configured on the disk rootmir:
# vxplex -g bootdg -o rm dis mirrootvol-01 mirswapvol-01
Chapter 32
Quotas
This chapter includes the following topics:
soft limit
Must be lower than the hard limit, and can be exceeded, but only for a
limited time. The time limit can be configured on a per-file system basis
only. The VxFS default limit is seven days.
Soft limits are typically used when a user must run an application that could generate
large temporary files. In this case, you can allow the user to exceed the quota limit
for a limited time. No allocations are allowed after the expiration of the time limit.
Use the vxedquota command to set limits.
See Using Veritas File System quotas on page 704.
Although file and data block limits can be set individually for each user and group,
the time limits apply to the file system as a whole. The quota limit information is
associated with user and group IDs and is stored in a user or group quota file.
See About quota files on Veritas File System on page 702.
The quota soft limit can be exceeded when VxFS preallocates space to a file.
See About extent attributes on page 177.
About Veritas File System quota commands
vxedquota
Edits quota limits for users and groups. The limit changes made by vxedquota are reflected both in the internal quotas file and the external quotas file.
vxrepquota
Provides a summary of quotas and disk usage.
vxquot
Provides file ownership and usage summaries.
vxquota
Displays quota limits and disk usage.
vxquotaon
Turns quotas on for a mounted VxFS file system.
vxquotaoff
Turns quotas off for a mounted VxFS file system.
Note: When VxFS file systems are exported via NFS, the VxFS quota commands
on the NFS client cannot query or edit quotas. You can use the VxFS quota
commands on the server to query or edit quotas.
Using Veritas File System quotas
To turn on user and group quotas for a VxFS file system, enter:
# vxquotaon /mount_point
To turn on user or group quotas for a file system at mount time, enter:
# mount -t vxfs -o quota special /mount_point
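For example, to mount a volume vol1 in a disk group mydg (placeholder names) with quotas turned on:
# mount -t vxfs -o quota /dev/vx/dsk/mydg/vol1 /mnt1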
The vxedquota command creates a temporary file for the given user; this file contains on-disk
quotas for each mounted file system that has a quotas file. It is not necessary that
quotas be turned on for vxedquota to work. However, the quota limits are applicable
only after quotas are turned on for a given file system.
To edit quotas
Specify the -u option to edit the quotas of one or more users specified by
username:
# vxedquota [-u] username
Editing the quotas of one or more users is the default behavior if the -u option
is not specified.
Specify the -g option to edit the quotas of one or more groups specified by
groupname:
# vxedquota -g groupname
Specify the -g and -t options to modify time limits for any group:
# vxedquota -g -t
To display a user's quotas and disk usage on all mounted VxFS file systems
where the quotas file exists, enter:
# vxquota -v [-u] username
To display a group's quotas and disk usage on all mounted VxFS file systems
where the quotas.grp file exists, enter:
# vxquota -v -g groupname
To display the number of files and the space owned by each user, enter:
# vxquot [-u] -f filesystem
To display the number of files and the space owned by each group, enter:
# vxquot -g -f filesystem
To turn off only user quotas for a VxFS file system, enter:
# vxquotaoff -u /mount_point
To turn off only group quotas for a VxFS file system, enter:
# vxquotaoff -g /mount_point
Chapter 33
File Change Log
The VxFS File Change Log (FCL) tracks changes to files and directories in a file
system. Applications that typically use the FCL include backup utilities, web crawlers,
search engines, and replication programs.
Note: The FCL tracks when the data has changed and records the change type,
but does not track the actual data changes. It is the responsibility of the application
to examine the files to determine the changed data.
FCL functionality is a separately licensable feature.
See the Symantec Storage Foundation Release Notes.
dump
Creates a regular file image of the FCL file that can be downloaded
to an off-host processing system. This file has a different format
than the FCL file.
on
Activates the FCL on a mounted file system. VxFS 5.0 and later
releases support either FCL Versions 3 or 4. If no version is
specified, the default is Version 4. Use fcladm on to specify the
version.
print
Prints the contents of the FCL file starting from the specified offset.
restore
Restores the FCL file from the regular file image of the FCL file
created by the dump keyword.
rm
Removes the FCL file. You must first deactivate the FCL with the
off keyword, before you can remove the FCL file.
set
Sets the specified attributes, such as fileopen and accessinfo tracking, in the FCL for a mounted file system.
state
Displays the current FCL state (on or off) for a mounted file system.
sync
Brings the FCL file to a stable state by flushing the associated data of an FCL recording interval.
fcl_keeptime
Specifies the duration in seconds that FCL records stay in the FCL
file before they can be purged. The first records to be purged are
the oldest ones, which are located at the beginning of the file.
Additionally, records at the beginning of the file can be purged if
allocation to the FCL file exceeds fcl_maxalloc bytes. The
default value of fcl_keeptime is 0. If the fcl_maxalloc
parameter is set, records are purged from the FCL file if the amount
of space allocated to the FCL file exceeds fcl_maxalloc. This
is true even if the elapsed time the records have been in the log is
less than the value of fcl_keeptime.
fcl_maxalloc
Specifies the maximum number of bytes that can be allocated to the FCL
file. When the space allocated to the FCL file exceeds this value, the
oldest records at the beginning of the file are purged.
fcl_winterval
Specifies the time in seconds that must elapse before the FCL
records an overwrite, extending write, or a truncate. This helps to
reduce the number of repetitive records in the FCL. The
fcl_winterval timeout is per inode. If an inode happens to go
out of cache and returns, its write interval is reset. As a result, there
could be more than one write record for that file in the same write
interval. The default value is 3600 seconds.
fcl_ointerval
Either or both fcl_maxalloc and fcl_keeptime must be set to activate the FCL
feature. The following are examples of using the fcladm command.
To activate FCL for a mounted file system, type the following:
# fcladm on mount_point
To deactivate the FCL for a mounted file system, type the following:
# fcladm off mount_point
To remove the FCL file for a mounted file system, on which FCL must be turned
off, type the following:
# fcladm rm mount_point
To obtain the current FCL state for a mounted file system, type the following:
# fcladm state mount_point
To enable tracking of the file opens along with access information with each event
in the FCL, type the following:
# fcladm set fileopen,accessinfo mount_point
To stop tracking file I/O statistics in the FCL, type the following:
# fcladm clear filestats mount_point
Print the on-disk FCL super-block in text format to obtain information about the FCL
file by using offset 0. Because the FCL on-disk super-block occupies the first block
of the FCL file, the first and last valid offsets into the FCL file can be determined
by reading the FCL super-block and checking the fc_foff field. Enter:
# fcladm print 0 mount_point
To print the contents of the FCL in text format (the specified offset must be
32-byte aligned), enter:
# fcladm print offset mount_point
Backward compatibility: Providing API access for the FCL feature allows backward
compatibility for applications. The API allows applications to parse the FCL file
independent of the FCL layout changes. Even if the on-disk layout of the FCL
changes, the API automatically translates the returned data to match the expected
output record. As a result, the user does not need to modify or recompile the
application due to changes in the on-disk FCL layout.
The following sample code fragment reads the FCL superblock, checks that the
state of the FCL is VX_FCLS_ON, issues a call to vxfs_fcl_sync to obtain a finishing
offset to read to, determines the first valid offset in the FCL file, then reads the
entries in 8K chunks from this offset. The section process fcl entries is what an
application developer must supply to process the entries in the FCL file.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/fcntl.h>
#include <errno.h>
#include <fcl.h>
#include <vxfsutil.h>

#define FCL_READSZ 8192

char *fclname = "/mnt/lost+found/changelog";

int read_fcl(char *fclname)
{
    struct fcl_sb fclsb;
    uint64_t off, lastoff;
    ssize_t size;
    char buf[FCL_READSZ], *bufp = buf;
    int fd;
    int err = 0;

    if ((fd = open(fclname, O_RDONLY)) < 0) {
        return ENOENT;
    }
    /* Read the FCL superblock from the start of the FCL file. */
    if (lseek(fd, 0, SEEK_SET) != 0) {
        close(fd);
        return EIO;
    }
    size = read(fd, &fclsb, sizeof (struct fcl_sb));
    if (size < 0) {
        close(fd);
        return EIO;
    }
    if (fclsb.fc_state == VX_FCLS_OFF) {
        close(fd);
        return 0;
    }
    /* Bring the FCL to a stable state and obtain the offset to read up to. */
    if ((err = vxfs_fcl_sync(fclname, &lastoff)) != 0) {
        close(fd);
        return err;
    }
    /* Seek to the first valid offset recorded in the superblock. */
    off = fclsb.fc_foff;
    if (lseek(fd, (off_t)off, SEEK_SET) != (off_t)off) {
        close(fd);
        return EIO;
    }
    while (off < lastoff) {
        if ((size = read(fd, bufp, FCL_READSZ)) <= 0) {
            close(fd);
            return errno;
        }
        /* process fcl entries */
        off += size;
    }
    close(fd);
    return 0;
}
vxfs_fcl_close()
Closes the FCL file and cleans up resources associated with the handle.
vxfs_fcl_cookie()
vxfs_fcl_getinfo() Returns information such as the state and version of the FCL file.
vxfs_fcl_open()
Opens the FCL file and returns a handle that can be used for further
operations.
vxfs_fcl_read()
vxfs_fcl_seek()
Extracts data from the specified cookie and then seeks to the
specified offset.
vxfs_fcl_seektime() Seeks to the first record in the FCL after the specified time.
Section
Reference
Appendix B
Tunable parameters
This appendix includes the following topics:
Tunable parameters
Tuning the VxFS file system
Partitioned directories
The new parameters take effect after a reboot or after the VxFS module is unloaded
and reloaded. The VxFS module can be loaded using the modprobe command or
automatically when a file system is mounted.
See the modprobe(8) manual page.
Note: New parameters in the /etc/modprobe.conf file are not read by the insmod
vxfs command.
The default value of delicache_enable is 1 for local mounts and 0 for cluster file
systems.
Partitioned directories
You can enable or disable the partitioned directories feature by setting the
pdir_enable tunable. Specifying a value of 1 enables partitioned directories, while
specifying a value of 0 disables partitioned directories. The default value is 1.
You can set the pdir_threshold tunable to specify the threshold value in terms of
directory size in bytes beyond which VxFS will partition a directory if you enabled
partitioned directories. The default value is 32768.
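As a minimal sketch, assuming that pdir_enable and pdir_threshold are set as vxfs module parameters in the /etc/modprobe.conf file as described above (verify the parameter names and mechanism for your release), the entry might look like the following:
options vxfs pdir_enable=1 pdir_threshold=32768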
The -d option of the fsadm command removes empty hidden directories from
partitioned directories. If you disabled partitioned directories, the fsadm -d command
also converts partitioned directories to regular directories.
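For example, to run this cleanup on a file system mounted at /mnt1 (a placeholder mount point), assuming the VxFS version of fsadm (/opt/VRTS/bin/fsadm) is in your PATH:
# fsadm -d /mnt1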
The partitioned directories feature operates only on disk layout Version 8 or later
file systems.
Warning: If the directories are very large, conversion between partitioned directories and
regular directories takes some time. If you enable the feature when the root directory
already contains a large number of files, the conversion can occur at file system mount
time and can cause the mount to take a long time. Symantec recommends performing
the conversion when directories are only lightly populated or empty.
the file system issues I/O requests that are up to a full stripe in size. If the stripe
size is larger than 256K, those requests are broken up.
To avoid undesirable I/O breakup, you can increase the maximum I/O size by
changing the value of the vol_maxio parameter in the /etc/modprobe.conf file.
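As a sketch, assuming that vol_maxio is exposed as a parameter of the vxio kernel module (verify the module and parameter names for your installation), the /etc/modprobe.conf entry might look like the following:
options vxio vol_maxio=4096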
Table B-1 DMP tunable parameters

dmp_cache_open
dmp_daemon_count
dmp_delayq_interval
dmp_fast_recovery
dmp_health_time
dmp_log_level
dmp_low_impact_probe
dmp_lun_retry_timeout
dmp_monitor_fabric
dmp_monitor_osevent
dmp_monitor_ownership
dmp_native_support
dmp_path_age
dmp_pathswitch_blks_shift
dmp_probe_idle_lun
dmp_probe_threshold
dmp_restore_cycles
dmp_restore_interval
dmp_restore_policy (possible values: check_all, check_alternate, check_disabled, check_periodic)
dmp_restore_state
dmp_retry_count
dmp_scsi_timeout
dmp_sfg_threshold
dmp_stat_interval
Methods to change Symantec Dynamic Multi-Pathing tunable parameters
To display the values of the DMP tunable parameters, use the following command:
# vxdmpadm gettune [dmp_tunable]
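To change the value of a DMP tunable parameter, use the vxdmpadm settune command. For example, to raise the logging level (the value 2 is only illustrative):
# vxdmpadm settune dmp_log_level=2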
You can also use the template method to view or change DMP tunable parameters.
See About tuning Symantec Dynamic Multi-Pathing (DMP) with templates
on page 731.
You can load the configuration file to the same host, or to another similar host. The
template method is useful for the following scenarios:
Configure multiple similar hosts with the optimal performance tuning values.
Configure one host for optimal performance. After you have configured the host,
dump the tunable parameters and attributes to a template file. You can then
load the template file to another host with similar requirements. Symantec
recommends that the hosts that use the same configuration template have the
same operating system and similar I/O requirements.
At any time, you can reset the configuration, which reverts the values of the tunable
parameters and attributes to the DMP default values.
You can manage the DMP configuration file with the vxdmpadm config commands.
See the vxdmpadm(1m) man page.
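For example, you might dump the current configuration to a template file and later load it on another host as follows (the file name /tmp/myconfig is only an example; see the vxdmpadm(1m) man page for the exact subcommands):
# vxdmpadm config dump file=/tmp/myconfig
# vxdmpadm config load file=/tmp/myconfig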
The configuration template file includes the following types of sections:
Namingscheme
Arraytype
Arrayname
Enclosurename
Loading is atomic for the section. DMP loads each section only if all of the attributes
in the section are valid. When all sections have been processed, DMP reports the
list of errors and warns the user. DMP does not support a partial rollback. DMP
verifies the tunables and attributes during the load process. However, Symantec
recommends that you check the configuration template file before you attempt to
load the file. Make any required corrections until the configuration file validates
correctly.
The attributes are given priority in the following order when a template is loaded:
Enclosure Section > Array Name Section > Array Type Section
If all enclosures of the same array type need the same settings, then remove the
corresponding array name and enclosure name sections from the template. Define
the settings only in the array type section. If some of the enclosures or array names
need customized settings, retain the attribute sections for the array names or
enclosures. You can remove the entries for the enclosures or the array names if
they use the same settings that are defined for the array type.
When you dump a configuration file from a host, that host may contain some arrays
which are not visible on the other hosts. When you load the template to a target
host that does not include the enclosure, array type, or array name, DMP ignores
the sections.
You may not want to apply settings to non-shared arrays or some host-specific
arrays on the target hosts. Be sure to define an enclosure section for each of those
arrays in the template. When you load the template file to the target host, the
enclosure section determines the settings. Otherwise, DMP applies the settings
from the respective array name or array type sections.
The following is a sample DMP configuration template file:

DMP Tunables
dmp_cache_open=on
dmp_daemon_count=10
dmp_delayq_interval=15
dmp_restore_state=enabled
dmp_fast_recovery=on
dmp_health_time=60
dmp_log_level=1
dmp_low_impact_probe=on
dmp_lun_retry_timeout=0
dmp_path_age=300
dmp_pathswitch_blks_shift=9
dmp_probe_idle_lun=on
dmp_probe_threshold=5
dmp_restore_cycles=10
dmp_restore_interval=300
dmp_restore_policy=check_disabled
dmp_retry_count=5
dmp_scsi_timeout=20
dmp_sfg_threshold=1
dmp_stat_interval=1
dmp_monitor_ownership=on
dmp_monitor_fabric=on
dmp_monitor_osevent=off
dmp_native_support=off
Namingscheme
namingscheme=ebn
persistence=yes
lowercase=yes
use_avid=yes
Arraytype
arraytype=CLR-A/PF
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Arraytype
arraytype=ALUA
iopolicy=adaptive
partitionsize=512
use_all_paths=no
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Arraytype
arraytype=Disk
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Arrayname
arrayname=EMC_CLARiiON
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Arrayname
arrayname=EVA4K6K
iopolicy=adaptive
partitionsize=512
use_all_paths=no
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Arrayname
arrayname=Disk
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
Enclosure
serial=CK200051900278
arrayname=EMC_CLARiiON
arraytype=CLR-A/PF
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
dmp_lun_retry_timeout=0
Enclosure
serial=50001FE1500A8F00
arrayname=EVA4K6K
arraytype=ALUA
iopolicy=adaptive
partitionsize=512
use_all_paths=no
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
dmp_lun_retry_timeout=0
Enclosure
serial=50001FE1500BB690
arrayname=EVA4K6K
arraytype=ALUA
iopolicy=adaptive
partitionsize=512
use_all_paths=no
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
dmp_lun_retry_timeout=0
Enclosure
serial=DISKS
arrayname=Disk
arraytype=Disk
iopolicy=minimumq
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
dmp_lun_retry_timeout=0
Edit the file to make any required changes to the tunable parameters in the
template.
The target host may include non-shared arrays or host-specific arrays. To avoid
updating these with settings from the array name or array type, define an
enclosure section for each of those arrays in the template. When you load the
template file to the target host, the enclosure section determines the settings.
Otherwise, DMP applies the settings from the respective array name or array
type sections.
During the loading process, DMP validates each section of the template. DMP
loads all valid sections. DMP does not load any section that contains errors.
To display the name of the template file that the host currently uses, the output appears similar to the following:

                DATE           TIME
==============================================
/tmp/myconfig   Feb 09, 2011   11:28:59

The template can include attributes such as the following:

iopolicy
partitionsize
use_all_paths
dmp_lun_retry_timeout
naming scheme
persistence
lowercase
use_avid
OS tunables
Tunable parameters for VxVM
basevm
Parameters to tune the core functionality of VxVM.
See Tunable parameters for core VxVM on page 739.
cvm
Parameters to tune Cluster Volume Manager (CVM).
See Tunable parameters for CVM on page 751.
fmr
Parameters to tune the FlashSnap functionality (FMR).
See Tunable parameters for FlashSnap (FMR) on page 746.
vvr
Parameters to tune Veritas Volume Replicator (VVR).
See Tunable parameters for VVR on page 752.
Table B-2 Tunable parameters for core VxVM

vol_checkpt_default
vol_default_iodelay
vol_max_adminio_poolsz
vol_max_vol
vol_maxio
vol_maxioctl
vol_maxparallelio
vol_maxspecialio
vol_stats_enable
vol_subdisk_num
voliomem_chunk_size
voliomem_maxpool_sz
voliot_errbuf_dflt
voliot_iobuf_default
voliot_iobuf_limit
voliot_iobuf_max
voliot_max_open
volraid_minpool_size
volraid_rsrtransmax
autostartvolumes
fssmartmovethreshold
Table B-3 Tunable parameters for FlashSnap (FMR)

vol_fmr_logsz
voldrl_dirty_regions
voldrl_max_drtregs
voldrl_max_seq_dirty
voldrl_min_regionsz
voldrl_volumemax_drtregs
volpagemod_max_memsz

Table B-4 Tunable parameters for CVM

autoreminor
same_key_for_alldgs
sharedminorstart
storage_connectivity

Table B-5 VVR Tunables

vol_cmpres_enabled
vol_cmpres_threads
vol_dcm_replay_size
vol_max_nmpool_sz
vol_max_rdback_sz
vol_max_wrspool_sz
vol_min_lowmem_sz
vol_nm_hb_timeout
vol_rvio_maxpool_sz
vol_vvr_use_nat
Methods to change Veritas Volume Manager tunable parameters
When decreasing the value of the vol_rvio_maxpool_sz tunable, all the RVGs
on the host must be stopped.
In a shared disk group environment, you may choose to set only those tunables
that are required on each host. However, we recommend that you configure the
tunables appropriately even if the tunables are currently not being used. This is
because if the logowner changes, then tunables on the new logowner will be used.
The following list of tunables are required to be set only on the logowner and not
the other hosts:
vol_max_rdback_sz
vol_max_nmpool_sz
vol_max_wrspool_sz
vol_dcm_replay_size
vol_nm_hb_timeout
vol_vvr_use_nat
The tunable changes that are done using the vxtune command affect only the
tunable values on the host on which it is run. Therefore, in a shared disk group
environment, you must run the command separately on each host for which you
want to change the tunable values.
Find the name and current value of the tunable you want to change. Use the
-l option to display a description.
# vxtune -l
The following example shows truncated output that illustrates the format.

Tunable                 Current Value  Default Value  Reboot  Clusterwide  Description
----------------------  -------------  -------------  ------  -----------  -----------
vol_checkpt_default     20480          20480          Y       N            Size of VxVM checkpoints
                                                                            (sectors)
vol_cmpres_enabled                                                          Allow enabling compression
                                                                            for VERITAS Volume Replicator
vol_cmpres_threads      10             10                                   Maximum number of compression
                                                                            threads for VERITAS Volume
                                                                            Replicator
vol_default_iodelay     50             50                                   Time to pause between I/O
                                                                            requests from VxVM utilities
                                                                            (10ms units)
vol_fmr_logsz                                                               Maximum size of bitmap Fast
                                                                            Mirror Resync uses to track
                                                                            changed blocks (KBytes)
vol_max_adminio_poolsz  67108864       67108864                             Maximum amount of memory used
                                                                            by VxVM admin IO's (bytes)
.
.
.
The output displays the default value and the current value. The Reboot field
indicates whether or not a reboot is required before the tunable value takes
effect. The Clusterwide field indicates whether vxtune applies the value to all
nodes in the cluster by default.
See the vxtune(1M) manual page.
Set the new value for a specific tunable. Specify the value with a suffix to
indicate the units: K, M, or G. If no unit is specified, the vxtune command uses
the default unit for the tunable parameter. For most tunables, the default unit
is bytes. The description in the vxtune output displays the default units for
each tunable.
# vxtune [-C] tunable_name tunable_value
If the specified tunable parameter is not clusterwide, use the -C option to set
its value for all nodes in the cluster.
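For example, to enable compression for Veritas Volume Replicator by setting vol_cmpres_enabled to 1:
# vxtune vol_cmpres_enabled 1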
For example, to view the changed value for vol_cmpres_enabled, use the
following command:
# vxtune vol_cmpres_enabled
Tunable              Current Value  Default Value  Reboot
-------------------  -------------  -------------  ------
vol_cmpres_enabled   1              0              N
The vxtune command changed the value on all nodes in the cluster, because
the vol_cmpres_enabled tunable parameter is clusterwide.
Export the tunable parameters and their values to a tunable template file. You
can export all of the tunable parameters or specify a component.
# vxtune export file=file_name [component]
For example:
# vxtune export file=vxvm-tunables
# vxtune export file=vvr-tunables vvr
Modify the template as required. You must retain the file format that the export
operation provides.
Import the tunable template file to the system. The import operation only applies
valid values. If a value is not valid for a specific parameter, that particular value
is discarded.
# vxtune import file=file_name
For example:
# vxtune import file=vxvm-tunables
Appendix C
Disk layout

Version 6     Can be mounted only to upgrade to a supported version
Version 7     Supported
Version 8     Supported
Version 9     Supported
Version 10    Supported
Some of the disk layout versions were not supported on all UNIX operating systems.
Currently, only the Version 7, 8, 9, and 10 disk layouts can be created and mounted.
The Version 6 disk layout can be mounted, but only for upgrading to a supported
version. Disk layout Version 6 cannot be cluster mounted. To cluster mount such
a file system, you must first mount the file system on one node and then upgrade
to a supported disk layout version using the vxupgrade command. No other versions
can be created or mounted. Version 10 is the default disk layout version.
The vxupgrade command is provided to upgrade an existing VxFS file system to
the Version 7 layout while the file system remains online.
See the vxupgrade(1M) manual page.
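For example, to upgrade a file system mounted at /mnt1 (a placeholder mount point) to disk layout Version 10:
# vxupgrade -n 10 /mnt1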
The vxfsconvert command is provided to upgrade ext2 and ext3 file systems to
the Version 7 disk layout while the file system is not mounted.
See the vxfsconvert(1M) manual page.
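For example, to convert an unmounted ext3 file system on a hypothetical device /dev/sdb1:
# vxfsconvert /dev/sdb1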
1024 bytes
2048 bytes
4096 bytes
8192 bytes
Appendix D
Command reference
This appendix includes the following topics:
Command reference
Command completion for Veritas commands
Enabling command completion creates the .bash_profile file, if it is not present.
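To enable command completion, enter:
# vxdctl cmdcompletion enable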
To permanently disable the command completion, use the following command:
# vxdctl cmdcompletion disable
vxassist
vxdisk
vxplex
vxprint
vxsnap
vxstat
vxtune
vxcache
vxconfigd
vxtask
vxreattach
vxdmpadm
vxddladm
vxvol
vxcdsconvert
vxresize
vxdctl
vxsd
vxdisksetup
vxdiskunsetup
vxrecover
vxedit
vxdg
vxclustadm
Veritas Volume Manager command reference
If you are using the Bourne or Korn shell (sh or ksh), use the commands:
$ PATH=$PATH:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin:\
/opt/VRTSdbed/bin:/opt/VRTSob/bin
$ MANPATH=/usr/share/man:/opt/VRTS/man:$MANPATH
$ export PATH MANPATH
VxVM library commands and supporting scripts are located under the
/usr/lib/vxvm directory hierarchy. You can include these directories in your path
if you need to use them on a regular basis.
For detailed information about an individual command, refer to the appropriate
manual page in the 1M section.
See Veritas Volume Manager manual pages on page 785.
Commands and scripts that are provided to support other commands and scripts,
and which are not intended for general use, are not located in /opt/VRTS/bin and
do not have manual pages.
Commonly-used commands are summarized in the following tables:
Table D-1 lists commands for obtaining information about objects in VxVM.
Table D-3 lists commands for creating and administering disk groups.
Table D-8 lists commands for monitoring and controlling tasks in VxVM.
Table D-1
Command
Description
vxdisk [-g diskgroup] list [diskname] Lists disks under control of VxVM.
See Displaying disk information
on page 261.
Example:
# vxdisk -g mydg list
vxdg -s list Lists the shared disk groups in a cluster.
vxlist
vxprint -pt [-g diskgroup] [plex ...] Displays information about plexes.
Example:
# vxprint -pt -g mydg
Table D-2
Administering disks
Command
Description
vxdiskadm Administers disks in VxVM using a menu-based interface.
vxdisksetup devicename Configures a disk for use with VxVM.
vxdiskunsetup devicename Removes a disk from VxVM control.
Table D-3
Creating and administering disk groups
Command
Description
vxdg [-o expand] listmove sourcedg \
targetdg object ...
Lists the objects potentially affected by moving a disk group.
See Listing objects potentially affected by a move on page 594.
Example:
# vxdg -o expand listmove \
mydg newdg myvol1
Table D-4
Creating and administering subdisks
Command
Description
Creates a subdisk.
Example:
# vxmake -g mydg sd \
mydg02-01 mydg02,0,8000
Replaces a subdisk.
Example:
# vxsd -g mydg mv mydg01-01 \
mydg02-01
Removes a subdisk.
Example:
# vxedit -g mydg rm mydg02-01
vxsd [-g diskgroup] -o rm dis subdisk Dissociates and removes a subdisk from
a plex.
Example:
# vxsd -g mydg -o rm dis \
mydg02-01
Table D-5
Creating and administering plexes
Command
Description
vxplex [-g diskgroup] att volume plex Attaches a plex to an existing volume.
See Reattaching a plex manually
on page 660.
Example:
# vxplex -g mydg att vol01 \
vol01-02
Detaches a plex.
Example:
# vxplex -g mydg det vol01-02
Replaces a plex.
Example:
# vxplex -g mydg mv \
vol02-02 vol02-03
vxmend [-g diskgroup] fix clean plex Sets the state of a plex in an unstartable
volume to CLEAN.
See Reattaching a plex manually
on page 660.
Example:
# vxmend -g mydg fix clean \
vol02-02
vxplex [-g diskgroup] -o rm dis plex Dissociates and removes a plex from a
volume.
Example:
# vxplex -g mydg -o rm dis \
vol03-01
Table D-6
Creating volumes
Command
Description
Creates a volume.
See Creating a volume on specific
disks on page 146.
Example:
# vxassist -b -g mydg make \
myvol 20g layout=concat \
mydg01 mydg02
vxvol [-g diskgroup] init zero \
volume
Initializes the contents of a volume to zeroes.
Table D-7
Administering volumes
Command
Description
vxsnap [-g diskgroup] prepare volume \
[drl=on|sequential|off]
Prepares a volume for instant snapshots and for DRL logging.
See Adding an instant snap DCO and DCO volume on page 340.
Example:
# vxsnap -g mydg prepare \
myvol drl=on
Removes a volume.
See Removing a volume on page 662.
Example:
# vxassist -g mydg remove \
myvol
Table D-8
Monitoring and controlling tasks
Command
Description
Veritas Volume Manager manual pages
The manual pages are organized into sections, including section 1M (administrative
commands) and section 4 (file formats).
Table D-9
Name
Description
vxassist
vxcache
vxcached
vxcdsconvert
vxclustadm
vxcmdlog
vxconfigbackup
vxconfigbackupd
vxconfigd
vxconfigrestore
vxdco
vxdctl
vxddladm
vxdefault
vxdg
vxdisk
vxdiskadd
vxdiskadm
vxdisksetup
vxdiskunsetup
vxdmpadm
vxdmptune
vxedit
vxencap
vxevac
vxinfo
vxinitrd
vxinstall
vxintro
vxiod
vxmake
vxmemstat
vxmend
vxmirror
vxnotify
vxplex
vxprint
vxr5check
vxreattach
vxrecover
vxrelayout
vxrelocd
vxresize
vxrootadm
vxrootmir
vxscsiinq
vxsd
vxsnap
vxstat
vxtask
vxtrace
vxtranslog
vxtune
vxunreloc
vxunroot
vxvol
vxvoltune
vxvset
Table D-10
Name
Description
vol_pattern
vxmake
Veritas File System command summary
Table D-11
VxFS commands
Command
Description
df
Reports the number of free disk blocks and inodes for a VxFS file system.
fcladm
ff
Lists file names and inode information for a VxFS file system.
fiostat
fsadm
fsapadm
fscat
fscdsadm
fscdsconv
fscdstask
fsck
fsckpt_restore
fsclustadm
fsdb
fsdedupadm
fsfreeze
Freezes VxFS file systems and executes a user command on the file systems.
fsmap
fsppadm
fsppmk
fstag
fstyp
fsvmap
fsvoladm
glmconfig
glmdump
Reports stuck Group Lock Manager (GLM) locks in a cluster file system.
glmstat
mkdstfs
mkfs
mount
ncheck
Generates path names from inode numbers for a VxFS file system.
setext
vxcompress
vxdump
vxedquota
vxenablef
vxfilesnap
vxfsconvert
Converts an unmounted file system to VxFS or upgrades a VxFS disk layout version.
vxfsstat
vxlsino
vxquot
vxquota
vxquotaoff
vxquotaon
vxrepquota
vxrestore
vxtunefs
vxupgrade
Veritas File System manual pages
Table D-12
Section 1
Description
fiostat
fsmap
getext
setext
vxcompress
vxfilesnap
Table D-13
Section 1M
Description
df_vxfs
Reports the number of free disk blocks and inodes for a VxFS file system.
fcladm
ff_vxfs
Lists file names and inode information for a VxFS file system.
fsadm_vxfs
fsapadm
fscat_vxfs
fscdsadm
fscdsconv
fscdstask
fsck_vxfs
fsckptadm
Performs various administrative tasks on Storage Checkpoints, such as creating, deleting, and
converting them, and setting and displaying their quotas.
Quota output can be formatted in a human-friendly way by using the -H option.
fsdbencap
Encapsulates databases.
fsdb_vxfs
fsdedupadm
fsfreeze
Freezes VxFS file systems and executes a user command on the file systems.
fsppadm
fstyp_vxfs
fsvmap
fsvoladm
glmconfig
Configures the Group Lock Manager (GLM). This functionality is available only with the Symantec
Storage Foundation Cluster File System product.
glmdump
Reports stuck Group Lock Manager (GLM) locks in a cluster file system.
mkdstfs
mkfs_vxfs
mount_vxfs
ncheck_vxfs
Generates path names from inode numbers for a VxFS file system.
quot
vxdump
vxedquota
vxenable
vxfsconvert
Converts an unmounted file system to VxFS or upgrades a VxFS disk layout version.
vxfsstat
vxlsino
vxquot
vxquota
vxquotaoff
vxquotaon
vxrepquota
vxrestore
vxtunefs
vxupgrade
Table D-14
Section 3
Description
vxfs_ap_alloc2
vxfs_ap_assign_ckpt
vxfs_ap_assign_ckptchain
vxfs_ap_assign_ckptdef
vxfs_ap_assign_file
vxfs_ap_assign_file_pat
vxfs_ap_assign_fs
Assigns an allocation policy for all file data and metadata within a specified
file system.
vxfs_ap_assign_fs_pat
vxfs_ap_define
vxfs_ap_define2
vxfs_ap_enforce_ckpt
vxfs_ap_enforce_ckptchain
Enforces the allocation policy for all of the Storage Checkpoints of a VxFS
file system.
vxfs_ap_enforce_file
Ensures that all blocks in a specified file match the file allocation policy.
vxfs_ap_enforce_file2
vxfs_ap_enforce_range
vxfs_ap_enumerate
vxfs_ap_enumerate2
vxfs_ap_free2
vxfs_ap_query
vxfs_ap_query2
vxfs_ap_query_ckpt
vxfs_ap_query_ckptdef
vxfs_ap_query_file
vxfs_ap_query_file_pat
vxfs_ap_query_fs
vxfs_ap_query_fs_pat
vxfs_ap_remove
vxfs_fcl_sync
vxfs_fiostats_dump
vxfs_fiostats_getconfig
vxfs_fiostats_set
Turns on and off file range I/O statistics and resets statistics counters.
vxfs_get_ioffsets
vxfs_inotopath
vxfs_inostat
vxfs_inotofd
vxfs_nattr_check
vxfs_nattr_fcheck
vxfs_nattr_link
vxfs_nattr_open
vxfs_nattr_rename
vxfs_nattr_unlink
vxfs_nattr_utimes
vxfs_vol_add
vxfs_vol_clearflags
vxfs_vol_deencapsulate
vxfs_vol_encapsulate
vxfs_vol_encapsulate_bias
vxfs_vol_enumerate
vxfs_vol_queryflags
vxfs_vol_remove
vxfs_vol_resize
vxfs_vol_setflags
vxfs_vol_stat
Section 4
Description
fs_vxfs
inode_vxfs
tunefstab
Section 7
Description
vxfsio
Index
Symbols
/boot/grub/menu.lst file 691
/dev/vx/dmp directory 36
/dev/vx/rdmp directory 36
/etc/default/vxassist file 117, 554
/etc/default/vxdg file 626
/etc/fstab file 662
/etc/grub.conf file 691
/etc/init.d/vxvm-recover file 558
/etc/lilo.conf file 691
/etc/volboot file 55
/etc/vx/darecs file 55
/etc/vx/dmppolicy.info file 234
/etc/vx/volboot file 633
A
A/A disk arrays 35
A/A-A disk arrays 35
A/P disk arrays 35
A/P-C disk arrays 35–36
A/PF disk arrays 36
A/PG disk arrays 36
about
DMP 28
Veritas Operations Manager 31
access port 35
active path attribute 230
active paths
devices 231–232
ACTIVE state 333
Active/Active disk arrays 35
Active/Passive disk arrays 35
adaptive load-balancing 234
adaptiveminq policy 234
adding disks 279
allocation policies 178
default 178
extent 31
extent based 30, 99
APM
configuring 249
attributes (continued)
syncing 339, 367
autogrow
tuning 369
autogrow attribute 342, 347
autogrowby attribute 342
autotrespass mode 35
B
backups
created using snapshots 339
creating for volumes 312
creating using instant snapshots 339
creating using third-mirror snapshots 371
for multiple volumes 356, 376
of disk group configuration 654
of FSS disk group configuration 654
bad block remapping 161
balanced path policy 235
base minor number 635
BIOS
restrictions 680
blkclear mount option 161
block based architecture 105
blockmap for a snapshot file system 414
blocks on disks 52
boot disk
encapsulating 690
mirroring 690
unencapsulating 699
booting root volumes 689
BROKEN state 333
buffered file systems 98
buffered I/O 292
C
cache
for space-optimized instant snapshots 314
cache advisories 294
cache attribute 347
cache objects
creating 342
enabling 343
listing snapshots in 369
caches
creating 342
deleting 371
finding out snapshots configured on 371
caches (continued)
growing 370
listing snapshots in 369
removing 371
resizing 370
shrinking 370
stopping 371
used by space-optimized instant snapshots 314
cachesize attribute 347
campus clusters
serial split brain condition in 646
cascade instant snapshots 334
cascaded snapshot hierarchies
creating 360
categories
disks 188
CDS
compatible disk groups 626
cds attribute 626
check_all policy 247
check_alternate policy 247
check_disabled policy 247
check_periodic policy 248
checkpoint interval 740
cio
Concurrent I/O 167
cloned disks 93, 640
cluster mount 97
clusters
use of DMP in 43
vol_fmr_logsz tunable 747
columns
changing number of 609
in striping 60
mirroring in striped-mirror volumes 144
commands
cron 109
fsadm 109
getext 181
setext 181
compressing files 100
concatenated volumes 57, 139
concatenated-mirror volumes
creating 142
defined 65
recovery 140
concatenation 57
configuration backup and restoration 654
configuration changes
monitoring using vxnotify 605
configuration database
listing disks with 644
metadata 643
reducing size of 589
Configuring DMP
using templates 731
contiguous reservation 180
Controller ID
displaying 218
controllers
disabling for DMP 240
disabling in DMP 207
displaying information about 217
enabling for DMP 241
mirroring across 150
specifying to vxassist 146
converting a data Storage Checkpoint to a nodata
Storage Checkpoint 390
convosync mount option 157, 162
copy-on-write
used by instant snapshots 332
copy-on-write technique 319, 385
copymaps 89–90
creating a multi-volume support file system 462
creating file systems with large files 165
creating files with mkfs 153, 156
cron 109, 175
cron sample script 176
customized naming
DMP nodes 265
D
data change object
DCO 89
data copy 291
data redundancy 63, 66
data Storage Checkpoints definition 322
data synchronous I/O 162, 292
data transfer 291
data volume configuration 80
database replay logs and sequential DRL 79
databases
integrity of data in 312
resilvering 79
resynchronizing 79
datainlog mount option 161
DCO
adding version 0 DCOs to volumes 380
considerations for disk layout 595
data change object 89
dissociating version 0 DCOs from volumes 383
effect on disk group split and join 595
instant snap version 89
log plexes 87
log volume 89
moving log plexes 382
reattaching version 0 DCOs to volumes 383
removing version 0 DCOs from volumes 383
specifying storage for version 0 plexes 382
used with DRL 79
version 0 89
version 20 89–90
versioning 89
dcolen attribute 91, 381
DDL 38
Device Discovery Layer 191
default
allocation policy 178
defaultdg 586
defragmentation 109
extent 175
scheduling with cron 175
delaylog 98
delaylog mount option 159
device discovery
introduced 38
partial 186
Device Discovery Layer 191
Device Discovery Layer (DDL) 38, 191
device names 49
configuring persistent 266
user-specified 265
device nodes
controlling access for volume sets 457
displaying access for volume sets 457
enabling access for volume sets 456
for volume sets 455
devices
adding foreign 203
fabric 186
JBOD 187
listing all 192
making invisible to VxVM 204
nopriv 678
path redundancy 231–232
disks (continued)
Device Discovery Layer 191
disabled path 209
discovery of by DMP 185
discovery of by VxVM 187
disk access records file 55
disk arrays 50
displaying information 261–262
displaying information about 261, 624
displaying naming scheme 264
displaying spare 550
dynamic LUN expansion 257
EFI 674, 680
enabled path 209
encapsulation 673, 679
enclosures 38
excluding free space from hot-relocation use 553
failure handled by hot-relocation 545
formatting 270
handling clones 93
handling duplicated identifiers 93
hot-relocation 543
initializing 271
installing 270
invoking discovery of 189
layout of DCO plexes 595
listing tags on 643
listing those supported in JBODs 199
making available for hot-relocation 551
making free space available for hot-relocation
use 554
marking as spare 551
mirroring boot disk 690
mirroring root disk 690
mirroring volumes on 613
moving between disk groups 588, 597
moving disk groups between systems 631
moving volumes from 587
nopriv devices 678
OTHER_DISKS category 188
partial failure messages 547
postponing replacement 667
primary path 209
reinitializing 279
releasing from disk groups 653
removing 280, 667
removing from disk groups 626
removing from DISKS category 202
removing from pool of hot-relocation spares 552
disks (continued)
removing from VxVM control 626, 663
removing with subdisks 282–283
renaming 283
replacing 667
replacing removed 670
root disk 679
scanning for 185
secondary path 209
setting tags on 642
spare 549
specifying to vxassist 146
UDID flag 95
unique identifier 95
VxVM 52
writing a new identifier to 639
DISKS category 188
adding disks 200
listing supported disks 199
removing disks 202
displaying
DMP nodes 213
HBA information 218
redundancy levels 231
supported disk arrays 197
displaying mounted file systems 172
displaying statistics
erroneous I/Os 227
queued I/Os 227
DMP
check_all restore policy 247
check_alternate restore policy 247
check_disabled restore policy 247
check_periodic restore policy 248
configuring disk devices 185
configuring DMP path restoration policies 247
configuring I/O throttling 244
configuring response to I/O errors 242, 246
disabling array ports 240
disabling controllers 240
disabling multi-pathing 204
disabling paths 240
disk discovery 185
displaying DMP database information 207
displaying DMP node for a path 212
displaying DMP node for an enclosure 212–213
displaying DMP nodes 213
displaying information about array ports 219
displaying information about controllers 217
DMP (continued)
displaying information about enclosures 218
displaying information about paths 208
displaying LUN group for a node 214
displaying paths controlled by DMP node 215
displaying paths for a controller 215
displaying paths for an array port 216
displaying recoveryoption values 246
displaying status of DMP path restoration
thread 249
displaying TPD information 219
dynamic multi-pathing 34
enabling array ports 241
enabling controllers 241
enabling multi-pathing 206
enabling paths 241
enclosure-based naming 37
gathering I/O statistics 224
in a clustered environment 43
load balancing 42
logging levels 725
metanodes 36
nodes 36
path aging 724
path failover mechanism 41
path-switch tunable 728
renaming an enclosure 242
restore policy 247
scheduling I/O on secondary paths 237
setting the DMP restore polling interval 247
stopping the DMP restore daemon 248
tuning with templates 731
vxdmpadm 210
DMP nodes
displaying consolidated information 213
setting names 265
DMP support
JBOD devices 187
dmp_cache_open tunable 723
dmp_daemon_count tunable 724
dmp_delayq_interval tunable 724
dmp_fast_recovery tunable 724
dmp_health_time tunable 724
dmp_log_level tunable 725
dmp_low_impact_probe 725
dmp_lun_retry_timeout tunable 726
dmp_monitor_fabric tunable 726
dmp_monitor_osevent tunable 727
dmp_monitor_ownership tunable 727
E
EFI disks 674, 680
EMC arrays
moving disks between disk groups 597
EMC PowerPath
coexistence with DMP 190
EMC Symmetrix
autodiscovery 189
enabled paths
displaying 209
encapsulating disks 673, 679
encapsulating volumes 460
encapsulation
failure of 677
root disk 691
supported layouts for root disk 682
unsupported layouts for root disk 685
enclosure-based naming 38, 263
DMP 37
enclosures 38
displaying information about 218
path redundancy 231–232
setting attributes of paths 230, 232
enhanced data integrity modes 98
ENOSPC 399
erroneous I/Os
displaying statistics 227
error messages
Disk for disk group not found 633
Disk group has no valid configuration copies 633
Disk group version doesn't support feature 617
Disk is in use by another host 633
Disk is used by one or more subdisks 626
Disk not moving
but subdisks on it are 594
import failed 633
It is not possible to encapsulate 677
No valid disk found containing disk group 633
The encapsulation operation failed 677
tmpsize too small to perform this relayout 73
unsupported layout 677
vxdg listmove failed 594
errord daemon 40
errors
handling transient errors 726
expansion 109
explicit failover mode 36
Extensible Firmware Interface (EFI) disks 674, 680
extent 30, 99, 177
attributes 99, 177
reorganization 176
extent allocation 31
aligned 178
control 99, 177
fixed size 177
extent attributes 99, 177
external quotas file 702
F
fabric devices 186
FAILFAST flag 41
failover mode 35
failure handled by hot-relocation 545
failure in RAID-5 handled by hot-relocation 545
FastResync
effect of growing volume on 91
limitations 92
Non-Persistent 85
Persistent 85–86, 312
size of bitmap 747
snapshot enhancements 330
use with snapshots 84
FastResync/cache object metadata cache size
tunable 750
fc_foff 712
file
sparse 179
file compression 100
file system
block size 183
buffering 98
displaying mounted 172
file systems
unmounting 662
fileset
primary 317
FileSnaps
about 324
data mining, reporting, and testing 409
virtual desktops 408
write intensive applications 409
backup 326
best practices 408
block map fragmentation 326
concurrent I/O 325
copy-on-write 325
creation 406
properties 324
reading from 326
using 407
fixed extent size 177
fixed write size 179
FlashSnap 310
FMR. See FastResync
foreign devices
adding 203
formatting disks 270
fragmentation
monitoring 175–176
reorganization facilities 175
reporting 175
fragmented file system characteristics 175
free space in disk groups 549
free space monitoring 174
Freeze 101
freeze 294
freezing and thawing, relation to Storage
Checkpoints 317
fsadm 109
how to minimize file system free space
fragmentation 171
how to reorganize a file system 171
how to resize a file system 168
reporting extent fragmentation 176
fsadm (continued)
scheduling defragmentation using cron 176
thin reclamation 436
fsadm_vxfs 166
fscat 413
fsck 390
fsckptadm
Storage Checkpoint administration 386
FSS configuration backup and restoration 654
FSS disk groups
configuration backup and restoration 654
fstyp
how to determine the file system type 173
fsvoladm 462
full-sized instant snapshots 331
creating 349
creating volumes for use as 344
fullinst snapshot type 366
G
get I/O parameter ioctl 295
getext 181
GPT labels 674, 680
GUID Partition Table (GPT) labels 674, 680
H
HBA information
displaying 218
HBAs
listing ports 193
listing supported 192
listing targets 193
highwatermark attribute 342
hot-relocation
complete failure messages 548
configuration summary 550
daemon 544
defined 78
detecting disk failure 545
detecting plex failure 545
detecting RAID-5 subdisk failure 545
excluding free space on disks from use by 553
limitations 545
making free space on disks available for use
by 554
marking disks as spare 551
modifying behavior of 558
notifying users other than root 559
hot-relocation (continued)
operation of 543
partial failure messages 547
preventing from running 559
reducing performance impact of recovery 559
removing disks from spare pool 552
subdisk relocation 550
subdisk relocation messages 555
unrelocating subdisks 555
unrelocating subdisks using vxunreloc 556
use of free space in disk groups 549
use of spare disks 549
use of spare disks and free space 549
using only spare disks for 554
vxrelocd 544
how to access a Storage Checkpoint 388
how to create a Storage Checkpoint 387
how to determine the file system type 173
how to display mounted file systems 168
how to minimize file system free space
fragmentation 171
how to mount a Storage Checkpoint 388
how to remove a Storage Checkpoint 388
how to reorganize a file system 171
how to resize a file system 168
how to unmount a Storage Checkpoint 390
I
I/O
direct 291
gathering statistics for DMP 224
kernel threads 48
scheduling on secondary paths 237
sequential 292
synchronous 292
throttling 41
I/O operations
maximum size of 741
I/O policy
displaying 233
example 237
specifying 233
I/O requests
asynchronous 162
synchronous 161
I/O throttling 244
I/O throttling options
configuring 246
identifiers for tasks 602
J
JBOD
DMP support 187
JBODs
adding disks to DISKS category 200
listing supported disks 199
removing disks from DISKS category 202
K
kernel logs
for disk groups 643
kernel tunable parameters 720
L
large files 105, 165
creating file systems with 165
mounting file systems with 166
largefiles mount option 166
layered volumes
defined 70, 140
striped-mirror 64
layouts
left-symmetric 68
types of volume 139
left-symmetric layout 68
LILO
restrictions 680
link objects 333
linked break-off snapshots 333
creating 354
linked third-mirror snapshots
reattaching 362
listing
DMP nodes 213
supported disk arrays 197
load balancing 35
displaying policy for 233
specifying policy for 233
local mount 97
lock clearing on disks 633
log mount option 157, 159
log subdisks
DRL 79
logdisk 145
logical units 35
logiosize mount option 160
logs
kernel 643
logs (continued)
RAID-5 70, 77
specifying number for RAID-5 144
LUN 35
LUN expansion 257
LUN group failover 36
LUN groups
displaying details of 214
LUNs
idle 728
thin provisioning 425
M
Master Boot Record
restrictions 680
maxautogrow attribute 342
maxdev attribute 637
maximum I/O size 722
memory
granularity of allocation by VxVM 743
maximum size of pool for VxVM 743
minimum size of pool for VxVM 745
persistence of FastResync in 85
messages
complete disk failure 548
hot-relocation of subdisks 555
partial disk failure 547
metadata 643
multi-volume support 460
METADATA subdisks 694
metanodes
DMP 36
migrating to thin storage 425
mincache mount option 157, 161
minimum queue load balancing policy 235
minimum redundancy levels
displaying for a device 231
specifying for a device 232
minor numbers 634
mirbrk snapshot type 366
mirdg attribute 355
mirrored volumes
changing read policies for 151
configuring VxVM to create by default 613
creating 140
creating across controllers 150
creating across targets 148
defined 139
dirty region logging 78
N
name space
preserved by Storage Checkpoints 386
names
changing for disk groups 644
defining for snapshot volumes 376
device 49
renaming disks 283
naming
DMP nodes 265
naming scheme
changing for disks 263
changing for TPD enclosures 267
displaying for disks 264
native asynchronous I/O
with cloned processes 723
ncachemirror attribute 347
ncheck 718
ndcomirror attribute 381
ndcomirs attribute 340
newvol attribute 353
nmirror attribute 352–353
nodata Storage Checkpoints 390
nodata Storage Checkpoints definition 322
nodatainlog mount option 157, 161
nodes
DMP 36
nomanual path attribute 230
non-autotrespass mode 36
Non-Persistent FastResync 85
nopreferred path attribute 230
nopriv devices 678
O
O_SYNC 157
objects
physical 49
virtual 51
online invalid status 262
online relayout
changing number of columns 609
changing region size 612
changing speed of 612
changing stripe unit size 609
controlling progress of 611
defined 72
destination layouts 606
failure recovery 76
how it works 72
limitations 75
monitoring tasks for 611
pausing 611
performing 606
resuming 611
reversing direction of 612
specifying non-default 609
specifying plexes 610
specifying task tags for 610
temporary area 73
transformation characteristics 76
transformations and volume length 76
types of transformation 607
viewing status of 611
online status 262
ordered allocation 145, 148
OTHER_DISKS category 188
P
parity in RAID-5 66
partial device discovery 186
partition size
displaying the value of 233
specifying 235
partition table 694
partitions
number 50
slices 50
Q
queued I/Os
displaying statistics 227
quota commands 703
quotacheck 704
quotas 701
exceeding the soft limit 702
hard limit 405, 701
soft limit 701
quotas file 702
quotas.grp file 702
R
RAID-0 59
RAID-0+1 63
RAID-1 63
RAID-1+0 64
RAID-5
hot-relocation limitations 545
logs 70, 77
parity 66
specifying number of logs 144
subdisk failure handled by hot-relocation 545
volumes 66
RAID-5 volumes
changing number of columns 609
changing stripe unit size 609
creating 144
defined 139
raw device nodes
controlling access for volume sets 457
displaying access for volume sets 457
enabling access for volume sets 456
for volume sets 455
read policies
changing 151
prefer 151
round 151
select 151
siteread 151
split 151
recovery
checkpoint interval 740
I/O delay 740
recovery accelerator 79
recovery option values
configuring 246
redo log configuration 80
redundancy
of data on mirrors 139
of data on RAID-5 139
redundancy levels
displaying for a device 231
specifying for a device 232
redundant-loop access 40
regionsize attribute 340, 342
reinitialization of disks 279
relayout
changing number of columns 609
changing region size 612
changing speed of 612
relayout (continued)
changing stripe unit size 609
controlling progress of 611
limitations 75
monitoring tasks for 611
online 72
pausing 611
performing online 606
resuming 611
reversing direction of 612
specifying non-default 609
specifying plexes 610
specifying task tags for 610
storage 72
transformation characteristics 76
types of transformation 607
viewing status of 611
relocation
automatic 543
complete failure messages 548
limitations 545
partial failure messages 547
removable Storage Checkpoints definition 323
removing devices
from VxVM control 204
removing disks 667
removing physical disks 280
reorganization
directory 176
extent 176
replacing disks 667
replay logs and sequential DRL 79
report extent fragmentation 175
reservation space 177
resilvering
databases 79
restoration of disk group configuration 654
restoration of FSS disk group configuration 654
restore policy
check_all 247
check_alternate 247
check_disabled 247
check_periodic 248
restored daemon 40
restrictions
at boot time 689
on BIOS 680
on Master Boot Record 680
on rootability 680
restrictions (continued)
on using LILO 680
resyncfromoriginal snapback 338
resyncfromreplica snapback 338
resynchronization
checkpoint interval 740
I/O delay 740
of volumes 77
resynchronizing
databases 79
snapshots 315
retry option values
configuring 246
Reverse Path Name Lookup 718
root disk
defined 679
encapsulating 690
encapsulation 691
mirroring 690
supported layouts for encapsulation 682
unencapsulating 699
unsupported layouts for encapsulation 685
root volume 689
rootability 679
removing 699
restrictions 680
round read policy 151
round-robin
load balancing 236
read policy 151
S
s# 50
scandisks
vxdisk subcommand 185
secondary path 35
secondary path attribute 231
secondary path display 209
select read policy 151
sequential DRL
defined 79
maximum number of dirty regions 748
sequential I/O 292
serial split brain condition
correcting 651
in campus clusters 646
in disk groups 646
setext 181
setting
path redundancy levels 232
single active path policy 236
siteread read policy 151
slices
partitions 50
SmartMove feature 425
SmartSync 79
SmartTier 451
multi-volume file system support 460
snap objects 88
snap volume naming 338
snapabort 330
snapback
defined 331
merging snapshot volumes 377
resyncfromoriginal 338
resyncfromreplica 338, 377
snapclear
creating independent volumes 378
snapmir snapshot type 366
snapof 416
snapped file systems 100, 327
performance 413
unmounting 327
snapread 413
snapshot file systems 100, 327
blockmap 414
creating 416
data block area 414
disabled 328
fscat 413
fuser 327
mounting 416
multiple 327
on cluster file systems 327
performance 413
read 413
super-block 414
snapshot hierarchies
creating 360
splitting 364
snapshot mirrors
adding to volumes 359
removing from volumes 360
snapshots
adding mirrors to volumes 359
adding plexes to 378
and FastResync 84
snapshots (continued)
backing up multiple volumes 356, 376
backing up volumes online using 339
cascaded 334
comparison of features 82
converting plexes to 375
creating a hierarchy of 360
creating backups using third-mirror 371
creating for volume sets 357
creating full-sized instant 349
creating independent volumes 378
creating instant 339
creating linked break-off 354
creating snapshots of 335
creating space-optimized instant 346
creating third-mirror break-off 351
creating volumes for use as full-sized instant 344
defining names for 376
displaying information about 379
displaying information about instant 365
dissociating instant 363
finding out those configured on a cache 371
full-sized instant 83, 331
hierarchy of 334
improving performance of synchronization 368
instant 314
linked break-off 333
listing for a cache 369
merging with original volumes 377
of volumes 81
on multiple volumes 338
reattaching instant 361
reattaching linked third-mirror 362
refreshing instant 361
removing 375
removing instant 364
removing linked snapshots from volumes 360
removing mirrors from volumes 360
restoring from instant 363
resynchronization on snapback 338
resynchronizing 315
resynchronizing volumes from 377
space-optimized instant 314
synchronizing instant 367
third-mirror 82
use of copy-on-write mechanism 332
snapsize 415
snapstart 330
snapvol attribute 349, 355
super-block 414
SVID requirement
VxFS conformance to 110
synchronization
controlling for instant snapshots 367
improving performance of 368
synchronous I/O 292
syncing attribute 339, 367
syncpause 367
syncresume 367
syncstart 367
syncstop 367
syncwait 367
system failure recovery 29, 99
system performance
overall 155
T
tags
for tasks 602
listing for disks 643
removing from volumes 616
renaming 616
setting on disks 642
setting on volumes 616
specifying for online relayout tasks 610
specifying for tasks 602
target IDs
specifying to vxassist 146
target mirroring 148
targets
listing 193
task monitor in VxVM 602
tasks
aborting 604
changing state of 604
identifiers 602
listing 604
managing 603
modifying parameters of 604
monitoring 604
monitoring online relayout 611
pausing 604
resuming 604
specifying tags 602
specifying tags on online relayout operation 610
tags 602
temporary area used by online relayout 73
temporary directories 105
Thaw 101
thaw 294
thin provisioning
using 425
Thin Reclamation 431
thin reclamation
fsadm 436
thin storage
using 425
third-mirror
snapshots 82
third-mirror break-off snapshots
creating 351
third-party driver (TPD) 189
throttling 41
tmplog mount option 160
TPD
displaying path information 219
support for coexistence 189
tpdmode attribute 267
trigger point in striped-mirror volumes 143
tunable I/O parameters
Volume Manager maximum I/O size 722
tunables
changing values of 731, 754
dmp_cache_open 723
dmp_daemon_count 724
dmp_delayq_interval 724
dmp_fast_recovery 724
dmp_health_time 724
dmp_log_level 725
dmp_low_impact_probe 725
dmp_lun_retry_timeout 726
dmp_monitor_fabric 726
dmp_monitor_osevent 727
dmp_monitor_ownership 727
dmp_native_support 727
dmp_path_age 727
dmp_pathswitch_blks_shift 728
dmp_probe_idle_lun 728
dmp_probe_threshold 728
dmp_restore_cycles 729
dmp_restore_interval 729
dmp_restore_state 730
dmp_scsi_timeout 730
dmp_sfg_threshold 730
dmp_stat_interval 730
FastResync/cache object metadata cache size 750
tunables (continued)
maximum vol_stats_enable 742
vol_checkpt_default 740
vol_default_iodelay 740
vol_fmr_logsz 85, 747
vol_max_vol 740
vol_maxio 741
vol_maxioctl 741
vol_maxparallelio 741
vol_maxspecialio 742
vol_subdisk_num 742
voldrl_max_drtregs 748
voldrl_max_seq_dirty 79, 748
voldrl_min_regionsz 748
voliomem_chunk_size 743
voliomem_maxpool_sz 743
voliot_errbuf_dflt 744
voliot_iobuf_default 744
voliot_iobuf_limit 744
voliot_iobuf_max 744
voliot_max_open 745
volpagemod_max_memsz 750
volraid_minpool_size 745
volraid_rsrtransmax 745
Tuning DMP
using templates 731
tuning VxFS 720
U
UDID flag 95
udid_mismatch flag 95
umount command 168
unencapsulating the root disk 699
uninitialized storage, clearing 161
unmount 390
a snapped file system 327
Upgrading
ISP disk group 657
use_all_paths attribute 237
use_avid
vxddladm option 264
user-specified device names 265
utility
vxtune 755, 757
V
V-5-1-2829 617
V-5-1-552 626
V-5-1-587 633
V-5-2-3091 594
V-5-2-369 627
V-5-2-4292 594
version 0
of DCOs 89
Version 10 disk layout 760
version 20
of DCOs 89–90
Version 6 disk layout 760
Version 7 disk layout 760
Version 8 disk layout 760
Version 9 disk layout 760
versioning
of DCOs 89
versions
disk group 622
displaying for disk group 622
upgrading 622
virtual disks 109
virtual objects 51
VM disks
displaying spare 550
excluding free space from hot-relocation use 553
making free space available for hot-relocation
use 554
postponing replacement 667
removing from pool of hot-relocation spares 552
renaming 283
vol_checkpt_default tunable 740
vol_default_iodelay tunable 740
vol_fmr_logsz tunable 85, 747
vol_max_vol tunable 740
vol_maxio tunable 741
vol_maxio tunable I/O parameter 722
vol_maxioctl tunable 741
vol_maxparallelio tunable 741
vol_maxspecialio tunable 742
vol_subdisk_num tunable 742
volbrk snapshot type 366
voldrl_max_drtregs tunable 748
voldrl_max_seq_dirty tunable 79, 748
voldrl_min_regionsz tunable 748
voliomem_chunk_size tunable 743
voliomem_maxpool_sz tunable 743
voliot_errbuf_dflt tunable 744
voliot_iobuf_default tunable 744
voliot_iobuf_limit tunable 744
voliot_iobuf_max tunable 744
volumes (continued)
flagged as dirty 77
layered 64, 70, 140
maximum number of 740
merging snapshots 377
mirrored 63, 139
mirrored-concatenated 63
mirrored-stripe 63, 140
mirroring across controllers 150
mirroring across targets 148
mirroring all 613
mirroring on disks 613
moving from VxVM disks 587
naming snap 338
performing online relayout 606
RAID-0 59
RAID-0+1 63
RAID-1 63
RAID-1+0 64
RAID-10 64
RAID-5 66, 139
reattaching plexes 660
reattaching version 0 DCOs to 383
recovering after correctable hardware failure 548
removing 662
removing from /etc/fstab 662
removing linked snapshots from 360
removing mirrors from 615
removing plexes from 615
removing snapshot mirrors from 360
removing version 0 DCOs from 383
restarting moved 601
restoring from instant snapshots 363
resynchronizing from snapshots 377
snapshots 81
spanned 57
specifying non-default number of columns 143
specifying non-default relayout 609
specifying non-default stripe unit size 143
specifying storage for version 0 DCO plexes 382
specifying use of storage to vxassist 146
stopping activity on 662
striped 59, 139
striped-mirror 64, 140
taking multiple snapshots 338
trigger point for mirroring in striped-mirror 143
types of layout 139
vx_allow_cloned_naio 723
VX_DSYNC 292
vxassist (continued)
specifying storage attributes 146
specifying storage for version 0 DCO plexes 382
specifying tags for online relayout tasks 610
taking snapshots of multiple volumes 376
vxcache
listing snapshots in a cache 369
resizing caches 370
starting cache objects 343
stopping a cache 371
tuning cache autogrow 370
vxcached
tuning 369
vxconfigd
managing with vxdctl 55
monitoring configuration changes 605
vxdco
dissociating version 0 DCOs from volumes 383
reattaching version 0 DCOs to volumes 383
removing version 0 DCOs from volumes 383
vxdctl
managing vxconfigd 55
setting default disk group 587
vxdctl enable
configuring new disks 185
invoking device discovery 189
vxddladm
adding disks to DISKS category 200
adding foreign devices 203
changing naming scheme 264
displaying the disk-naming scheme 264
listing all devices 192
listing configured devices 195
listing configured targets 194
listing excluded disk arrays 199–200
listing ports on a Host Bus Adapter 193
listing supported disk arrays 197
listing supported disks in DISKS category 199
listing supported HBAs 192
removing disks from DISKS category 190, 202–203
setting iSCSI parameters 195
used to exclude support for disk arrays 198
used to re-include support for disk arrays 198
vxdg
clearing locks on disks 633
controlling CDS compatibility of new disk
groups 626
correcting serial split brain condition 652
vxdg (continued)
creating disk groups 625
deporting disk groups 628
destroying disk groups 653
disabling a disk group 653
displaying boot disk group 586
displaying default disk group 586
displaying disk group version 622
displaying free space in disk groups 624
displaying information about disk groups 623
forcing import of disk groups 634
importing a disk group containing cloned
disks 640–641
importing cloned disks 643
importing disk groups 629
joining disk groups 600
listing disks with configuration database
copies 644
listing objects affected by move 594
listing spare disks 550
moving disk groups between systems 632
moving disks between disk groups 588
moving objects between disk groups 596
placing a configuration database on disks 644
recovering destroyed disk groups 653
removing disks from disk groups 626
renaming disk groups 644
setting base minor number 635
setting maximum number of devices 637
splitting disk groups 599
upgrading disk group version 622
vxdisk
clearing locks on disks 633
displaying information about disks 624
displaying multi-pathing information 209
listing disks 262
listing spare disks 551
listing tags on disks 643
notifying dynamic LUN expansion 257
placing a configuration database on a disk 644
scanning disk devices 185
setting tags on disks 642
updating the disk identifier 639
vxdisk scandisks
rescanning devices 186
scanning devices 186
vxdiskadd
creating disk groups 625
placing disks under VxVM control 279
vxdiskadm
Add or initialize one or more disks 271, 625
adding disks 271
changing the disk-naming scheme 263
creating disk groups 625
deporting disk groups 627
Enable access to (import) a disk group 629
Encapsulate one or more disks 675
Exclude a disk from hot-relocation use 553
excluding free space on disks from hot-relocation
use 553
importing disk groups 629
initializing disks 271
List disk information 262
listing spare disks 551
Make a disk available for hot-relocation use 554
making free space on disks available for
hot-relocation use 554
Mark a disk as a spare for a disk group 552
marking disks as spare 552
Mirror volumes on a disk 613
mirroring disks 680
mirroring root disks 693
mirroring volumes 613
Move volumes from a disk 587
moving disk groups between systems 634
moving disks between disk groups 588
moving subdisks from disks 627
moving volumes from VxVM disks 587
Remove a disk 281, 627
Remove a disk for replacement 667
Remove access to (deport) a disk group 627
removing disks from pool of hot-relocation
spares 553
Replace a failed or removed disk 670
Turn off the spare flag on a disk 553
vxdiskunsetup
removing disks from VxVM control 626, 663
vxdmpadm
changing TPD naming scheme 267
configuring an APM 250
configuring I/O throttling 244
configuring response to I/O errors 242, 246
disabling controllers in DMP 207
disabling I/O in DMP 240
displaying APM information 249
displaying DMP database information 207
displaying DMP node for a path 212, 214
displaying DMP node for an enclosure 212–213
vxdmpadm (continued)
displaying I/O error recovery settings 246
displaying I/O policy 233
displaying I/O throttling settings 246
displaying information about controllers 217
displaying information about enclosures 218
displaying partition size 233
displaying paths controlled by DMP node 215
displaying status of DMP restoration thread 249
displaying TPD information 219
enabling I/O in DMP 241
gathering I/O statistics 224
listing information about array ports 219
removing an APM 250
renaming enclosures 242
setting I/O policy 235–236
setting path attributes 231
setting restore polling interval 247
specifying DMP path restoration policy 247
stopping DMP restore daemon 248
vxdmpadm list
displaying DMP nodes 213
vxdump 181
vxedit
excluding free space on disks from hot-relocation
use 553
making free space on disks available for
hot-relocation use 554
marking disks as spare 551
removing a cache 371
removing disks from pool of hot-relocation
spares 552
removing instant snapshots 364
removing snapshots from a cache 371
removing volumes 663
renaming disks 284
vxencap
encapsulating the root disk 691
VxFS
storage allocation 155
vxfs_inotopath 718
vxfs_ninode 721
vxiod I/O kernel threads 48
vxlsino 718
vxmake
creating cache objects 342
creating plexes 612
vxmend
re-enabling plexes 660
vxmirror
configuring VxVM default behavior 613
mirroring root disks 693
mirroring volumes 613
vxnotify
monitoring configuration changes 605
vxplex
attaching plexes to volumes 612
converting plexes to snapshots 375
reattaching plexes 660
removing mirrors 615
removing mirrors of root disk volumes 700
removing plexes 615
vxprint
displaying DCO information 382
displaying snapshots configured on a cache 371
listing spare disks 551
verifying if volumes are prepared for instant
snapshots 340
viewing base minor number 635
vxrecover
recovering plexes 548
restarting moved volumes 601
vxrelayout
resuming online relayout 611
reversing direction of online relayout 612
viewing status of online relayout 611
vxrelocd
hot-relocation daemon 544
modifying behavior of 558
notifying users other than root 559
operation of 545
preventing from running 559
reducing performance impact of recovery 559
vxrestore 181
vxsnap
adding snapshot mirrors to volumes 359
administering instant snapshots 332
backing up multiple volumes 356
controlling instant snapshot synchronization 367
creating a cascaded snapshot hierarchy 360
creating full-sized instant snapshots 349, 355
creating linked break-off snapshot volumes 355
creating space-optimized instant snapshots 347
displaying information about instant
snapshots 365
dissociating instant snapshots 363
preparing volumes for instant snapshots 340
reattaching instant snapshots 361
vxsnap (continued)
reattaching linked third-mirror snapshots 362
refreshing instant snapshots 361
removing a snapshot mirror from a volume 360
restore 332
restoring volumes 363
splitting snapshot hierarchies 364
vxsplitlines
diagnosing serial split brain condition 651
vxstat
determining which disks have failed 548
vxtask
aborting tasks 605
listing tasks 604
monitoring online relayout 611
monitoring tasks 605
pausing online relayout 611
resuming online relayout 611
resuming tasks 605
vxtune
setting volpagemod_max_memsz 750
vxtune utility 755, 757
vxunreloc
listing original disks of hot-relocated subdisks 557
moving subdisks after hot-relocation 556
restarting after errors 558
specifying different offsets for unrelocated
subdisks 557
unrelocating subdisks after hot-relocation 556
unrelocating subdisks to different disks 556
vxunroot
removing rootability 700
unencapsulating the root disk 700
VxVM
configuration daemon 55
configuring to create mirrored volumes 613
dependency on operating system 48
disk discovery 187
granularity of memory allocation by 743
maximum number of subdisks per plex 742
maximum number of volumes 740
maximum size of memory pool 743
minimum size of memory pool 745
objects in 51
removing disks from 626
removing disks from control of 663
rootability 679
task monitor 602
types of volume layout 139
VxVM (continued)
upgrading 622
upgrading disk group version 622
VxVM disks
defined 52
marking as spare 551
mirroring volumes on 613
moving volumes from 587
vxvol
restarting moved volumes 601
setting read policy 152
stopping volumes 663
vxvset
adding volumes to volume sets 453
controlling access to raw device nodes 457
creating volume sets 452
creating volume sets with raw device access 456
listing details of volume sets 453
removing volumes from volume sets 453
starting volume sets 454
stopping volume sets 454
W
warning messages
Specified region-size is larger than the limit on
the system 340
writable Storage Checkpoints 388
write size 179