Cluster Management Using OnCommand System Manager: ONTAP 9
Contents
Supportability Dashboard.............................................................................................18
Managing clusters.......................................................................................................... 54
What a cluster is............................................................................................................................................................54
Understanding quorum and epsilon.............................................................................................................................. 54
What a node in the cluster is ........................................................................................................................................55
Dashboard window ...................................................................................................................................................... 55
Monitoring a cluster using the dashboard.........................................................................................................57
Applications..................................................................................................................................................................57
Configuration update.................................................................................................................................................... 57
Configuring the administration details of an SVM...........................................................................................57
Configuration Updates window........................................................................................................................ 58
Service Processors........................................................................................................................................................ 59
Assigning IP addresses to Service Processors.................................................................................................. 59
Cluster Management Using OnCommand System Manager iii
System Manager enables you to perform many common tasks such as the following:
• Create a cluster, configure a network, and set up support details for the cluster.
• Configure and manage storage objects such as disks, aggregates, volumes, qtrees, and quotas.
• Configure protocols such as CIFS and NFS, and provision file sharing.
• Configure protocols such as FC, FCoE, and iSCSI for block access.
• Create and configure network components such as subnets, broadcast domains, data and
management interfaces, and interface groups.
• Set up and manage mirroring and vaulting relationships.
• Perform cluster management, storage node management, and Storage Virtual Machine (SVM,
formerly known as Vserver) management operations.
• Create and configure SVMs, manage storage objects associated with SVMs, and manage SVM
services.
• Monitor and manage HA configurations in a cluster.
• Configure Service Processors to remotely log in, manage, monitor, and administer the node,
regardless of the state of the node.
Icons used in the application interface
You can click the filter icon to display only those entries that match the conditions
that you provide. You can then use the character filter (?) or string filter (*) to narrow your
search. The filter icon is displayed when you move the pointer over the column headings.
You can apply filters to one or more columns.
Note: When you apply filters to the physical size field or the usable size field, any value
that you enter without the unit suffix in these fields is considered to be in bytes. For
example, if you enter a value of 1000 without specifying the unit in the physical size
field, the value is automatically considered as 1000 bytes.
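The bytes-by-default rule described in the note can be sketched as follows. This is an illustrative Python sketch only, not System Manager's actual implementation; the function name and supported suffixes are assumptions.

```python
# Sketch of how a size filter value might be interpreted: a bare
# number is taken as bytes, while a unit suffix scales the value.
UNITS = {"": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_size(value: str) -> int:
    """Return the size in bytes for a filter value such as '1000' or '1 TB'."""
    text = value.strip().upper()
    number, suffix = text, ""
    for unit in sorted(UNITS, key=len, reverse=True):
        if unit and text.endswith(unit):
            number = text[: -len(unit)].strip()
            suffix = unit
            break
    return int(float(number) * UNITS[suffix])
```

For example, `parse_size("1000")` returns 1000 (bytes), while `parse_size("1 TB")` returns the full byte count of one tebibyte.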
You can click the column display icon to select which columns you want to display.
Customizing the layout
You can drag the bottom of the list of objects area up or down to resize the main areas of
the window. You can also display or hide the list of related objects and list of views panels.
You can drag the vertical dividers to resize the width of the columns or other areas of the
window.
Searching
You can use the search box to search for volumes, LUNs, qtrees, network interfaces,
Storage Virtual Machines (SVMs), aggregates, disks, or Ethernet ports, or all of these
objects. You can click the results to navigate to the exact location of the object.
Notes:
• When you search for objects whose names contain one or more of the { \ ? ^ > | characters, the
results are displayed correctly, but clicking them does not navigate to the correct row on the page.
• You must not use the question mark (?) character to search for an object.
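The character filter (?) and string filter (*) mentioned earlier behave like shell-style wildcards: ? matches exactly one character and * matches any run of characters. A rough sketch using Python's fnmatch as a stand-in (the sample volume names are invented for illustration):

```python
# '?' matches exactly one character; '*' matches any run of characters.
from fnmatch import fnmatchcase

volumes = ["vol1", "vol2", "vol10", "data_vol"]

# 'vol?' matches vol1 and vol2, but not vol10 (two trailing characters).
single = [v for v in volumes if fnmatchcase(v, "vol?")]

# 'vol*' matches every name starting with 'vol'.
many = [v for v in volumes if fnmatchcase(v, "vol*")]
```

Here `single` contains vol1 and vol2, while `many` also picks up vol10 but not data_vol.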
Supportability Dashboard
You can use the Supportability Dashboard to access product documentation and AutoSupport
tools, download software, and visit sites such as the Community and NetApp University for
additional information.
The Supportability Dashboard contains the following sources of information.
Community
Provides access to online collaborative resources on a range of NetApp products.
NetApp Support Site
Provides access to technical assistance, troubleshooting tools, and the Interoperability Matrix Tool.
NetApp University
Provides course material for learning about NetApp products.
Downloads
Provides access to NetApp firmware and software that you can download.
Documentation
Provides access to NetApp product documentation.
My AutoSupport
Provides access to the MyAutoSupport portal and the Manual AutoSupport Upload tool.
Where to find additional ONTAP information
Steps
1. Open the web browser, and then enter the node management IP address that you have
configured: https://node-management-IP
• If you have set up the credentials for the cluster, the Login page is displayed.
You must enter the credentials to log in.
• If you have not set up the credentials for the cluster, the Guided Setup window is
displayed.
2. Download the .xlsx template file or the .csv template file.
3. Provide all the required values in the template file, and save the file.
Note:
• Do not edit any column in the template other than the Value column.
• Do not change the version of the template file.
5. Click Upload.
The details that you have provided in the template file are used to complete the cluster setup
process.
6. Click the Guided Setup icon to view the details for the cluster.
7. Verify the details in the Cluster window, and then click Submit and Continue.
You can edit the cluster details, if required.
If you log in to the Cluster window a second time, the Feature Licenses field is enabled
by default. You can add new feature license keys or retain the pre-populated license keys.
8. Verify the details in the Network window, and then click Submit and Continue.
You can edit the network details, if required.
9. Verify the details in the Support window, and then click Submit and Continue.
You can edit the support details, if required.
10. Verify the details in the Storage window, and then create aggregates or exit the cluster setup:
13. Verify all the details in the Summary window, and then click Provision an Application to
provision storage for applications, or click Manage your Cluster to complete the cluster
setup process and launch System Manager, or click Export Configuration to download the
configuration file.
Creating a cluster
You can use OnCommand System Manager to create and set up a cluster in your data center.
About this task
If the cluster supports ONTAP 9.1 or later, you can add only those storage systems that are
running ONTAP 9.1 or later.
Steps
1. Open the web browser, and then enter the node management IP address that you have
configured: https://node-management-IP
• If you have set up the credentials for the cluster, the Login page is displayed.
You must enter the credentials to log in.
• If you have not set up the credentials for the cluster, the Guided Setup window is displayed.
Click the Guided Setup icon to set up a cluster.
2. In the Cluster page, enter a name for the cluster.
Note: If not all of the nodes are discovered, click Refresh.
The nodes in that cluster network are displayed in the Nodes field.
3. Optional: Update the node names in the Nodes field.
4. Enter the password for the cluster.
5. Optional: Enter the feature license keys.
6. Click Submit.
After you finish
Enter the network details in the Network page to continue with the cluster setup.
Related reference
Licenses window on page 70
Your storage system arrives from the factory with preinstalled software. If you want to add or
remove a software license after you receive the storage system, you can use the Licenses window.
Configuration Updates window on page 58
You can use the Configuration Updates window to update the configuration details of the cluster,
Storage Virtual Machine (SVM), and nodes.
Setting up a network
By setting up a network, you can manage your cluster, nodes, and Service Processors. You can
also set up DNS and NTP details by using the network window.
Before you begin
You must have set up the cluster.
Setting up your cluster environment
Option: You have a range of IP addresses in the same netmask
Description: Enter the IP address range, and then click Apply.
IP addresses are applied to the cluster management, node management, and Service
Processor management networks sequentially.
Option: You have a range of IP addresses in different netmasks
Description: Enter the IP address range in rows, and then click Apply.
The first IP address is applied to cluster management, and the other IP addresses are
applied to the node management and Service Processor management networks
sequentially.
Note: After entering the IP address range for cluster management, node management, and
Service Processor management, you must not manually modify the IP address values in
these fields. You must ensure that all the IP addresses are IPv4 addresses.
2. Enter the netmask and gateway details.
3. Select the port for cluster management in the Port field.
4. If the Port field in the node management is not populated with e0M, enter the port details.
Note: By default, the Port field displays e0M.
5. For Service Processor management, if you are overriding the default values, ensure that you
have entered the mandatory gateway details.
6. If you have enabled the DNS Details field, enter the DNS server details.
7. If you have enabled the NTP Details field, enter the NTP server details.
Note: Providing alternative NTP server details is optional.
8. Click Submit.
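The sequential assignment described in step 1 can be sketched as follows. This is an illustrative sketch only; the exact ordering (cluster management first, then one node-management and one Service Processor address per node) and the key names are assumptions, not ONTAP code.

```python
# Sketch: split one consecutive IPv4 range across cluster management,
# node management, and Service Processor management networks, in order.
import ipaddress

def assign_management_ips(start: str, nodes: list[str]) -> dict[str, str]:
    """Assign one cluster-management IP, then node-management and
    SP-management IPs for each node, consecutively from `start`."""
    first = ipaddress.IPv4Address(start)
    ips = [str(first + i) for i in range(1 + 2 * len(nodes))]
    assignment = {"cluster_mgmt": ips[0]}
    for i, node in enumerate(nodes):
        assignment[f"{node}_node_mgmt"] = ips[1 + i]
        assignment[f"{node}_sp_mgmt"] = ips[1 + len(nodes) + i]
    return assignment
```

For a two-node cluster starting at 192.168.1.10, this hands out .10 for cluster management, .11 and .12 for node management, and .13 and .14 for Service Processor management.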
After you finish
Enter AutoSupport message details and event notifications in the Support page to continue with
the cluster setup.
Related information
What is a Service Processor and how do I use it?
How to configure and troubleshoot NTP on clustered Data ONTAP 8.2 and later using CLI
NetApp Documentation: ONTAP 9
Creating an SVM
You can use the Storage Virtual Machine (SVM) window to create fully configured SVMs. The
SVMs serve data after storage objects are created on these SVMs.
Before you begin
• You must have created an aggregate, and the aggregate must be online.
• You must have ensured that the aggregate has sufficient space for the SVM root volume.
Steps
1. Enter a name for the SVM.
2. Select data protocols for the SVM:
3. Optional: Click the Advanced Options icon and provide details to configure advanced options
such as the default language, security style, CIFS server details, and NFS details.
Only HTTPS is supported for browser access to OnCommand System Manager.
If the cluster uses a self-signed digital certificate, the browser might display a warning
indicating that the certificate is not trusted. You can either acknowledge the risk and
continue, or install a Certificate Authority (CA) signed digital certificate on the cluster for
server authentication.
2. Log in to OnCommand System Manager by using your cluster administrator credentials.
Related information
NetApp Documentation: ONTAP 9
Adding licenses
If your storage system software was installed at the factory, System Manager automatically adds
the software to its list of licenses. If the software was not installed at the factory or if you want to
add additional software licenses, you can add the software license by using System Manager.
Before you begin
The software license code for the specific ONTAP service must be available.
About this task
• When you add a new license in a MetroCluster configuration, it is a best practice to add the
license on the surviving site cluster as well.
• You cannot use System Manager to add the ONTAP Cloud license.
The ONTAP Cloud license is not listed on the Licenses page. System Manager does not raise any
alert about the entitlement risk status of the ONTAP Cloud license.
• You can upload only capacity-based licenses.
Capacity-based license files are in JSON format.
Steps
1. Click the Configurations tab.
2. In the Cluster Settings pane, click Licenses.
3. In the Licenses window, click Add.
4. In the Add License dialog box, perform the appropriate steps:
To add a capacity-based license:
a. Click Browse, and then select the capacity-based license file.
b. Click Add.
Monitoring HA pairs
You can use System Manager to monitor the state and interconnect status of all the HA pairs in a
cluster. You can verify whether takeover or giveback is enabled or has occurred, and view reasons
why takeover or giveback is not currently possible.
Steps
1. Click the Configurations tab.
2. In the Cluster Settings pane, click High Availability.
3. In the High Availability window, click the HA pair image to view details such as the cluster
HA status, node status, interconnect status, and hardware model of each node.
If the cluster management LIF or the data LIFs of a node are not in their home node, a warning
message is displayed indicating that the node has some LIFs that are not in the home node.
Related reference
High Availability window on page 66
The High Availability window provides a pictorial representation of the HA state, interconnect
status, and takeover or giveback status of all the HA pairs in clustered Data ONTAP. You can also
manually initiate a takeover or giveback operation.
Creating IPspaces
You can create an IPspace by using System Manager to configure a single Data ONTAP cluster for
client access from more than one administratively separate network domain, even when the clients
use the same IP address subnet range. This enables you to separate client traffic for privacy and
security.
About this task
All IPspace names must be unique within a cluster and must not be names reserved by the
system, such as "local" or "localhost".
Steps
1. Click the Network tab.
2. In the IPspaces tab, click Create.
3. In the Create IPspaces dialog box, specify a name for the IPspace that you want to create.
4. Click Create.
Creating subnets
You can create a subnet by using System Manager to provide a logical subdivision of an IP
network to pre-allocate the IP addresses. A subnet enables you to create interfaces more easily by
specifying a subnet instead of an IP address and network mask values for each new interface.
Before you begin
You must have created the broadcast domain on which the subnet is used.
About this task
If you specify a gateway when creating a subnet, a default route to the gateway is added
automatically to the SVM when a LIF is created using that subnet.
Steps
1. Click the Network tab.
2. In the Subnets tab, click Create.
3. In the Create Subnet dialog box, specify subnet details, such as the name, subnet IP address,
subnet mask, range of IP addresses, gateway address, and broadcast domain.
You can specify the IP addresses as a range, as comma-separated multiple addresses, or as a
mix of both.
4. Click Create.
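The IP address field in step 3 accepts a range, comma-separated addresses, or a mix of both. A hedged sketch of parsing such a field (illustrative only; the function name and dash-range syntax are assumptions based on the description above):

```python
# Sketch: expand a field that mixes comma-separated IPv4 addresses and
# dashed ranges, e.g. "10.0.0.5, 10.0.0.10-10.0.0.12".
import ipaddress

def expand_ip_field(field: str) -> list[str]:
    result = []
    for part in field.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (ipaddress.IPv4Address(p.strip()) for p in part.split("-"))
            # Enumerate every address in the inclusive range.
            result.extend(str(ipaddress.IPv4Address(a))
                          for a in range(int(lo), int(hi) + 1))
        else:
            result.append(str(ipaddress.IPv4Address(part)))
    return result
```

A mixed field such as "10.0.0.5, 10.0.0.10-10.0.0.12" expands to four addresses.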
Related reference
Network window on page 103
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
4. In the Zero Spares dialog box, select a node or "All nodes" from which you want to zero the
disks.
5. Select the Zero all non-zeroed spares check box to confirm the zeroing operation.
6. Click Zero Spares.
Related information
Storage Recommendations
a. Specify the name of the aggregate, the disk type, and the number of disks or partitions to
include in the aggregate.
The minimum hot spare rule is applied to the disk group that has the largest disk size.
b. Optional: Modify the RAID configuration of the aggregate:
i. Click Change.
ii. In the Change RAID Configuration dialog box, specify the RAID type and the
RAID group size.
RAID-DP is the only supported RAID type for shared disks.
iii. Click Save.
c. If you want to mirror the aggregate, select the Mirror this aggregate check box.
For MetroCluster configurations, creating unmirrored aggregates is restricted; therefore, the
mirroring option is enabled by default.
4. Click Create.
Result
The aggregate is created with the specified configuration, and is added to the list of aggregates in
the Aggregates window.
Provisioning storage by creating a Flash Pool aggregate
You can use System Manager to create a Flash Pool aggregate, or to convert an existing HDD
aggregate to a Flash Pool aggregate by adding SSDs. When you create a new HDD aggregate, you
can provision an SSD cache to it and create a Flash Pool aggregate.
Before you begin
• You must be aware of platform-specific and workload-specific best practices for the Flash Pool
aggregate SSD tier size and configuration.
• All HDDs must be in zeroed state.
• If you want to add SSDs to the aggregate, you must ensure that all the existing and dedicated
SSDs are of the same size.
About this task
• You cannot use partitioned SSDs while creating the Flash Pool aggregate.
• You cannot mirror the aggregates if the cache source is storage pools.
• Starting with ONTAP 9.0, you can create aggregates with disk size equal to or larger than 10
TB.
• If the disk type of the aggregate disks is FSAS or MSATA, and the disk size is equal to or
larger than 10 TB, then RAID-TEC is the only option available for RAID type.
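The RAID-type rule in the last bullet can be sketched as a small lookup. This is an assumption-laden illustration: the doc states only that RAID-TEC becomes the sole option for FSAS or MSATA disks of 10 TB or larger; which types remain available for smaller disks varies by configuration, so the fallback list here is a placeholder.

```python
# Sketch of the RAID-type rule: FSAS or MSATA disks of 10 TB or larger
# permit only RAID-TEC; otherwise other RAID types remain selectable
# (the fallback list below is illustrative, not exhaustive).
TEN_TB = 10 * 1000**4  # disk capacities are quoted in decimal terabytes

def allowed_raid_types(disk_type: str, disk_size_bytes: int) -> list[str]:
    if disk_type in {"FSAS", "MSATA"} and disk_size_bytes >= TEN_TB:
        return ["RAID-TEC"]
    return ["RAID-DP", "RAID-TEC"]
```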
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. Click Create.
3. In the Create Aggregate dialog box, specify the name of the aggregate, the disk type, and the
number of HDD disks or partitions to include in the aggregate.
4. If you want to mirror the aggregate, select the Mirror this aggregate check box.
For MetroCluster configurations, creating unmirrored aggregates is restricted; therefore, the
mirroring option is enabled by default.
5. Click Use Flash Pool Cache with this aggregate.
6. Specify the cache source by choosing one of the following actions:
7. Click Create.
Result
The Flash Pool aggregate is created with the specified configuration, and is added to the list of
aggregates in the Aggregates window.
Related concepts
How storage pool works on page 136
A storage pool is a collection of SSDs. You can combine SSDs to create a storage pool, which
enables you to share the SSDs and SSD spares across multiple Flash Pool aggregates, at the same
time.
Related information
NetApp Technical Report 4070: Flash Pool Design and Implementation Guide
2. Click Create.
3. In the Create Aggregate dialog box, perform the following steps:
a. Specify the name of the aggregate, the disk type, and the number of disks or partitions to
include in the aggregate.
You cannot change the name of a SnapLock Compliance aggregate after you create it.
The minimum hot spare rule is applied to the disk group that has the largest disk size.
b. Optional: Modify the RAID configuration of the aggregate:
i. Click Change.
ii. In the Change RAID Configuration dialog box, specify the RAID type and the
RAID group size.
Shared disks support two RAID types: RAID-DP and RAID-TEC.
iii. Click Save.
c. Specify the SnapLock type.
d. If you have not initialized the system ComplianceClock, select the Initialize
ComplianceClock check box.
This option is not displayed if the ComplianceClock is already initialized on the node.
Note: Ensure that the current system time is correct. The ComplianceClock is set based
on the system clock, and once it is set, you cannot modify or stop the ComplianceClock.
e. Optional: If you want to mirror the aggregate, select the Mirror this aggregate check box.
For MetroCluster configurations, creating unmirrored aggregates is restricted; therefore, the
mirroring option is enabled by default.
The mirroring option is disabled for SnapLock Compliance aggregates.
4. Click Create.
Provisioning storage by creating a FabricPool
You can use System Manager to create a FabricPool or to convert an existing SSD aggregate to a
FabricPool by attaching an external capacity tier to the SSD aggregate.
Before you begin
• You must have created an external capacity tier and attached it to the cluster in which the SSD
aggregate resides.
• An on-premises external capacity tier must be present.
• A dedicated network connection must exist between the external capacity tier and the
aggregate.
About this task
• Supported external capacity tiers are StorageGRID Webscale and Amazon AWS S3.
Note: If you want to use Amazon AWS S3 as an external capacity tier, you must have the
FabricPool capacity license.
• FabricPool is not supported on ONTAP Select and MetroCluster configurations.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. Click Create.
3. In the Create Aggregate dialog box, perform the following steps:
a. Specify the name of the aggregate, the disk type, and the number of disks or partitions to
include in the aggregate.
Note: Only all flash (all SSD) aggregates support FabricPool.
The minimum hot spare rule is applied to the disk group that has the largest disk size.
b. Optional: Modify the RAID configuration of the aggregate:
i. Click Change.
ii. In the Change RAID Configuration dialog box, specify the RAID type and the
RAID group size.
RAID-DP is the only supported RAID type for shared disks.
iii. Click Save.
4. Select the FabricPool box, and then select an external capacity tier from the list.
5. Click Create.
Creating SVMs
You can use System Manager to create fully configured Storage Virtual Machines (SVMs) that can
serve data immediately. A cluster can have one or more SVMs with FlexVol volumes.
Before you begin
• The cluster must have at least one non-root aggregate in the online state.
• The aggregate must have sufficient space for the SVM root volume.
• You must have synchronized the time across the cluster by configuring and enabling NTP to
prevent CIFS creation and authentication failures.
• Protocols that you want to configure on the SVM must be licensed.
• You must have configured the CIFS protocol for secure DDNS to work.
About this task
• While creating SVMs, you can perform the following tasks:
◦ Create and fully configure SVMs.
◦ Configure the volume type allowed on SVMs.
◦ Create and configure SVMs with minimal network configuration.
◦ Delegate the administration to SVM administrators.
• To name the SVM, you can use alphanumeric characters and the following special characters:
"." (period), "-" (hyphen), and "_" (underscore).
The SVM name should start with an alphabetic character or "_" (underscore) and must not
contain more than 47 characters.
Note: You should use unique fully qualified domain names (FQDNs) for the SVM name
such as vs0.example.com.
• You can establish SnapMirror relationships only between volumes that have the same language
settings.
The language of the SVM determines the character set that is used to display file names and
data for all NAS volumes in the SVM.
• You cannot use a SnapLock aggregate as the root aggregate of SVMs.
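The SVM naming rules above can be captured in a single regular expression. A minimal sketch (the function name is illustrative; the pattern encodes exactly the rules stated: alphanumerics plus period, hyphen, and underscore, starting with a letter or underscore, at most 47 characters):

```python
# Sketch of SVM name validation: first character is a letter or '_',
# the rest are alphanumerics or '.', '-', '_', total length <= 47.
import re

SVM_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]{0,46}$")

def is_valid_svm_name(name: str) -> bool:
    return bool(SVM_NAME.fullmatch(name))
```

An FQDN-style name such as vs0.example.com passes, while a name starting with a digit or one longer than 47 characters is rejected.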
Steps
1. Click the SVMs tab.
2. Click Create.
3. In the Storage Virtual Machine (SVM) Setup window, specify details such as the following:
• SVM name
• IPspace allocated to the SVM
• Volume type allowed
• Protocols allowed
• SVM language
• Security style of the root volume
• Root aggregate
The default language setting for any SVM is C.UTF-8.
By default, the aggregate with the maximum free space is selected as the container for root
volume of the SVM. Based on the protocols selected, the default security style and the root
aggregate are selected.
The security style is set to NTFS if you select CIFS protocol or a combination of CIFS
protocol with the other protocols. The security style is set to UNIX if you select NFS, iSCSI,
or FC/FCoE or a combination of these protocols.
In a MetroCluster configuration, only the aggregates that are contained in the cluster are
displayed.
4. Specify the DNS domain names and the name server IP addresses to configure the DNS
services.
The default values are selected from the existing SVM configurations.
5. Optional: When configuring a data LIF to access data using a protocol, specify the target alias,
subnets, and the number of LIFs per node details.
You can select the Review or Modify LIFs configuration (Advanced Settings) check box to
modify the number of portsets in the LIF.
You can edit the details of the portset in a particular node by selecting the node from the nodes
list in the details area.
6. Optional: Enable host-side applications such as SnapDrive and SnapManager for the SVM
administrator by providing the SVM credentials.
7. Optional: Create a new LIF for SVM management by clicking Create a new LIF for SVM
management, and then specify the portsets and the IP address with or without a subnet for the
new management LIF.
For CIFS and NFS protocols, data LIFs have management access by default. You must create a
new management LIF only if required. For iSCSI and FC protocols, a dedicated SVM
management LIF is required because data and management protocols cannot share the same
LIF.
8. Click Submit & Continue.
The SVM is created with the specified configuration.
Result
The SVM that you created is started automatically. The root volume name is automatically
generated as SVM name_root. By default, the vsadmin user account is created and is in the
locked state.
After you finish
• You must configure at least one protocol on the SVM to allow data access.
LIFs to the portsets. LIFs are created on the most suitable adapters and assigned to portsets to
ensure data path redundancy.
Before you begin
• The iSCSI license must be enabled on the cluster.
If the protocol is not allowed on the SVM, you can use the Edit Storage Virtual Machine
window to enable the protocol for the SVM.
• All the nodes in the cluster must be healthy.
• Each node must have at least two data ports and the port state must be up.
About this task
• You can configure the iSCSI protocol while creating the SVM or you can do so at a later time.
• SnapLock aggregates are not considered for automatically creating volumes.
Steps
1. If you have not configured the protocols while creating the SVM, click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the Protocols pane, click iSCSI.
5. Optional: In the Network Access section, specify an alias for the iSCSI target.
The maximum number of characters for an alias name is 128. If you do not specify a target
alias, the SVM name is used as an alias.
6. Specify the number of iSCSI LIFs that can be assigned to a single node.
The minimum number of LIFs per node is one. The maximum number is the minimum of all
the ports in the up state across the nodes. If the maximum value is an odd number, the
previous even number is considered as the maximum value. You can choose any even number
in the minimum and maximum value range.
For example, in a 4-node cluster in which node1, node2, and node3 each have 6 ports in the
up state, and node4 has 7 ports in the up state, the effective maximum value for the cluster is 6.
If the number of LIFs that you want to assign to the node is more than 2, you must assign at
least one portset to each LIF.
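The effective-maximum computation in step 6 can be sketched in a few lines (illustrative only; the function name is an assumption):

```python
# Sketch of the per-node LIF maximum: take the smallest count of 'up'
# ports across the nodes, and round down to an even number if odd.
def effective_max_lifs(up_ports_per_node: dict[str, int]) -> int:
    maximum = min(up_ports_per_node.values())
    if maximum % 2:  # odd: use the previous even number
        maximum -= 1
    return maximum
```

With the example above (three nodes with 6 up ports and one with 7), the minimum is 6, which is already even, so the effective maximum is 6.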
7. Specify the network details, including the subnet details, to create iSCSI LIFs:
2. If you want a dedicated LIF for SVM management, select Create a LIF for SVM
management, and then specify the network details.
A dedicated SVM management LIF is required for SAN protocols, where data and
management protocols cannot share the same LIF. SVM management LIFs can be created only
on data ports.
3. Specify the network details, including subnet details, to create iSCSI LIFs:
11. Select Default, Thin provisioned, or Thick provisioned for the volume.
When thin provisioning is enabled, space is allocated to the volume from the aggregate only
when data is written to it.
Note:
• For All Flash FAS (AFF) storage systems, "Default" means thin provisioned; for other
storage systems, "Default" means thick provisioned.
• For FabricPool, the value of thin provisioning is "Default".
12. If you want to enable deduplication on this volume, make the necessary changes in the
Storage Efficiency tab.
System Manager uses the default deduplication schedule. If the specified volume size
exceeds the limit required for running deduplication, the volume is created and deduplication
is not enabled.
For All Flash Optimized personality systems, inline compression is enabled by default.
13. If you want to enable storage QoS for the FlexVol volume to manage workload performance,
select the Manage Storage Quality of Service check box in the Quality of Service tab.
14. Create a new storage QoS policy group or select an existing policy group to control the input/
output (I/O) performance of the FlexVol volume:
Select an existing policy group a. Select Existing Policy Group, and then click Choose to select an existing
policy group from the Select Policy Group dialog box.
b. Specify the minimum throughput limit.
If you do not specify the minimum throughput value or when the minimum
throughput value is set to 0, the system automatically displays "None" as
the value and this value is case-sensitive.
c. Specify the maximum throughput limit to ensure that the workload of the
objects in the policy group does not exceed the specified throughput limit.
• The minimum throughput limit and the maximum throughput limit must
be of the same unit type.
• If you do not specify the minimum throughput limit, then you can set
the maximum throughput limit in IOPS or in B/s, KB/s, MB/s, and so on.
• If you do not specify the maximum throughput value, the system
automatically displays "Unlimited" as the value. This value is case-
sensitive. The unit that you specify does not affect the maximum
throughput.
If the policy group is assigned to more than one object, the maximum
throughput that you specify is shared among the objects.
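The provisioning and QoS choices described above can also be sketched from the ONTAP CLI. The following fragment is illustrative only; the SVM, volume, aggregate, and policy-group names are hypothetical placeholders, and the commands assume an ONTAP 9 cluster shell session:

```
# Create a thin-provisioned volume: space is allocated from the aggregate
# only as data is written
volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 100g -space-guarantee none

# Create a QoS policy group with a maximum throughput limit, then assign
# it to the volume to cap its I/O
qos policy-group create -policy-group pg_vol1 -vserver svm1 -max-throughput 5000iops
volume modify -vserver svm1 -volume vol1 -qos-policy-group pg_vol1
```

If the policy group is later assigned to additional volumes, the configured maximum throughput is shared among them, as noted above.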
Managing clusters
You can use System Manager to manage clusters.
Related information
ONTAP concepts
What a cluster is
A cluster consists of one or more nodes grouped together as high-availability (HA) pairs to form a scalable cluster.
Creating a cluster enables the nodes to pool their resources and distribute work across the cluster,
while presenting administrators with a single entity to manage. Clustering also enables continuous
service to end users if individual nodes go offline.
• The maximum number of nodes within a cluster depends on the platform model and licensed
protocols.
• Each node in the cluster can view and manage the same volumes as any other node in the
cluster.
The total file-system namespace, which comprises all of the volumes and their resultant paths,
spans the cluster.
• The nodes in a cluster communicate over a dedicated, physically isolated and secure Ethernet
network.
The cluster logical interfaces (LIFs) on each node in the cluster must be on the same subnet.
• When new nodes are added to a cluster, there is no need to update clients to point to the new
nodes.
The existence of the new nodes is transparent to the clients.
• If you have a two-node cluster (a single HA pair), you must configure cluster high availability
(HA).
• You can create a cluster on a stand-alone node, called a single-node cluster.
This configuration does not require a cluster network, and enables you to use the cluster ports
to serve data traffic. However, nondisruptive operations are not supported on single-node
clusters.
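The cluster-level view described above can be inspected from the ONTAP CLI. This is an illustrative sketch of the relevant commands, not output captured from a live system:

```
# List the nodes in the cluster along with their health and eligibility
cluster show

# Display the cluster-wide identity (name, UUID, serial number)
cluster identity show
```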
two of the nodes have failed. However, because one of the surviving nodes holds epsilon, the
cluster remains in quorum even though there is not a simple majority of healthy nodes.
Epsilon is automatically assigned to the first node when the cluster is created. If the node that
holds epsilon becomes unhealthy, takes over its high-availability partner, or is taken over by its
high-availability partner, then epsilon is automatically reassigned to a healthy node in a different
HA pair.
Taking a node offline can affect the ability of the cluster to remain in quorum. Therefore, ONTAP
issues a warning message if you attempt an operation that will either take the cluster out of
quorum or else put it one outage away from a loss of quorum. You can disable the quorum warning
messages by using the cluster quorum-service options modify command at the
advanced privilege level.
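As a sketch, disabling the quorum warning messages from the CLI looks like the following; the `-ignore-quorum-warning-confirmations` parameter name is an assumption based on the command family cited above, so verify it against your ONTAP release before use:

```
# Quorum-service options are available only at the advanced privilege level
set -privilege advanced
cluster quorum-service options modify -ignore-quorum-warning-confirmations true
set -privilege admin
```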
In general, assuming reliable connectivity among the nodes of the cluster, a larger cluster is more
stable than a smaller cluster. The quorum requirement of a simple majority of half the nodes plus
epsilon is easier to maintain in a cluster of 24 nodes than in a cluster of two nodes.
A two-node cluster presents some unique challenges for maintaining quorum. Two-node clusters
use cluster HA, in which neither node holds epsilon; instead, both nodes are continuously polled to
ensure that if one node fails, the other has full read-write access to data, as well as access to
logical interfaces and management functions.
Dashboard window
The Dashboard window contains multiple panels that provide cumulative at-a-glance information
about your system and its performance.
You can use the Dashboard window to view information about important alerts and notifications,
efficiency and capacity of aggregates and volumes, the nodes that are available in a cluster, the
status of the nodes in a high-availability (HA) pair, the most active objects, and the performance
metrics of the cluster or a node.
Alerts and Notifications
Displays all alerts in red, such as emergency EMS events, offline node details, broken disk
details, license entitlements that are in high risk, and offline network port details. Displays
all notifications in yellow, such as health monitor notifications that occurred in the past 24
hours at the cluster level, license entitlements that are in medium risk, unassigned disk
details, the number of migrated LIFs, volume move operations that failed, and volume move
operations that required administrative intervention in the past 24 hours.
Cluster Management Using OnCommand System Manager 56
Managing clusters
The panel displays up to three alerts and notifications beyond which a View-All link is
displayed. You can click the View-All link to view more information about the alerts and
notifications.
The refresh interval for this panel is one minute.
Efficiency and Capacity
Displays the aggregates and volumes that are nearing capacity, and the storage efficiency of
the cluster or a node.
The Efficiency tab displays the storage efficiency savings for the cluster or a node. You can
view the total logical space used, total physical space used, overall savings from storage
efficiency, data reduction ratio, FlexClone volume ratio, and Snapshot copies ratio. You can
select the cluster or a specific node to view the storage efficiency savings.
Note: During a takeover operation or giveback operation, the storage efficiency data may
not be fully reported. In such cases, the reported storage efficiency data of these
operations is corrected after some time, depending on the number of Snapshot copies
across all the volumes in the nodes.
In the Aggregates tab, the graph displays the top five online aggregates that are nearing
capacity, in descending order of used space. You can click the View All link to navigate to
the Aggregates inventory page.
The Volumes tab displays the top three SVMs—including destination SVMs for disaster
recovery and SVMs in a locked state—that contain the volumes with the highest capacity
utilized when you enter a valid value in the "Volumes exceeding used capacity of" field.
You can click the View All link to view the Volumes dialog box, and then navigate to the
Volumes page.
The refresh interval for this panel is 15 minutes.
Nodes
Displays a pictorial representation of the number and names of the nodes that are available
in the cluster, and the status of the nodes that are in an HA pair. You must position the
cursor over the pictorial representation of the nodes to view the status of the nodes in an HA
pair.
You can view more information about all the nodes by using the Nodes link. You can also
click the pictorial representation to view the model of the nodes and the number of
aggregates, storage pools, shelves, and disks that are available in the nodes. You can
manage the nodes by using the Manage Nodes link. You can manage the nodes in an HA
pair by using the Manage HA link.
The refresh interval for this panel is 15 minutes.
Applications and Objects
The Applications tab displays information about the top five applications of the cluster. You
can view the top five applications based on capacity, from low to high or high to low. You
must click the specific bar chart to view more information about the application. You can
click View details to open the Applications window of the specific application.
The Objects tab displays information about the top five active clients and files in the cluster.
You can view the top five active clients and files based on IOPS or throughput.
The refresh interval for this panel is one minute.
Performance
Displays the average performance metrics, read performance metrics, and write
performance metrics of the cluster based on latency, IOPS, and throughput. The average
performance metrics is displayed by default. You can click Read or Write to view the read
or write performance metrics, respectively. You can view the performance metrics of the
cluster or a node.
If the information about cluster performance cannot be retrieved from ONTAP, you cannot
view the respective graph. In such cases, System Manager displays the specific error
message.
The refresh interval for the charts in this tab is 15 seconds.
Applications
Applications are predefined templates which can be used to create new configurations based on
existing application templates, and then use these descriptions to provision instances of the
application on ONTAP. You can create basic and enhanced applications.
You can create the basic and enhanced applications by clicking Applications or by navigating to
SVMs > Application Provisioning.
Note: The Application Provisioning tab is displayed only on All Flash FAS platforms.
Related information
ONTAP concepts
Configuration update
You can use System Manager to configure the administration details of Storage Virtual Machines
(SVMs).
A dedicated SVM management LIF is required for SAN protocols, where data and
management protocols cannot share the same LIF. SVM management LIFs can be created only
on data ports.
6. Specify the network details:
Command buttons
Edit Node Name
Opens the Edit Node Name dialog box, which enables you to modify the name of the node.
Configure Administration Details
Opens the Configure Administration Details dialog box, which enables you to configure the
administration details of the SVM.
Related tasks
Creating a cluster on page 22
You can use OnCommand System Manager to create and set up a cluster in your data center.
Setting up a network when an IP address range is disabled on page 24
You can set up a network by disabling an IP address range and entering individual IP addresses for
cluster management, node management, and service provider networks.
Service Processors
You can use a Service Processor to monitor and manage your storage system parameters, such as
temperature, voltage, current, and fan speed, through System Manager.
Global Settings
Opens the Global Settings dialog box, which allows you to configure the source of IP
address for all your Service Processors as one of the following: DHCP, subnet, or manual.
Refresh
Updates the information in the window.
Service processors list
Node
Specifies the node on which the Service Processor is located.
IP Address
Specifies the IP addresses of the Service Processor.
Status
Specifies the status of the Service Processor, which can be online, offline, daemon offline,
node offline, degraded, rebooted, or unknown.
MAC Address
Specifies the MAC address of the Service Processor.
Details area
The area below the Service Processor list displays detailed information about the Service
Processor, including network details, such as the IP address, network mask (IPv4) or prefix-length
(IPv6), gateway, IP source, and MAC address, as well as general details, such as the firmware
version and whether automatic update of the firmware is enabled.
Related tasks
Setting up a network when an IP address range is disabled on page 24
You can set up a network by disabling an IP address range and entering individual IP addresses for
cluster management, node management, and service provider networks.
Cluster peers
You can use System Manager to peer two clusters so that the peered clusters can coordinate and
share resources between them.
Related information
Data protection using SnapMirror and SnapVault technology
The intercluster network must be configured so that cluster peers have pair-wise full-mesh
connectivity within the applicable IPspace, which means that each pair of clusters in a cluster peer
relationship has connectivity among all of their intercluster LIFs.
A cluster’s intercluster LIFs have an IPv4 address or an IPv6 address.
Port requirements
The ports that are used for intercluster communication must meet the following requirements:
• All ports that are used to communicate with a given remote cluster must be in the same
IPspace.
You can use multiple IPspaces to peer with multiple clusters. Pair-wise full-mesh connectivity
is required only within an IPspace.
• The broadcast domain that is used for intercluster communication must include at least two
ports per node so that intercluster communication can fail over from one port to another port.
Ports added to a broadcast domain can be physical network ports, VLANs, or interface groups
(ifgrps).
• All ports must be cabled.
• All ports must be in a healthy state.
• The MTU settings of the ports must be consistent.
• You must decide whether the ports that are used for intercluster communication are shared with
data communication.
Firewall requirements
Firewalls and the intercluster firewall policy must allow the following protocols:
• ICMP service
• TCP to the IP addresses of all the intercluster LIFs over the ports 10000, 11104, and 11105
• HTTPS
The default intercluster firewall policy allows access through the HTTPS protocol and from
all IP addresses (0.0.0.0/0), but the policy can be altered or replaced.
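The default intercluster firewall policy mentioned above can be inspected from the CLI. This is an illustrative fragment; the policy name `intercluster` is the documented default, but confirm the output against your release:

```
# Display the firewall policy applied to intercluster LIFs
# (by default, HTTPS is allowed from all IP addresses, 0.0.0.0/0)
system services firewall policy show -policy intercluster
```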
Cluster requirements
Clusters must meet the following requirements:
• The time on the clusters in a cluster peering relationship must be synchronized within 300
seconds (5 minutes).
Cluster peers can be in different time zones.
Related information
NetApp Documentation: ONTAP 9
• In a MetroCluster configuration, when you create a peer relationship between the primary
cluster and an external cluster, it is a best practice to create a peer relationship between the
surviving site cluster and the external cluster as well.
Steps
1. Click the Configurations tab.
2. In the Cluster Settings pane, click Cluster Peers.
3. Click Create.
4. In the Details of the local cluster area, select the IPspace for the cluster peer relationship.
The operational intercluster interface for the selected IPspace is displayed.
5. Optional: If the node does not contain an operational intercluster interface, click Create
intercluster interface to configure the LIF for the node.
6. In the Details of the remote cluster to be peered area, specify a passphrase for the cluster
peer relationship.
The passphrase that you enter will be validated against the passphrase of the peered cluster to
ensure an authenticated cluster peer relationship.
By default, the minimum length of the passphrase is eight characters.
If the name of the local cluster is identical to the name of the remote cluster, or if the local
cluster is already in a peer relationship with another remote cluster of the same name, the
Enter Cluster Alias Name dialog box is displayed so that you can create an alias for the
remote cluster.
7. Enter an alias name for the remote cluster.
8. Enter the intercluster interface IP addresses for the remote cluster.
9. Click Create.
10. Log in to the remote cluster and perform the above steps to create a peer relationship between
the local and remote clusters.
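The steps above can be sketched as an equivalent CLI workflow. The IP addresses and IPspace name below are hypothetical placeholders; `cluster peer create` prompts interactively for the shared passphrase:

```
# On the local cluster: create the peer relationship, pointing at the
# remote cluster's intercluster LIF addresses
cluster peer create -peer-addrs 192.0.2.10,192.0.2.11 -ipspace Default

# On the remote cluster: repeat with the local cluster's intercluster
# LIF addresses, using the same passphrase
cluster peer create -peer-addrs 198.51.100.10,198.51.100.11 -ipspace Default

# Verify availability and authentication status of the peer relationship
cluster peer show
```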
Peers window
You can use the Peers window to manage peer relationships, which enable you to move data from
one cluster to another.
Command buttons
Create
Opens the Create Cluster Peering dialog box, which enables you to create a relationship
with a remote cluster.
Modify Passphrase
Opens the Modify Passphrase dialog box, which enables you to enter a new passphrase for
the local cluster.
Modify Peer Network Parameters
Opens the Modify Peer Network Parameters dialog box, which enables you to modify the
IPspace, add new intercluster IP addresses, or remove existing IP addresses.
You can add multiple IP addresses, separated by commas.
Delete
Opens the Delete Cluster Peer Relationship dialog box, which enables you to delete the
selected peer cluster relationship.
Refresh
Updates the information in the window.
Peer cluster list
Peer Cluster
Specifies the name of the peer cluster in the relationship.
Availability
Specifies whether the peer cluster is available for communication.
Authentication Status
Specifies whether the peer cluster is authenticated.
IPspace
Displays the IPspace associated with the cluster peer relationship.
Details area
The details area displays detailed information about the selected peer cluster relationship,
including the active IP addresses discovered by the system to set up the intercluster network and
the last updated time.
High availability
You can use System Manager to create high availability (HA) pairs that provide hardware
redundancy that is required for nondisruptive operations and fault tolerance.
Related information
ONTAP concepts
Understanding HA pairs
HA pairs provide hardware redundancy that is required for nondisruptive operations and fault
tolerance and give each node in the pair the software functionality to take over its partner's storage
and subsequently give back the storage.
Command buttons
Refresh
Updates the information in the window.
Note: Information displayed in the High Availability window is automatically refreshed
every 60 seconds.
Related tasks
Monitoring HA pairs on page 34
You can use System Manager to monitor the state and interconnect status of all the HA pairs in a
cluster. You can verify whether takeover or giveback is enabled or has occurred, and view reasons
why takeover or giveback is not currently possible.
Licenses
You can use System Manager to view, manage, or delete any software licenses installed on a
cluster or node.
Related information
System administration
Deleting licenses
You can use the Licenses window in System Manager to delete any software license installed on a
cluster or a node.
Before you begin
The software license you want to delete must not be used by any service or feature.
Steps
1. Click the Configurations tab.
2. In the Cluster Settings pane, click Licenses.
3. In the Licenses window, perform the appropriate action:
You can find license keys for your initial or add-on software orders at the NetApp Support Site
under My Support > Software Licenses (login required). If you cannot locate your license keys
from the Software Licenses page, contact your sales or support representative.
ONTAP enables you to manage feature licenses in the following ways:
• Add one or more license keys (system license add)
• Display information about installed licenses (system license show)
• Display the packages that require licenses and their current license status on the cluster
(system license status show)
• Delete a license from the cluster or a node whose serial number you specify (system
license delete)
The cluster base license is required for the cluster to operate. ONTAP does not enable you to
delete it.
• Display or remove expired or unused licenses (system license clean-up)
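The license management commands listed above can be combined into a short session. The license code below is a placeholder, not a valid key:

```
# Add a license key obtained from the NetApp Support Site
system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA

# Display information about the installed licenses
system license show

# Remove licenses that are expired or no longer used by any feature
system license clean-up -unused true -expired true
```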
ONTAP enables you to monitor feature usage and license entitlement risk in the following ways:
• Display a summary of feature usage in the cluster on a per-node basis (system feature-
usage show-summary)
The summary includes counter information such as the number of weeks a feature was in use
and the last date and time the feature was used.
• Display feature usage status in the cluster on a per-node and per-week basis (system
feature-usage show-history)
The feature usage status can be not-used, configured, or in-use. If the usage information
is not available, the status shows not-available.
• Display the status of license entitlement risk for each license package (system license
entitlement-risk show)
The risk status can be low, medium, high, unlicensed, or unknown. The risk status is also
included in the AutoSupport message. License entitlement risk does not apply to the base
license package.
The license entitlement risk is evaluated by using a number of factors, which might include but
are not limited to the following:
◦ Each package's licensing state
◦ The type of each license, its expiry status, and the uniformity of the licenses across the
cluster
◦ Usage for the features associated with the license package
If the evaluation process determines that the cluster has a license entitlement risk, the
command output also suggests a corrective action.
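The monitoring commands cited above can be run as follows; output formats vary by release, so none is shown here:

```
# Summarize feature usage in the cluster on a per-node basis
system feature-usage show-summary

# Show feature usage status per node and per week
system feature-usage show-history

# Show the entitlement risk status for each license package
system license entitlement-risk show
```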
Related information
What are Data ONTAP 8.2 and 8.3 licensing overview and references?
How to verify Data ONTAP Software Entitlements and related License Keys using the Support
Site
NetApp: Data ONTAP Entitlement Risk Status
◦ If the site license is not installed, and the node-locked license is non-uniformly installed on
the nodes in a cluster
• No risk
There is no entitlement risk if a node-locked license is installed on all the nodes, irrespective of
the usage.
• Unknown
The risk is unknown when the API is unable to retrieve the entitlement risk data that is
associated with the cluster or the nodes in the cluster.
Licenses window
Your storage system arrives from the factory with preinstalled software. If you want to add or
remove a software license after you receive the storage system, you can use the Licenses window.
Note: System Manager does not monitor evaluation licenses and does not provide any warning
when an evaluation license is nearing expiry. An evaluation license is a temporary license that
expires after a certain period of time.
• Command buttons
• Packages tab
• Packages details area
• Details tab
Command buttons
Add
Opens the Add License window, which enables you to add new software licenses.
Delete
Deletes the software license that you select from the software license list.
Refresh
Updates the information in the window.
Packages tab
Displays information about the license packages that are installed on your storage system.
Package
Displays the name of the license package.
Entitlement Risk
Indicates the level of risk as a result of license entitlement issues for a cluster. The
entitlement risk level can be high risk, medium risk, no risk, unknown, or unlicensed.
Description
Displays the level of risk as a result of license entitlement issues for a cluster.
License Package details area
The area below the license packages list displays additional information about the selected license
package. This area includes information about the cluster or node on which the license is installed,
the serial number of the license, usage in the previous week, whether the license is installed, the
expiration date of the license, and whether the license is a legacy one.
Details tab
Displays additional information about the license packages that are installed on your storage
system.
Package
Displays the name of the license package.
Cluster/Node
Displays the cluster or node on which the license package is installed.
Serial Number
Displays the serial number of the license package that is installed on the cluster or node.
Type
Displays the type of the license package, which can be the following:
• Temporary: Specifies that the license is a temporary license, which is valid only during
the demonstration period.
• Master: Specifies that the license is a master license, which is installed on all the nodes
in the cluster.
• Node Locked: Specifies that the license is a node-locked license, which is installed on a
single node in the cluster.
• Capacity:
◦ For ONTAP Select, specifies that the license is a capacity license, which defines the
total amount of data capacity that the instance is licensed to manage.
◦ For FabricPool, specifies that the license is a capacity license, which defines the
amount of data that can be managed in the attached third-party storage (for example,
AWS).
State
Displays the state of the license package, which can be the following:
• Evaluation: Specifies that the installed license is an evaluation license.
• Installed: Specifies that the installed license is a valid purchased license.
• Warning: Specifies that the installed license is a valid purchased license and is
approaching maximum capacity.
• Enforcement: Specifies that the installed license is a valid purchased license and has
exceeded the expiry date.
• Waiting for License: Specifies that the license has not yet been installed.
Legacy
Displays whether the license is a legacy license.
Maximum Capacity
• For ONTAP Select, displays the maximum amount of storage that can be attached to the
ONTAP Select instance.
• For FabricPool, displays the maximum amount of third-party object store storage that
can be used as external tiered storage.
Current Capacity
• For ONTAP Select, displays the total amount of storage that is currently attached to the
ONTAP Select instance.
• For FabricPool, displays the total amount of third-party object store storage that is
currently used as external tiered storage.
Expiration Date
Displays the expiration date of the software license package.
Related tasks
Adding licenses on page 33
If your storage system software was installed at the factory, System Manager automatically adds
the software to its list of licenses. If the software was not installed at the factory or if you want to
add additional software licenses, you can add the software license by using System Manager.
Deleting licenses on page 67
You can use the Licenses window in System Manager to delete any software license installed on a
cluster or a node.
Creating a cluster on page 22
You can use OnCommand System Manager to create and set up a cluster in your data center.
Cluster Expansion
You can use System Manager to increase the size and capabilities of your storage by adding
compatible nodes to the cluster and configuring the node network details. You can also view the
summary of the nodes.
When you log in to System Manager, System Manager automatically detects compatible nodes
that have been cabled but have not been added to the cluster and prompts you to add the nodes.
You can add compatible nodes when System Manager detects them, or you can
manually add the nodes at a later time.
Steps
1. Adding nodes to a cluster on page 72
2. Configuring the network details of the nodes on page 73
Cluster update
You can use System Manager to update a cluster or individual nodes in HA pairs.
You can use System Manager to update a cluster nondisruptively to a specific Data ONTAP
version. In a nondisruptive update, you have to select a Data ONTAP image, validate that your
cluster is ready for the update, and then perform the update.
Batch update
In a batch update, the cluster is separated into two batches, each of which contains
multiple HA pairs. A batch update is performed for a cluster that consists of eight or
more nodes. In such clusters, you can perform either a batch update or a rolling
update; by default, a batch update is performed.
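The select–validate–update flow described here maps onto the `cluster image` CLI commands. This is an illustrative sketch; the URL and target version are hypothetical placeholders:

```
# Download the target software image from a local web server and add it
cluster image package get -url http://webserver.example.com/image/ontap_image.tgz

# Run the pre-update validation checks against the target version
cluster image validate -version 9.1

# Start the automated nondisruptive update and monitor its progress
cluster image update -version 9.1
cluster image show-update-progress
```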
Related tasks
Updating the cluster nondisruptively on page 74
You can use System Manager to update a cluster or individual nodes in HA pairs that are running
Data ONTAP 8.3.1 to a specific version of ONTAP software without disrupting access to client
data.
Command buttons
Refresh
Updates the information in the window.
Select
You can select the version of the software image for the update.
• Cluster Version Details: Displays the current cluster version in use and the version
details of the nodes or HA pairs.
• Available Software Images: Enables you to select an already available software image
for the update. Alternatively, you can download a software image from the NetApp
Support Site and add it for the update.
Validate
You can view and validate the cluster against the software image version for the update. A
pre-update validation checks whether the cluster is in a state that is ready for an update. If
the validation is completed with errors, a table displays the status of the various components
and the required corrective action for the errors.
You can perform the update only when the validation is completed successfully.
Update
You can update all the nodes in the cluster or an HA pair in the cluster to the selected
version of the software image. While the update is in progress, you can choose to pause and
then either cancel or resume the update.
If an error occurs, the update is paused and an error message is displayed with the remedial
steps. You can choose to either resume the update after performing the remedial steps or
cancel the update. You can view the table with the node name, uptime, state, and Data
ONTAP version when the update is successfully completed.
Update History tab
◦ For redundancy and quality of time service, you should associate at least three external
NTP servers with the cluster.
◦ You can specify an NTP server by using its IPv4 or IPv6 address or fully qualified host
name.
◦ You can manually specify the NTP version (v3 or v4) to use.
By default, Data ONTAP automatically selects the NTP version that is supported for a
given external NTP server.
If the NTP version you specify is not supported for the NTP server, time exchange cannot
take place.
◦ At the advanced privilege level, you can specify an external NTP server that is associated
with the cluster to be the primary time source for correcting and adjusting the cluster time.
• You can display the NTP servers that are associated with the cluster (cluster time-
service ntp server show).
• You can modify the cluster's NTP configuration (cluster time-service ntp server
modify).
• You can disassociate the cluster from an external NTP server (cluster time-service ntp
server delete).
• At the advanced privilege level, you can reset the configuration by clearing all external NTP
servers' association with the cluster (cluster time-service ntp server reset).
A node that joins a cluster automatically adopts the NTP configuration of the cluster.
In addition to using NTP, Data ONTAP also enables you to manually manage the cluster time.
This capability is helpful when you need to correct erroneous time (for example, a node's time has
become significantly incorrect after a reboot). In that case, you can specify an approximate time
for the cluster until NTP can synchronize with an external time server. The time you manually set
takes effect across all nodes in the cluster.
You can manually manage the cluster time in the following ways:
• You can set or modify the time zone, date, and time on the cluster (cluster date modify).
• You can display the current time zone, date, and time settings of the cluster (cluster date
show).
Note: Job schedules do not adjust to manual cluster date and time changes. These jobs are
scheduled to run based on the current cluster time when the job was created or when the job
most recently ran. Therefore, if you manually change the cluster date or time, you must use the
job show and job history show commands to verify that all scheduled jobs are queued
and completed according to your requirements.
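The NTP and manual time-management tasks above can be sketched from the CLI as follows; the server names, time zone, and date are hypothetical placeholders:

```
# Associate at least three external NTP servers for redundancy
cluster time-service ntp server create -server 0.pool.ntp.org
cluster time-service ntp server create -server 1.pool.ntp.org
cluster time-service ntp server create -server 2.pool.ntp.org
cluster time-service ntp server show

# Manually set an approximate time (applies to all nodes in the cluster)
# until NTP can synchronize with an external time server
cluster date modify -timezone America/New_York -date "01/01/2017 12:00:00"
cluster date show
```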
Network Time Protocol (NTP) Support
You can set up a network by disabling an IP address range and entering individual IP addresses for
cluster management, node management, and service provider networks.
SNMP
You can use System Manager to configure SNMP to monitor SVMs in your cluster.
Related information
Network and LIF management
Steps
1. Click the Configurations tab.
2. In the Services pane, click SNMP.
3. In the SNMP window, click Edit.
4. In the Edit SNMP Settings dialog box, select the Trap hosts tab, and either select or clear the
Enable traps check box.
5. If you enable SNMP traps, add the host name or IP address of the hosts to which the traps are
sent.
6. Click OK.
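The equivalent trap-host and community configuration can be sketched from the CLI. The community name and trap host below are placeholders, and the exact parameter names (for example, `-peer-address`) should be confirmed against your ONTAP release:

```
# Add a read-only SNMP community and a trap host, then verify the settings
system snmp community add -type ro -community-name public
system snmp traphost add -peer-address traphost.example.com
system snmp community show
system snmp traphost show
```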
Related reference
SNMP window on page 82
The SNMP window enables you to view the current SNMP settings for your system. You can also
change your system's SNMP settings, enable SNMP protocols, and add trap hosts.
SNMP network management workstations, or managers, can query the SVM SNMP agent for
information. The SNMP agent gathers information and forwards it to the SNMP managers. The
SNMP agent also generates trap notifications whenever specific events occur. The SNMP agent on
the SVM has read-only privileges; it cannot be used for any set operations or for taking a
corrective action in response to a trap. Data ONTAP provides an SNMP agent compatible with
SNMP versions v1, v2c, and v3. SNMPv3 offers advanced security by using passphrases and
encryption.
For more information about SNMP support in clustered Data ONTAP systems, see TR-4220 on
the NetApp Support site.
mysupport.netapp.com
SNMP window
The SNMP window enables you to view the current SNMP settings for your system. You can also
change your system's SNMP settings, enable SNMP protocols, and add trap hosts.
Command buttons
Enable/Disable
Enables or disables SNMP.
Edit
Opens the Edit SNMP Settings dialog box, which enables you to specify the SNMP
communities for your storage system and enable or disable traps.
Test Trap Host
Sends a test trap to all the configured hosts to check whether the test trap reaches all the
hosts and whether the configurations for SNMP are set correctly.
Refresh
Updates the information in the window.
Details
The details area displays the following information about the SNMP server and host traps for your
storage system:
SNMP
Displays whether SNMP is enabled.
Traps
Displays whether SNMP traps are enabled.
Location
Displays the address of the SNMP server.
Contact
Displays the contact details for the SNMP server.
Trap host IP Address
Displays the IP addresses of the trap hosts.
Community Names
Displays the community name of the SNMP server.
Security Names
Displays the security style for the SNMP server.
Cluster Management Using OnCommand System Manager 83
Managing clusters
Related tasks
Setting SNMP information on page 80
You can use the Edit SNMP Settings dialog box in System Manager to update information about
the storage system location and contact personnel, and to specify the SNMP communities for
your system.
Enabling or disabling SNMP traps on page 80
SNMP traps enable you to monitor the health and state of various components of the storage
system. You can use the Edit SNMP Settings dialog box in System Manager to enable or disable
SNMP traps on your storage system.
LDAP
You can use System Manager to configure an LDAP server that centrally maintains user
information.
Related tasks
Adding an LDAP client configuration on page 286
You can use System Manager to add an LDAP client configuration to use the LDAP services. An
LDAP server enables you to centrally maintain user information. You must first set up an LDAP
client to use LDAP services.
Deleting an LDAP client configuration on page 286
You can delete an LDAP client configuration when you do not want any Storage Virtual Machine
(SVM) to be associated with it by using System Manager.
Editing an LDAP client configuration on page 287
You can edit an LDAP client configuration using the Edit LDAP Client window in System
Manager.
Data ONTAP supports LDAP for user authentication, file access authorization, and user lookup
and mapping services between NFS and CIFS.
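As a rough CLI equivalent of the System Manager tasks above, an LDAP client configuration might be created and then enabled for an SVM as follows; the SVM name vs1, the configuration name ldapclient1, and the domain example.com are assumptions for illustration:

```
cluster1::> vserver services name-service ldap client create -vserver vs1 -client-config ldapclient1 -ad-domain example.com -schema AD-IDMU
cluster1::> vserver services name-service ldap create -vserver vs1 -client-config ldapclient1
```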
LDAP window
You can use the LDAP window to view LDAP clients for user authentication, file access
authorization, user search, and mapping services between NFS and CIFS.
The LDAP window is displayed as view-only at the cluster level. However, you can create, edit,
and delete LDAP clients from the Storage Virtual Machine (SVM) level.
Command button
Refresh
Updates the information in the window.
LDAP client list
Displays, in tabular format, details about LDAP clients.
LDAP Client Configuration
Displays the name of the LDAP client configuration that you specified.
Storage Virtual Machine
Displays the name of the SVM for each LDAP client configuration.
Active Directory Domain
Displays the Active Directory domain for each LDAP client configuration.
Active Directory Servers
Displays the Active Directory server for each LDAP client configuration.
Preferred Active Directory Servers
Displays the preferred Active Directory server for each LDAP client configuration.
Users
You can use System Manager to add, edit, and manage cluster user accounts, and to specify a
login method for accessing the storage system.
Roles
You can use an access-control role to control the level of access a user has to the system. In
addition to using the predefined roles, you can create new access-control roles, modify them,
delete them, or specify account restrictions for the users of a role.
Users window
You can use the Users window to manage user accounts, reset a user's password, or display
information about all user accounts.
Command buttons
Add
Opens the Add User dialog box, which enables you to add user accounts.
Edit
Opens the Modify User dialog box, which enables you to modify user login methods.
Note: It is best to use a single role for all access and authentication methods of a user
account.
Delete
Enables you to delete a selected user account.
Change Password
Opens the Change Password dialog box, which enables you to reset the user password.
Lock
Locks the user account.
Refresh
Updates the information in the window.
Users list
The area below the users list displays detailed information about the selected user.
User
Displays the name of the user account.
Account Locked
Displays whether the user account is locked.
User Login Methods area
Application
Displays the access method that a user can use to access the storage system. The supported
access methods include the following:
• System console (console)
• HTTP(S) (http)
• Data ONTAP API (ontapi)
• Service Processor (service-processor)
• SSH (ssh)
Authentication
Displays the default supported authentication method, which is "password".
Role
Displays the role of a selected user.
Roles
You can use System Manager to create access-controlled user roles.
Related information
Administrator authentication and RBAC
Adding roles
You can use System Manager to add an access-control role and specify the command or command
directory that the role's users can access. You can also control the level of access the role has to the
command or command directory and specify a query that applies to the command or command
directory.
Steps
1. Click the Configurations tab.
2. In the Cluster User Details pane, click Roles.
3. In the Roles window, click Add.
4. In the Add Role dialog box, type the role name and add the role attributes.
5. Click Add.
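The equivalent role can be created from the CLI; the role name vol_viewer, the access level, and the query shown here are hypothetical examples:

```
cluster1::> security login role create -role vol_viewer -cmddirname "volume" -access readonly -query "-volume vol1*"
cluster1::> security login role show -role vol_viewer
```

The query restricts the role to the objects it matches, in this sketch to volumes whose names begin with vol1.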
Editing roles
You can use System Manager to modify an access-control role's access to a command or command
directory and restrict a user's access to only a specified set of commands. You can also remove a
role's access to the default command directory.
Steps
1. Click the Configurations tab.
2. In the Cluster User Details pane, click Roles.
3. In the Roles window, select the role that you want to modify, and then click Edit.
4. In the Edit Role dialog box, modify the role attributes, and then click Modify.
5. Verify the changes that you made in the Roles window.
This role... Has this level of access... To the following commands or command directories
admin all All command directories (DEFAULT)
autosupport all • set
• system node autosupport
readonly volume
none security
Note: The autosupport role is assigned to the predefined autosupport account, used by
AutoSupport OnDemand. Data ONTAP prevents you from modifying or deleting the
autosupport account. It also prevents you from assigning the autosupport role to other user
accounts.
Roles window
You can use the Roles window to manage roles for user accounts.
Command buttons
Add
Opens the Add Role dialog box, which enables you to create an access-control role and
specify the command or command directory that the role's users can access.
Edit
Opens the Edit Role dialog box, which enables you to add or modify role attributes.
Refresh
Updates the information in the window.
Roles list
The roles list provides a list of roles that are available to be assigned to users.
Role Attributes area
The details area displays the role attributes, such as the command or command directory that the
selected role can access, the access level, and the query that applies to the command or command
directory.
Managing the network
IPspaces
You can use System Manager to create and manage IPspaces.
Related information
Network and LIF management
Editing IPspaces
You can use System Manager to rename an existing IPspace.
About this task
• All IPspace names must be unique within a cluster and must not consist of names reserved by
the system, such as local or localhost.
• The system-defined "Default" and "Cluster" IPspaces cannot be modified.
Steps
1. Click the Network tab.
2. In the IPspaces tab, select the IPspace that you want to modify, and then click Edit.
3. In the Edit IPspace dialog box, rename the IPspace.
4. Click Rename.
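The rename can also be performed from the CLI; the IPspace names ips1 and ips_renamed are illustrative:

```
cluster1::> network ipspace rename -ipspace ips1 -new-name ips_renamed
cluster1::> network ipspace show
```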
Deleting IPspaces
You can use System Manager to delete an IPspace when you no longer require it.
Before you begin
There must be no broadcast domains, network interfaces, peer relationships, or SVMs associated
with the IPspace that you want to delete.
About this task
The system-defined "Default" and "Cluster" IPspaces cannot be deleted.
Steps
1. Click the Network tab.
2. In the IPspaces tab, select the IPspace that you want to delete, and then click Delete.
3. Select the confirmation check box, and then click Yes.
Note: IPspaces support both IPv4 and IPv6 addresses on their routing domains.
If you are managing storage for a single organization, then you do not need to configure IPspaces.
If you are managing storage for multiple companies on a single ONTAP cluster, and you are
certain that none of your customers have conflicting networking configurations, then you also do
not need to use IPspaces. In many cases, Storage Virtual Machines (SVMs), with their own
distinct IP routing tables, can segregate unique networking configurations instead of
IPspaces.
Broadcast domains
You can use System Manager to create and manage broadcast domains.
Related information
Network and LIF management
Steps
1. Click the Network tab.
2. In the Broadcast Domains tab, select the broadcast domain that you want to modify, and then
click Edit.
3. In the Edit Broadcast Domain dialog box, make the necessary changes.
4. Click Save and Close.
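A comparable edit from the CLI might look like the following sketch; the broadcast domain name bd1, the IPspace Default, and the MTU value are assumptions:

```
cluster1::> network port broadcast-domain modify -ipspace Default -broadcast-domain bd1 -mtu 9000
cluster1::> network port broadcast-domain show -ipspace Default
```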
Related reference
Network window on page 103
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
If you have created unique IPspaces to separate client traffic, then you need to create a broadcast
domain in each of those IPspaces.
Subnets
You can use System Manager to manage subnets.
Editing subnets
You can use System Manager to modify subnet attributes, such as the name, subnet address, range
of IP addresses, and gateway address of the subnet.
About this task
• You cannot use System Manager to edit subnets in the Cluster IPspace.
You must use the command-line interface (CLI) instead.
• Modifying the gateway address does not update the route.
You must use the CLI to update the route.
Steps
1. Click the Network tab.
2. In the Subnets tab, select the subnet that you want to modify, and then click Edit.
You can modify the subnet even when the LIF in that subnet is still in use.
3. In the Edit Subnet dialog box, make the necessary changes.
4. Click Save and Close.
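Because System Manager cannot edit subnets in the Cluster IPspace and does not update routes when the gateway changes, the corresponding CLI steps might look like this sketch; the subnet name sub1, the SVM name vs1, and the addresses are illustrative:

```
cluster1::> network subnet modify -ipspace Default -subnet-name sub1 -gateway 192.0.2.1
cluster1::> network route create -vserver vs1 -destination 0.0.0.0/0 -gateway 192.0.2.1
```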
Related reference
Network window on page 103
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
Deleting subnets
You can use System Manager to delete a subnet when you no longer require the subnet and you
want to reallocate the IP addresses that were assigned to the subnet.
Before you begin
The subnet you want to delete must not have any LIFs using IP addresses from the subnet.
About this task
You cannot use System Manager to delete subnets in the Cluster IPspace. You must use the
command-line interface instead.
Steps
1. Click the Network tab.
2. In the Subnets tab, select the subnet that you want to delete, and then click Delete.
3. Select the confirmation check box, and then click Delete.
Related reference
Network window on page 103
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
Network interfaces
You can use System Manager to create and manage network interfaces.
Related information
ONTAP concepts
Network and LIF management
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
Migrating a LIF
You can use System Manager to migrate a data LIF or a cluster-management LIF to a different
port on the same node or a different node within the cluster, if the port is either faulty or requires
maintenance.
Before you begin
The destination node and ports must be operational and must be able to access the same network
as the source port.
About this task
• If you are removing the NIC from the node, you must migrate LIFs hosted on the ports
belonging to the NIC to other ports in the cluster.
• You cannot migrate iSCSI or FC LIFs.
Steps
1. Click the Network tab.
2. In the Network Interfaces tab, select the interface that you want to migrate, and then click
Migrate.
3. In the Migrate Interface dialog box, select the destination port to which you want to migrate
the LIF.
4. Optional: Select the Migrate Permanently check box to set the destination port as the new
home port for the LIF.
5. Click Migrate.
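The migration can be sketched from the CLI as follows; the SVM, LIF, node, and port names are assumptions. The second command corresponds to the Migrate Permanently option, because it changes the LIF's home port:

```
cluster1::> network interface migrate -vserver vs1 -lif lif1 -destination-node node2 -destination-port e0c
cluster1::> network interface modify -vserver vs1 -lif lif1 -home-node node2 -home-port e0c
```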
data LIF
A LIF that is associated with a Storage Virtual Machine (SVM) and is used for
communicating with clients.
You can have multiple data LIFs on a port. These interfaces can migrate or fail over
throughout the cluster. You can modify a data LIF to serve as an SVM management LIF by
modifying its firewall policy to mgmt.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers use data
LIFs.
intercluster LIF
A LIF that is used for cross-cluster communication, backup, and replication. You must
create an intercluster LIF on each node in the cluster before a cluster peering relationship
can be established.
These LIFs can only fail over to ports in the same node. They cannot be migrated or failed
over to another node in the cluster.
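Creating an intercluster LIF on a node might look like the following CLI sketch; the LIF name ic1, the node and port names, and the addresses are illustrative assumptions:

```
cluster1::> network interface create -vserver cluster1 -lif ic1 -role intercluster -home-node node1 -home-port e0c -address 192.0.2.20 -netmask 255.255.255.0
```

You would repeat this on each node in the cluster before establishing a cluster peering relationship.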
• FC LIFs can be configured only on FC ports; iSCSI LIFs cannot coexist with any other
protocols.
SAN administration
• NAS and SAN protocols cannot coexist on the same LIF.
• You must use valid characters that are supported in ONTAP for naming LIFs.
• You should avoid using characters that are outside the Unicode Basic Multilingual Plane.
Ethernet ports
You can use System Manager to create and manage Ethernet ports.
Related information
Network and LIF management
ONTAP concepts
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
Deleting VLANs
You can delete VLANs that are configured on network ports by using System Manager. You might
have to delete a VLAN before removing a NIC from its slot. When you delete a VLAN, it is
automatically removed from all the failover rules and groups that use the VLAN.
Before you begin
There must be no LIFs associated with the VLAN.
Steps
1. Click the Network tab.
2. Click Ethernet Ports.
3. Select the VLAN that you want to delete, and then click Delete.
4. Select the confirmation check box, and then click Delete.
Related reference
Network window on page 103
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
For example, e0b indicates that an Ethernet port is the second port on the node's motherboard.
VLANs must be named by using the syntax port_name-vlan-id. "port_name" specifies the
physical port or interface group and "vlan-id" specifies the VLAN identification on the network.
For example, e1c-80 is a valid VLAN name.
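Following that naming syntax, a VLAN might be created from the CLI as in this sketch; the node name node1 and VLAN tag 80 on port e1c are assumptions:

```
cluster1::> network port vlan create -node node1 -vlan-name e1c-80
cluster1::> network port vlan show -node node1
```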
For example, in this figure, if a member of VLAN 10 on Floor 1 sends a frame for a member of
VLAN 10 on Floor 2, Switch 1 inspects the frame header for the VLAN tag (to determine the
VLAN) and the destination MAC address. The destination MAC address is not known to Switch 1.
Therefore, the switch forwards the frame to all other ports that belong to VLAN 10, that is, port 4
of Switch 2 and Switch 3. Similarly, Switch 2 and Switch 3 inspect the frame header. If the
destination MAC address on VLAN 10 is known to either switch, that switch forwards the frame
to the destination. The end-station on Floor 2 then receives the frame.
FC/FCoE adapters
You can use System Manager to create and manage FC/FCoE adapters.
Related information
Network and LIF management
Network window
You can use the Network window to view the list of network components, such as subnets,
network interfaces, Ethernet ports, broadcast domains, FC/FCoE adapters, and IPspaces, and to
create, edit, or delete these components in your storage system.
• Tabs
• Subnet tab
• Network Interfaces tab
• Ethernet Ports tab
• Broadcast Domain tab
• FC/FCoE Adapters tab
• IPspaces tab
Tabs
Subnet
Enables you to view a list of subnets, and create, edit, or delete subnets from your storage
system.
Network Interfaces
Enables you to view a list of network interfaces, create, edit, or delete interfaces from your
storage system, migrate the LIFs, change the status of the interface, and send the interface
back to the home port.
Ethernet Ports
Enables you to view and edit the ports of a cluster, and create, edit, or delete interface
groups and VLAN ports.
Broadcast Domains
Enables you to view a list of broadcast domains, and create, edit, or delete domains from
your storage system.
FC/FCoE Adapters
Enables you to view the ports in a cluster, and edit the FC/FCoE adapter settings.
IPspaces
Enables you to view a list of IPspaces and broadcast domains, and create, edit, or delete an
IPspace from your storage system.
Subnet tab
Command buttons
Create
Opens the Create Subnet dialog box, which enables you to create new subnets that contain
configuration information for creating a network interface.
Edit
Opens the Edit Subnet dialog box, which enables you to modify certain attributes of a
subnet such as the name, subnet address, range of IP addresses, and gateway details.
Delete
Deletes the selected subnet.
Refresh
Updates the information in the window.
Subnet list
Name
Specifies the name of the subnet.
Subnet IP/Subnet mask
Specifies the subnet address details.
Gateway
Specifies the IP address of the gateway.
Available
Specifies the number of IP addresses available in the subnet.
Used
Specifies the number of IP addresses used in the subnet.
Total Count
Specifies the total number of IP addresses (available and used) in the subnet.
Broadcast domain
Specifies the broadcast domain to which the subnet belongs.
IPspace
Specifies the IPspace to which the subnet belongs.
Details area
The area below the subnet list displays detailed information about the selected subnet, including
the subnet range and a graph showing the available, used, and total number of IP addresses.
Network Interfaces tab
• For cluster LIFs and node management LIFs, you cannot use System Manager to perform the
following actions:
◦ Create, edit, delete, enable, or disable the LIFs
◦ Migrate the LIFs or send the LIFs back to the home port
• For cluster management LIFs, you can use System Manager to migrate the LIFs, or send the
LIFs back to the home port.
However, you cannot create, edit, delete, enable, or disable the LIFs.
• For intercluster LIFs, you can use System Manager to create, edit, delete, enable, or disable the
LIFs.
However, you cannot migrate the LIFs, or send the LIFs back to the home port.
• You cannot create, edit, or delete network interfaces in the following configurations:
◦ A MetroCluster configuration
◦ SVMs configured for disaster recovery (DR).
Command buttons
Create
Opens the Create Network Interface dialog box, which enables you to create network
interfaces and intercluster LIFs to serve data and manage SVMs.
Edit
Opens the Edit Network Interface dialog box, which you can use to enable management
access for a data LIF.
Delete
Deletes the selected network interface.
This button is enabled only if the data LIF is disabled.
Status
Opens the drop-down menu, which provides the option to enable or disable the selected
network interface.
Migrate
Enables you to migrate a data LIF or a cluster management LIF to a different port on the
same node or a different node within the cluster.
Send to Home
Enables you to host the LIF back on its home port.
This command button is enabled only when the selected interface is hosted on a non-home
port and when the home port is available.
This command button is disabled when any node in the cluster is down.
Refresh
Updates the information in the window.
Interface list
You can move the pointer over the color-coded icon to view the operational status of the interface:
• Green specifies that the interface is enabled.
• Red specifies that the interface is disabled.
Interface Name
Specifies the name of the network interface.
Storage Virtual Machine
Specifies the SVM to which the interface belongs.
IP Address/WWPN
Specifies the IP address or WWPN of the interface.
Current Port
Specifies the name of the node and port on which the interface is hosted.
Data Protocol Access
Specifies the protocol used to access data.
Management Access
Specifies whether management access is enabled on the interface.
Subnet
Specifies the subnet to which the interface belongs.
Role
Specifies the operational role of the interface, which can be data, intercluster, cluster,
cluster management, or node management.
Details area
The area below the interface list displays detailed information about the selected interface: failover
properties such as the home port, current port, speed of the ports, failover policy, failover group,
and failover state, and general properties such as the administrative status, role, IPspace, broadcast
domain, network mask, gateway, and DDNS status.
Ethernet Ports tab
Command buttons
Create Interface Group
Opens the Create Interface Group dialog box, which enables you to create interface groups by
choosing the ports, and determining the use of ports and network traffic distribution.
Create VLAN
Opens the Create VLAN dialog box, which enables you to create a VLAN by choosing an
Ethernet port or an interface group, and adding VLAN tags.
Edit
Opens one of the following dialog boxes:
• Edit Ethernet Port dialog box: Enables you to modify Ethernet port settings.
• Edit VLAN dialog box: Enables you to modify VLAN settings.
• Edit Interface Group dialog box: Enables you to modify interface groups.
You can only edit VLANs that are not associated with a broadcast domain.
Delete
Opens one of the following dialog boxes:
• Delete VLAN dialog box: Enables you to delete a VLAN.
• Delete Interface Group dialog box: Enables you to delete an interface group.
Refresh
Updates the information in the window.
Ports list
You can move the pointer over the color-coded icon to view the operational status of the port:
• Green specifies that the port is enabled.
• Red specifies that the port is disabled.
Port
Displays the port name of the physical port, VLAN port, or the interface group.
Node
Displays the node on which the physical interface is located.
Broadcast Domain
Displays the broadcast domain of the port.
IPspace
Displays the IPspace to which the port belongs.
Type
Displays the type of the interface such as interface group, physical interface, or VLAN.
Details area
The area below the ports list displays detailed information about the port properties.
Details tab
Displays administrative details and operational details.
As part of the operational details, the tab displays the health status of the ports. The ports
can be healthy or degraded. A degraded port is a port on which continuous network
fluctuations occur, or a port that has no connectivity to any other ports in the same
broadcast domain.
In addition, the tab also displays the interface name, SVM details, and IP address details of
the network interfaces that are hosted on the selected port. It also indicates whether the
interface is at the home port or not.
Performance tab
Displays performance metrics graphs of the Ethernet ports, including error rate and
throughput.
Changing the client time zone or the cluster time zone impacts the performance metrics
graphs. You should refresh your browser to view the updated graphs.
Broadcast Domain tab
Command buttons
Create
Opens the Create Broadcast Domain dialog box, which enables you to create new broadcast
domains to contain ports.
Edit
Opens the Edit Broadcast Domain dialog box, which enables you to modify the attributes of
a broadcast domain, such as the name, MTU size, and associated ports.
Delete
Deletes the selected broadcast domain.
Refresh
Updates the information in the window.
Broadcast domain list
Broadcast Domain
Specifies the name of the broadcast domain.
MTU
Specifies the MTU size.
IPspace
Specifies the IPspace.
Combined Port Update Status
Specifies the status of the port updates when you create or edit a broadcast domain. Any
errors in the port updates are displayed in a separate window, which you can open by
clicking the associated link.
Details area
The area below the broadcast domain list displays all the ports in a broadcast domain. In a non-
default IPspace, if a broadcast domain has ports with update errors, such ports are not displayed in
the details area. You can move the pointer over the color-coded icon to view the operational status
of the ports:
• Green specifies that the port is enabled.
• Red specifies that the port is disabled.
FC/FCoE Adapters tab
Command buttons
Edit
Opens the Edit FC/FCoE Settings dialog box, which enables you to modify the speed of the
adapter.
Status
Enables you to bring the adapter online or take it offline.
Refresh
Updates the information in the window.
FC/FCoE adapters list
WWNN
Specifies the unique identifier of the FC/FCoE adapter.
Node Name
Specifies the name of the node that is using the adapter.
Slot
Specifies the slot in which the adapter is located.
WWPN
Specifies the FC worldwide port name (WWPN) of the adapter.
Status
Specifies whether the status of the adapter is online or offline.
Speed
Specifies whether the speed settings are automatic or manual.
Details area
The area below the FC/FCoE adapters list displays detailed information about the selected
adapters.
Details tab
Displays adapter details such as the media type, port address, data link rate, connection
status, operation status, fabric status, and the speed of the adapter.
Performance tab
Displays performance metrics graphs of the FC/FCoE adapter, including IOPS and response
time.
Changing the client time zone or the cluster time zone impacts the performance metrics
graphs. You should refresh your browser to see the updated graphs.
IPspaces tab
Command buttons
Create
Opens the Create IPspace dialog box, which enables you to create a new IPspace.
Edit
Opens the Edit IPspace dialog box, which enables you to rename an existing IPspace.
Delete
Deletes the selected IPspace.
Refresh
Updates the information in the window.
IPspaces list
Name
Specifies the name of the IPspace.
Broadcast Domains
Specifies the broadcast domain.
Details area
The area below the IPspaces list displays the list of Storage Virtual Machines (SVMs) in the
selected IPspace.
Related tasks
Creating network interfaces on page 93
You can use System Manager to create a network interface or LIF to access data from Storage
Virtual Machines (SVMs), to manage SVMs, and to provide an interface for intercluster
connectivity.
Editing network interfaces on page 95
You can use System Manager to modify the network interface to enable management access for a
data LIF.
Deleting network interfaces on page 95
You can use System Manager to delete a network interface to free the IP address of the interface
and use the IP address for a different purpose.
Creating subnets on page 35
You can create a subnet by using System Manager to provide a logical subdivision of an IP
network to pre-allocate the IP addresses. A subnet enables you to create interfaces more easily by
specifying a subnet instead of an IP address and network mask values for each new interface.
Editing subnets on page 92
You can use System Manager to modify subnet attributes, such as the name, subnet address, range
of IP addresses, and gateway address of the subnet.
Deleting subnets on page 92
You can use System Manager to delete a subnet when you no longer require the subnet and you
want to reallocate the IP addresses that were assigned to the subnet.
Creating VLAN interfaces on page 99
You can create a VLAN for maintaining separate broadcast domains within the same network
domain by using System Manager.
Creating interface groups on page 99
You can use System Manager to create an interface group—single-mode, static multimode, or
dynamic multimode (LACP)—to present a single interface to clients by combining the capabilities
of the aggregated network ports.
Editing the FC/FCoE adapter speed on page 103
You can modify the FC/FCoE adapter speed setting by using the Edit FC/FCoE Adapter Settings
dialog box in System Manager.
Editing interface group settings on page 100
You can use System Manager to add ports to an interface group or remove ports from an interface
group, and modify the usage mode and load distribution pattern of the ports in the interface group.
Deleting VLANs on page 101
You can delete VLANs that are configured on network ports by using System Manager. You might
have to delete a VLAN before removing a NIC from its slot. When you delete a VLAN, it is
automatically removed from all the failover rules and groups that use the VLAN.
Creating broadcast domains on page 35
You can create a broadcast domain by using System Manager to provide a logical division of a
computer network. In a broadcast domain, all associated nodes can be reached through broadcasts
at the data link layer.
Editing broadcast domains on page 90
You can use System Manager to modify the attributes of a broadcast domain, such as the name, the
MTU size, and the ports associated with the broadcast domain.
Deleting broadcast domains on page 91
You can delete a broadcast domain by using System Manager when you no longer require the
broadcast domain.
Setting up a network when an IP address range is disabled on page 24
You can set up a network by disabling an IP address range and entering individual IP addresses for
cluster management, node management, and Service Processor networks.
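Several of the network tasks summarized above have ONTAP CLI equivalents. The following is an illustrative sketch only; the node, port, subnet, and broadcast-domain names are placeholders, and exact parameters can vary between ONTAP releases:

```shell
# Create a VLAN interface with VLAN ID 70 on port e0c (names are examples)
network port vlan create -node node1 -vlan-name e0c-70

# Create a dynamic multimode (LACP) interface group with IP-based load distribution
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp

# Create a subnet that pre-allocates a range of IP addresses for new interfaces
network subnet create -ipspace Default -subnet-name sub1 -broadcast-domain bd1 \
  -subnet 192.168.1.0/24 -ip-ranges 192.168.1.10-192.168.1.50
```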
Cluster Management Using OnCommand System Manager 111
Managing physical storage
Storage Tiers
You can use System Manager to create, edit, and delete external capacity tiers; attach an external
capacity tier to the existing aggregates; and create, edit, and delete aggregates.
You can use the local performance tier or the external capacity tier to store your data based on
whether the data is frequently accessed.
Related information
Disk and aggregate management
Steps
1. Click the Storage Tiers tab.
2. Click Add External Capacity Tier.
The Add External Capacity Tier window is displayed.
3. In the External Capacity Provider field, select the external capacity tier that you want to add.
4. Enter the server name that hosts the external capacity tier, the port to access the external
capacity tier, the access key ID of the external capacity tier, the secret key of the external tier,
and the container name.
5. Enable the SSL button if you want to transfer the data securely to the external capacity tier.
6. From the IPspace list, select the IPspace that is used to connect to the external capacity tier.
7. Click Save to save the external capacity tier.
8. Optional: Click Save and Attach Aggregates to save the external capacity tier and to attach
aggregates to it.
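If you prefer the command line, the same configuration can be sketched with the ONTAP CLI. The object-store name, server, container, and access key below are placeholders, and parameter names can vary between ONTAP releases:

```shell
# Define the object store that backs the external capacity tier
storage aggregate object-store config create -object-store-name ect1 \
  -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-bucket \
  -access-key <access-key-id> -ssl-enabled true -ipspace Default

# Attach the external capacity tier to an existing all-SSD aggregate
storage aggregate object-store attach -aggregate aggr1 -object-store-name ect1
```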
Related concepts
What external capacity tiers and tiering policies are on page 125
External capacity tiers provide storage for infrequently accessed data. You can attach an all flash
(all SSD) aggregate to an external capacity tier to store infrequently used data. You can use tiering
policies to decide whether data should be moved to an external capacity tier.
What a FabricPool is on page 125
FabricPool is a hybrid storage solution that uses an all flash (all SSD) aggregate as the
performance tier and an object store as the external capacity tier. Data in a FabricPool is stored in a
tier based on whether it is frequently accessed or not. Using a FabricPool helps you reduce storage
cost without compromising performance, efficiency, or protection.
Related tasks
Installing a CA certificate if you use StorageGRID Webscale on page 120
For ONTAP to authenticate with StorageGRID Webscale as the object store for FabricPool, you
must install a StorageGRID Webscale CA certificate on the cluster.
Related reference
Storage Tiers window on page 126
You can use the Storage Tiers window to view cluster-wide space details; view aggregate details;
and view external capacity tier details.
Editing aggregates
You can use System Manager to change the aggregate name, RAID type, and RAID group size of
an existing aggregate when required.
Before you begin
For modifying the RAID type of an aggregate from RAID4 to RAID-DP, the aggregate must
contain enough compatible spare disks, excluding the hot spares.
About this task
• You cannot change the RAID group of ONTAP systems that support array LUNs.
RAID0 is the only available option.
• You cannot change the RAID type of partitioned disks.
RAID-DP is the only option that is available for partitioned disks.
• You cannot rename a SnapLock Compliance aggregate.
• If the aggregate consists of SSDs with storage pool, you can modify only the name of the
aggregate.
• If the triple parity disk size is 10 TB, and the other disks are smaller than 10 TB in size, then
you can select RAID-DP or RAID-TEC as the RAID type.
• If the triple parity disk size is 10 TB, and if even one of the other disks is larger than 10 TB in
size, then RAID-TEC is the only available option for RAID type.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. Select the aggregate that you want to edit, and then click Edit.
3. In the Edit Aggregate dialog box, modify the aggregate name, the RAID type, and the RAID
group size, as required.
4. Click Save.
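The equivalent ONTAP CLI commands can be sketched as follows; the aggregate names are placeholders:

```shell
# Rename an aggregate
storage aggregate rename -aggregate aggr1 -newname aggr_data

# Change the RAID type from RAID4 to RAID-DP (requires enough compatible spares)
storage aggregate modify -aggregate aggr_data -raidtype raid_dp

# Change the maximum RAID group size
storage aggregate modify -aggregate aggr_data -maxraidsize 16
```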
Related concepts
What compatible spare disks are on page 122
In System Manager, compatible spare disks are disks that match the properties of other disks in the
aggregate. When you want to increase the size of an existing aggregate by adding HDDs (capacity
disks) or change the RAID type of an aggregate from RAID4 to RAID-DP, the aggregate must
contain sufficient compatible spare disks.
Related reference
Aggregates window on page 128
You can use the Aggregates window to create, display, and manage information about aggregates.
Storage Tiers window on page 126
You can use the Storage Tiers window to view cluster-wide space details; view aggregate details;
and view external capacity tier details.
Deleting aggregates
You can use System Manager to delete aggregates when you no longer require the data in the
aggregates. However, you cannot delete the root aggregate because it contains the root volume,
which contains the system configuration information.
Before you begin
• All the volumes and the associated Storage Virtual Machines (SVMs) contained by the
aggregate must be deleted.
• The aggregate must be offline.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. Select one or more aggregates that you want to delete, and then click Delete.
3. Select the confirmation check box, and then click Delete.
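As a CLI sketch (the aggregate name is a placeholder), the same two preconditions apply: the aggregate must contain no volumes and must be offline before it can be deleted:

```shell
# Take the aggregate offline, then delete it
storage aggregate offline -aggregate aggr1
storage aggregate delete -aggregate aggr1
```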
Related reference
Aggregates window on page 128
You can use the Aggregates window to create, display, and manage information about aggregates.
Storage Tiers window on page 126
You can use the Storage Tiers window to view cluster-wide space details; view aggregate details;
and view external capacity tier details.
2. In the Aggregates window, select the Flash Pool aggregate, and then click Add Cache.
3. In the Add Cache dialog box, perform the appropriate action:
Dedicated SSDs
Select the SSD size and the number of SSDs to include, and optionally modify
the RAID configuration:
a. Click Change.
b. In the Change RAID Configuration dialog box, specify the RAID type and
RAID group size, and then click Save.
4. Click Add.
For mirrored aggregates, an Add Cache dialog box is displayed with the information that twice
the number of selected disks will be added.
5. In the Add Cache dialog box, click Yes.
Result
The cache disks are added to the selected aggregate.
Related information
NetApp Technical Report 4070: Flash Pool Design and Implementation Guide
About this task
• If the RAID group is RAID-TEC, and if you are adding FSAS or MSATA disks that are
equal to or larger than 10 TB in size, then you can add them to All RAID groups, New RAID
group, or Specific RAID group.
The disks are added after downsizing the disk size to the size of the disks in the pre-existing
RAID group of the existing aggregate.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. In the Storage Tiers window, select the aggregate to which you want to add capacity disks,
and then click Actions > Add Capacity.
3. In the Add Capacity dialog box, perform the following steps:
a. Specify the disk type for the capacity disks by using the Disk Type to Add option.
b. Specify the number of capacity disks by using the Number of Disks or Partitions option.
4. Specify the RAID group to which the capacity disks are to be added by using the Add Disks
To option.
By default, System Manager adds the capacity disks to All RAID groups.
a. Click Change.
b. In the RAID Group Selection dialog box, specify the RAID group as New RAID group
or Specific RAID group by using the Add Disks To option.
Shared disks can be added only to the New RAID group option.
5. Click Add.
For mirrored aggregates, an Add Capacity dialog box is displayed with the information that
twice the number of selected disks will be added.
6. In the Add Capacity dialog box, click Yes to add the capacity disks.
Result
The capacity disks are added to the selected aggregate, and the aggregate size is increased.
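The same operation can be sketched with the ONTAP CLI; the aggregate name, disk count, and disk type below are placeholders:

```shell
# Add four SAS capacity disks to an existing aggregate
storage aggregate add-disks -aggregate aggr1 -diskcount 4 -disktype SAS
```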
Related concepts
What compatible spare disks are on page 122
In System Manager, compatible spare disks are disks that match the properties of other disks in the
aggregate. When you want to increase the size of an existing aggregate by adding HDDs (capacity
disks) or change the RAID type of an aggregate from RAID4 to RAID-DP, the aggregate must
contain sufficient compatible spare disks.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. In the Storage Tiers window, select the aggregate to which you want to add capacity disks,
and then click Actions > Add Capacity.
3. In the Add Capacity dialog box, perform the following steps:
a. Click Change.
b. In the Change RAID Configuration dialog box, specify the RAID group to which you
want to add the capacity disks.
You can change the default value All RAID groups to either Specific RAID group or
New RAID group.
c. Click Save.
Mirroring aggregates
You can use System Manager to protect data and provide increased resiliency by mirroring data in
real-time, within a single aggregate. Mirroring aggregates removes single points of failure in
connecting to disks and array LUNs.
Before you begin
There must be sufficient free disks in the other pool to mirror the aggregate.
About this task
You cannot mirror a Flash Pool aggregate when the cache source is a storage pool.
Steps
1. Choose one of the following methods:
• Click the Storage Tiers tab.
• Click Hardware and Diagnostics > Aggregates.
2. Select the aggregate that you want to mirror, and then click Actions > Mirror.
Note: SyncMirror is not supported on FabricPool.
3. In the Mirror this aggregate dialog box, click Mirror to initiate the mirroring.
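A CLI sketch of the same operation (the aggregate name is a placeholder); ONTAP selects the required disks from the other pool:

```shell
# Mirror an existing unmirrored aggregate (requires sufficient spares in the other pool)
storage aggregate mirror -aggregate aggr1
```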
Understanding aggregates
You can help meet the different security, backup, performance, and data sharing needs of users by
grouping the physical storage resources on your system into one or more aggregates. Each
aggregate has its own RAID configuration, plex structure, and set of assigned drives or array
LUNs.
Aggregates have the following characteristics:
• They can be composed of drives or array LUNs.
• They can be mirrored or unmirrored.
• If they are composed of drives, they can be single-tier (composed of only HDDs or only SSDs)
or they can be Flash Pool aggregates, which include both HDD RAID groups and an SSD
cache.
The cluster administrator can assign one or more aggregates to an SVM, in which case you can use
only those aggregates to contain volumes for that SVM.
Related tasks
Adding an external capacity tier on page 111
You can use System Manager to add an external capacity tier to an SSD aggregate or a VMDISK
aggregate. External capacity tiers provide storage for infrequently used data.
RAID groups are added, and the size of the SSD RAID groups is determined by the number of
SSDs in the storage pool.
When you enable data compression manually for a volume in a Flash Pool aggregate, adaptive
compression is enabled by default.
There is a platform-dependent maximum size for the SSD cache.
How you can use effective ONTAP disk type for mixing HDDs
Starting with Data ONTAP 8.1, certain ONTAP disk types are considered equivalent for the
purposes of creating and adding to aggregates, and managing spares. ONTAP assigns an effective
disk type for each disk type. You can mix HDDs that have the same effective disk type.
When the raid.disktype.enable option is set to off, you can mix certain types of HDDs
within the same aggregate. When the raid.disktype.enable option is set to on, the effective
disk type is the same as the ONTAP disk type. Aggregates can be created using only one disk type.
The default value for the raid.disktype.enable option is off.
Starting with Data ONTAP 8.2, the option raid.mix.hdd.disktype.capacity must be set to
on to mix disks of type BSAS, FSAS, and ATA. The option
raid.mix.hdd.disktype.performance must be set to on to mix disks of type FCAL and
SAS.
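On clustered ONTAP systems, RAID options such as these are typically set per node with the storage raid-options commands. This is a sketch only; the option names are taken from the text above, and the node name is a placeholder:

```shell
# Allow mixing BSAS, FSAS, and ATA disks within the same aggregate
storage raid-options modify -node node1 -name raid.mix.hdd.disktype.capacity -value on

# Allow mixing FCAL and SAS disks within the same aggregate
storage raid-options modify -node node1 -name raid.mix.hdd.disktype.performance -value on
```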
The following disk types map to the same effective disk type:
• BSAS, FSAS, and ATA map to a single effective (capacity) disk type.
• FCAL and SAS map to a single effective (performance) disk type.
You can increase the size of an existing non-root aggregate or a root aggregate containing disks by
adding capacity disks. You can use System Manager to add HDDs or SSDs of the selected ONTAP
disk type and to modify the RAID group options.
Editing aggregates on page 113
You can use System Manager to change the aggregate name, RAID type, and RAID group size of
an existing aggregate when required.
[Diagram: an aggregate with two plexes, each containing RAID groups rg0 through rg3, with disks drawn from pool0 and pool1]
The following diagram shows an aggregate composed of array LUNs with SyncMirror enabled
and implemented. A second plex has been created for the aggregate, plex1. Plex1 is a copy of
plex0, and the RAID groups are also identical.
[Diagram: a SyncMirror aggregate composed of array LUNs, with identical plexes plex0 and plex1, each containing RAID groups rg0 and rg1]
What a FabricPool is
FabricPool is a hybrid storage solution that uses an all flash (all SSD) aggregate as the
performance tier and an object store as the external capacity tier. Data in a FabricPool is stored in a
tier based on whether it is frequently accessed or not. Using a FabricPool helps you reduce storage
cost without compromising performance, efficiency, or protection.
Related tasks
Adding an external capacity tier on page 111
You can use System Manager to add an external capacity tier to an SSD aggregate or a VMDISK
aggregate. External capacity tiers provide storage for infrequently used data.
Attaching an aggregate to an external capacity tier on page 112
You can use System Manager to attach an all flash aggregate to an external capacity tier. External
capacity tiers provide storage for infrequently used data.
The Internal Tier panel, or the Performance Tier panel if the cluster has all flash (all SSD)
aggregates, displays cluster-wide space details such as the sum of the total sizes of all the
aggregates, the space used by the aggregates in the cluster, and the available space in the cluster.
The External Capacity Tier panel displays total licensed external capacity tiers in the cluster,
licensed space that is used in the cluster, and licensed space that is available in the cluster. It also
displays the unlicensed external capacity that is used.
Aggregates are grouped by type, and the aggregate panel displays details about the total aggregate
space, the space used, and the available space. You can select an aggregate and perform
aggregate-related actions.
You can also create, edit, view, and delete external capacity tier details.
Command buttons
Add Aggregate
Enables you to create an aggregate.
Actions
Provides the following options:
Change status to
Changes the status of the selected aggregate to one of the following statuses:
• Online
Read and write access to volumes contained in this aggregate is allowed.
• Offline
No read or write access is allowed.
• Restrict
Some operations—such as parity reconstruction—are allowed, but data access is
not allowed.
Add Capacity
Enables you to add capacity (HDDs or SSDs) to existing aggregates.
Add Cache
Enables you to add cache disks (SSDs) to existing HDD aggregates or Flash Pool
aggregates.
You cannot add cache to FabricPool.
This button is not available for a cluster containing nodes with All Flash Optimized
personality.
Mirror
Enables you to mirror the aggregates.
Volume Move
Enables you to move a FlexVol volume.
Attach External Capacity Tier
Enables you to attach an external capacity tier to the aggregate.
Aggregates
You can use System Manager to create aggregates to support the differing security, backup,
performance, and data sharing requirements of your users.
Aggregates window
You can use the Aggregates window to create, display, and manage information about aggregates.
• Aggregates window on page 128
• Aggregate list on page 129
• Details area on page 130
• Command buttons on page 128
Command buttons
Create
Opens the Create Aggregate dialog box, which enables you to create an aggregate.
Edit
Opens the Edit Aggregate dialog box, which enables you to change the name of an
aggregate or the level of RAID protection that you want to provide for this aggregate.
Delete
Deletes the selected aggregate.
Note: This button is disabled for the root aggregate.
Actions
Provides the following options:
Change status to
Changes the status of the selected aggregate to one of the following statuses:
• Online
Read and write access to volumes contained in this aggregate is allowed.
• Offline
No read or write access is allowed.
• Restrict
Some operations—such as parity reconstruction—are allowed, but data access is not
allowed.
Add Capacity
Enables you to add capacity (HDDs or SSDs) to existing aggregates.
Add Cache
Enables you to add cache disks (SSDs) to existing HDD aggregates or Flash Pool
aggregates.
This button is not available for a cluster containing nodes with All Flash Optimized
personality.
Mirror
Enables you to mirror the aggregates.
Volume Move
Enables you to move a FlexVol volume.
Attach External Capacity Tier
Enables you to attach an external capacity tier to the aggregate.
Refresh
Updates the information in the window.
Aggregate list
Displays the name and the space usage information for each aggregate.
Status
Displays the status of the aggregate.
Name
Displays the name of the aggregate.
Node
Displays the name of the node to which the disks of the aggregate are assigned.
This field is available only at the cluster level.
Type
Displays the type of aggregate.
This field is not displayed for a cluster containing nodes with All Flash Optimized
personality.
Used (%)
Displays the percentage of space used in the aggregate.
Available Space
Displays the available space in the aggregate.
Used Space
Displays the amount of space that is used for data in the aggregate.
Total Space
Displays the total space of the aggregate.
FabricPool
Displays whether the selected aggregate is attached to an external capacity tier.
External Capacity Tier
If the selected aggregate is attached to an external capacity tier, displays the name of the
external capacity tier.
Volume Count
Displays the number of volumes that are associated with the aggregate.
Disk Count
Displays the number of disks that are used to create the aggregate.
Flash Pool
Displays the total cache size of the Flash Pool aggregate. A value of NA indicates that the
aggregate is not a Flash Pool aggregate.
This field is not displayed for a cluster containing nodes with All Flash Optimized
personality.
Mirrored
Displays whether the aggregate is mirrored.
SnapLock Type
Displays the SnapLock type of the aggregate.
Details area
You can expand the aggregate to view information about the selected aggregate. You can click
Show More Details to view detailed information about the selected aggregate.
Overview tab
Displays detailed information about the selected aggregate, along with a pictorial
representation of the aggregate's space allocation; its space savings, including the total
logical space used, the total physical space used, the overall savings from storage
efficiency, the data reduction ratio, the FlexClone volume ratio, and the Snapshot copies
ratio; and its performance in IOPS and total data transfers.
Disk Information tab
Displays disk layout information, such as the name of the disk, disk type, physical size,
usable size, disk position, disk status, plex name, plex status, RAID group, RAID type, and
storage pool (if any) for the selected aggregate. The disk port that is associated with the disk
primary path and the disk name with the disk secondary path for a multipath configuration
are also displayed.
Volumes tab
Displays details about the total number of volumes on the aggregate, total aggregate space,
and the space committed to the aggregate. The total committed space is the sum of the total
size of all the volumes (online and offline) and the Snapshot reserve space of the online
volumes.
Performance tab
Displays graphs that show the performance metrics of the aggregates, including total
transfers, IOPS, and write workload impact. Performance metrics data for read, write, and
total transfers is displayed, and the data for SSDs and HDDs is recorded separately.
Performance metrics data for the impact of the write workload on "nvlog" and dirty buffers is
also displayed.
Changing the client time zone or the cluster time zone impacts the performance metrics
graphs. You must refresh your browser to see the updated graphs.
Related tasks
Provisioning storage through aggregates on page 37
You can create an aggregate or a Flash Pool aggregate to provide storage for one or more volumes
by using System Manager.
Deleting aggregates on page 114
You can use System Manager to delete aggregates when you no longer require the data in the
aggregates. However, you cannot delete the root aggregate because it contains the root volume,
which contains the system configuration information.
Editing aggregates on page 113
You can use System Manager to change the aggregate name, RAID type, and RAID group size of
an existing aggregate when required.
Storage pools
You can use System Manager to create storage pools to enable SSDs to be shared by multiple
Flash Pool aggregates.
Related information
Disk and aggregate management
Steps
1. Click Hardware and Diagnostics > Storage Pools.
2. In the Storage Pools window, select the storage pool, and then click Add Disks.
3. In the Add Disks dialog box, specify the number of disks that you want to add.
4. Click Next.
5. In the Summary dialog box, review how the cache is distributed among various aggregates and
the free space of the storage pool.
6. Click Add.
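The equivalent CLI sketch (the storage pool and disk names are placeholders):

```shell
# Add two spare SSDs to an existing storage pool
storage pool add -storage-pool sp1 -disk-list 1.0.22,1.0.23
```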
Related reference
Storage Pools window on page 136
You can use the Storage Pools window to create, display, and manage a dedicated cache of SSDs,
also known as storage pools. These storage pools can be associated with a non-root aggregate to
provide SSD cache and with a Flash Pool aggregate to increase its size.
A single spare SSD can serve an entire storage pool: if an SSD in that storage pool becomes
unavailable, Data ONTAP can use the spare SSD to reconstruct the partitions of the
malfunctioning SSD. You do not need to reserve any allocation units as spare capacity; Data
ONTAP can use only a full, unpartitioned SSD as a spare for SSDs in a storage pool.
After you add an SSD to a storage pool, you cannot remove it, just as you cannot remove disks
from an aggregate. If you want to use the SSDs in a storage pool as discrete drives again, you must
destroy all Flash Pool aggregates to which the storage pool's allocation units have been allocated,
and then destroy the storage pool.
How Flash Pool SSD partitioning increases cache allocation flexibility for Flash Pool
aggregates
Flash Pool SSD partitioning, also known as Advanced Drive Partitioning, enables you to group
SSDs together into an SSD storage pool that can be allocated to multiple Flash Pool aggregates.
This amortizes the cost of the parity SSDs over more aggregates, increases SSD allocation
flexibility, and maximizes SSD performance.
The storage pool is associated with an HA pair, and can be composed of SSDs owned by either
node in the HA pair.
When you add an SSD to a storage pool, it becomes a shared SSD, and it is divided into 4
partitions.
Storage from an SSD storage pool is divided into allocation units, each of which represents 25%
of the total storage capacity of the storage pool. Allocation units contain one partition from each
SSD in the storage pool, and are added to a Flash Pool cache as a single RAID group. By default,
for storage pools associated with an HA pair, two allocation units are assigned to each of the HA
partners, but you can reassign the allocation units to the other HA partner if needed (allocation
units must be owned by the node that owns the aggregate).
SSD storage pools do not have a RAID type. When an allocation unit is added to a Flash Pool
aggregate, the appropriate number of partitions are designated to provide parity to that RAID
group.
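The allocation-unit layout described above can be inspected and adjusted from the CLI. This is a sketch only; the storage pool and node names are placeholders, and command availability can vary by release:

```shell
# Display storage pools and their allocation-unit sizes
storage pool show -storage-pool sp1

# Reassign one allocation unit to the HA partner that owns the target aggregate
storage pool reassign -storage-pool sp1 -from-node node1 -to-node node2 -allocation-units 1
```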
The following diagram shows one example of Flash Pool SSD partitioning. The SSD storage pool
pictured is providing cache to two Flash Pool aggregates:
Storage pool SP1 is composed of 5 SSDs; in addition, there is one hot spare SSD available to
replace any SSD that experiences a failure. Two of the storage pool's allocation units are allocated
to Flash Pool FP1, and two are allocated to Flash Pool FP2. FP1 has a cache RAID type of
RAID4, so the allocation units provided to FP1 contain only one partition designated for parity.
FP2 has a cache RAID type of RAID-DP, so the allocation units provided to FP2 include a parity
partition and a double-parity partition.
In this example, two allocation units are allocated to each Flash Pool aggregate; however, if one
Flash Pool aggregate needed a larger cache, you could allocate three of the allocation units to that
Flash Pool aggregate, and only one to the other.
Considerations for adding SSDs to an existing storage pool versus creating a new one
You can increase the size of your SSD cache in two ways—by adding SSDs to an existing SSD
storage pool or by creating a new SSD storage pool. The best method for you depends on your
configuration and plans for the storage.
The choice between creating a new storage pool and adding storage capacity to an existing one is
similar to deciding whether to create a new RAID group or add storage to an existing one:
• If you are adding a large number of SSDs, creating a new storage pool provides more
flexibility because you can allocate the new storage pool differently from the existing one.
• If you are adding only a few SSDs, and increasing the RAID group size of your existing Flash
Pool caches is not an issue, then adding SSDs to the existing storage pool keeps your spare and
parity costs lower, and automatically allocates the new storage.
If your storage pool is providing allocation units to Flash Pool aggregates whose caches have
different RAID types, and you expand the size of the storage pool beyond the maximum RAID4
RAID group size, the newly added partitions in the RAID4 allocation units are unused.
When you add SSDs to an existing storage pool, the SSDs must be owned by the same HA pair
that owns the existing SSDs in the storage pool; the new SSDs can be owned by either node of
that HA pair.
Spare Cache
Displays the available spare cache size of the storage pool.
Used Cache (%)
Displays the percentage of used cache size of the storage pool.
Allocation Unit
Displays the minimum allocation unit of the total cache size that you can use to increase the
size of your storage pool.
Owner
Displays the name of the HA pair or the node with which the storage pool is associated.
State
Displays the state of the storage pool, which can be Normal, Degraded, Creating, Deleting,
Reassigning, or Growing.
Is Healthy
Displays whether the storage pool is healthy.
Details tab
Displays detailed information about the selected storage pool, such as the name, health, storage
type, disk count, total cache, spare cache, used cache size (in percent), and allocation unit. The tab
also displays the names of the aggregates that are provisioned by the storage pool.
Disks tab
Displays detailed information about the disks in the selected storage pool, such as the names, disk
types, usable size, and total size.
Related tasks
Adding disks to a storage pool on page 131
You can add SSDs to an existing storage pool and increase its cache size by using System
Manager.
Creating a storage pool on page 131
A storage pool is a collection of SSDs (cache disks). You can use System Manager to combine
SSDs to create a storage pool, which enables you to share the SSDs and SSD spares between an
HA pair for allocation to two or more Flash Pool aggregates at the same time.
Deleting storage pools on page 132
You might want to delete a storage pool when the cache of the storage pool is not optimal or when
it is no longer used by any aggregate or Flash Pool aggregate. You can delete a storage pool by
using the Delete Storage Pool dialog box in System Manager.
Disks
You can use System Manager to manage disks.
Related information
Disk and aggregate management
FlexArray virtualization installation requirements and reference
ONTAP concepts
For drives using root-data partitioning and SSDs in storage pools, a single drive might be used in
multiple ways for RAID. For example, the root partition of a partitioned drive might be a spare
partition, whereas the data partition might be in use for parity. For this reason, the RAID drive
type for partitioned drives and SSDs in storage pools is displayed simply as shared.
Data disk
Holds data stored on behalf of clients within RAID groups (and any data generated about
the state of the storage system as a result of a malfunction).
Spare disk
Does not hold usable data, but is available to be added to a RAID group in an aggregate.
Any functioning disk that is not assigned to an aggregate but is assigned to a system
functions as a hot spare disk.
Parity disk
Stores row parity information that is used for data reconstruction when a single disk drive
fails within the RAID group.
dParity disk
Stores diagonal parity information that is used for data reconstruction when two disk drives
fail within the RAID group, if RAID-DP is enabled.
FC-connected storage
Storage arrays
ONTAP disk type: LUN
Disk class: N/A
Industry-standard disk type: LUN
Description: Logical storage device that is backed by storage arrays and used by ONTAP as a
disk. These LUNs are referred to as array LUNs to distinguish them from the LUNs that ONTAP
serves to clients.
Related information
NetApp Hardware Universe
NetApp Technical Report 3437: Storage Subsystem Resiliency Guide
• Data ONTAP supports RAID4 and RAID-DP on native disk shelves, but supports only
RAID0 on array LUNs.
Follow these steps when planning your ONTAP RAID groups for array LUNs:
1. Plan the size of the aggregate that best meets your data needs.
2. Plan the number and size of RAID groups that you need for the size of the aggregate.
Note: It is best to use the default RAID group size for array LUNs. The default RAID group
size is adequate for most organizations. The default RAID group size is different for array
LUNs and disks.
3. Plan the size of the LUNs that you need in your RAID groups.
• To avoid a performance penalty, all array LUNs in a particular RAID group should be of
the same size.
• The LUNs should be of the same size in all RAID groups in the aggregate.
4. Ask the storage array administrator to create the number of LUNs of the size that you need for
the aggregate.
The LUNs should be optimized for performance, according to the instructions in the storage
array vendor documentation.
5. Create all the RAID groups in the aggregate simultaneously.
Note:
• Do not mix array LUNs from storage arrays with different characteristics in the same
ONTAP RAID group.
• If you create a new RAID group for an existing aggregate, ensure that the new RAID
group is of the same size as the other RAID groups in the aggregate, and that the array
LUNs are of the same size as the LUNs in the other RAID groups in the aggregate.
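As a rough illustration of the planning arithmetic in step 2, the following Python sketch sizes RAID groups from equally sized array LUNs. The 8-LUN group size used here is a placeholder for illustration only, not ONTAP's actual default, which differs for array LUNs and disks.

```python
import math

def plan_array_lun_layout(aggregate_size_tb, lun_size_tb, raid_group_size=8):
    """Compute how many equally sized array LUNs and RAID groups are
    needed for an aggregate of the given size (illustrative arithmetic)."""
    luns_needed = math.ceil(aggregate_size_tb / lun_size_tb)
    raid_groups = math.ceil(luns_needed / raid_group_size)
    return {
        "luns_needed": luns_needed,
        "raid_groups": raid_groups,
        "luns_in_last_group": luns_needed - (raid_groups - 1) * raid_group_size,
    }

# A 48 TB aggregate built from 4 TB array LUNs: 12 LUNs in 2 RAID groups.
print(plan_array_lun_layout(aggregate_size_tb=48, lun_size_tb=4))
```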
Cluster Management Using OnCommand System Manager 144
Managing physical storage
Disks window
You can use the Disks window to view all the disks in your storage system.
• Command buttons on page 144
• Summary on page 144
• Inventory on page 144
• Inventory details area on page 146
Command buttons
Assign
Assigns or reassigns the ownership of the disks to a node.
This button is enabled only if the container type of the selected disks is unassigned, spare,
or shared.
Zero Spares
Erases all the data, and formats the spare disks and array LUNs.
Refresh
Updates the information in the window.
Tabs
Summary
Displays detailed information about the disks in the cluster, including the size of the spare disks
and assigned disks. The tab also graphically displays information about spare disks, aggregates,
and root aggregates for HDDs and information about spare disks, disks in a storage pool,
aggregates, Flash Pool aggregates, and root aggregates for cache disks (SSDs).
The HDD panel is not displayed for systems with All Flash Optimized personality.
The details panel provides additional information about partitioned and unpartitioned spare disks
(disk type, node, disk size, RPM, checksum, number of available disks, and spare capacity), in
tabular format.
Inventory
Name
Displays the name of the disk.
Container Type
Displays the purpose for which the disk is used. The possible values are Aggregate, Broken,
Foreign, Label Maintenance, Maintenance, Shared, Spare, Unassigned, Volume, Unknown,
and Unsupported.
Partition Type
Displays the partition type of the disk.
Node Name
Displays the name of the node that contains the aggregate.
This field is available only at the cluster level.
Home owner
Displays the name of the home node to which this disk is assigned.
Current owner
Displays the name of the node that currently owns this disk.
Root owner
Displays the name of the node that currently owns the root partition of this disk.
Data Owner
Displays the name of the node that currently owns the data partition of this disk.
Data1 Owner
Displays the name of the node that currently owns the data1 partition of the disk.
Data2 Owner
Displays the name of the node that currently owns the data2 partition of the disk.
Storage Pool
Displays the name of the storage pool with which the disk is associated.
Type
Displays the type of the disk.
Firmware Version
Displays the firmware version of the disk.
Model
Displays the model of the disk.
RPM
Displays the effective speed of the disk drive when the option
raid.mix.hdd.rpm.capacity is enabled, and displays the actual speed of the disk drive
when the option raid.mix.hdd.rpm.capacity is disabled.
This field is not applicable to SSDs.
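The display rule for this column can be expressed as a one-line conditional. The function below is a hypothetical sketch of that rule, not part of any NetApp API.

```python
def displayed_rpm(actual_rpm, effective_rpm, mix_rpm_capacity_enabled):
    """Show the effective speed when raid.mix.hdd.rpm.capacity is enabled,
    otherwise the actual speed of the disk drive."""
    return effective_rpm if mix_rpm_capacity_enabled else actual_rpm

print(displayed_rpm(10000, 7200, mix_rpm_capacity_enabled=True))   # -> 7200
print(displayed_rpm(10000, 7200, mix_rpm_capacity_enabled=False))  # -> 10000
```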
Effective Size
Displays the usable space available on the disk.
Physical Space
Displays the total physical space of the disk.
Shelf
Displays the shelf on which the physical disks are located.
This field is hidden by default.
Bay
Displays the bay within the shelf for the physical disk.
This field is hidden by default.
Pool
Displays the name of the pool to which the selected disk is assigned.
This field is hidden by default.
Checksum
Displays the type of the checksum.
This field is hidden by default.
Carrier ID
Specifies information about disks that are located within the specified multi-disk carrier.
The ID is a 64-bit value.
This field is hidden by default.
Array LUNs
You can use System Manager to assign array LUNs to an existing aggregate and manage array
LUNs.
Related information
FlexArray virtualization installation requirements and reference
Steps
1. Click Hardware and Diagnostics > Array LUNs.
2. Select the spare array LUNs that you want to reassign, and then click Assign.
3. In the Warning dialog box, click Continue.
4. In the Assign Array LUNs dialog box, select the node to which you want to reassign the spare
array LUNs.
5. Click Assign.
[Figure: How disks and array LUNs become available for use. Creating array LUNs on a
storage array, or installing a new disk on a disk shelf, produces an unowned disk or array
LUN. Automatic or manual assignment of the new disk or array LUN to a system running
Data ONTAP makes it a spare disk or array LUN: it is owned by the storage system, but it
cannot be used yet. Optionally adding the spare to an aggregate makes it an in-use disk or
array LUN, which is in use by the system running Data ONTAP that owns it.]
You cannot mix the following in the same aggregate:
• Array LUNs from storage arrays from the same vendor but from different storage array
families
Note: Storage arrays in the same family share the same characteristics—for example, the
same performance characteristics. For more information about how Data ONTAP defines
family members for a vendor, see the FlexArray Virtualization Implementation Guide for
Third-Party Storage guide.
FlexArray virtualization implementation for third-party storage
• Array LUNs from different drive types (for example, FC and SATA)
You cannot mix array LUNs from different drive types in the same aggregate even if array
LUNs are from the same series and the same vendor. Before setting up this type of
configuration, you should consult your authorized reseller to plan the best implementation for
your environment.
Node name
Specifies the name of the node to which the array LUN belongs.
Home owner
Displays the name of the home node to which the array LUN is assigned.
Current owner
Displays the name of the node that currently owns the array LUN.
Array name
Specifies the name of the array.
Pool
Displays the name of the pool to which the selected array LUN is assigned.
Details area
The area below the Array LUNs list displays detailed information about the selected array LUN.
Nodes
You can use System Manager to view the details of the nodes in the cluster.
Nodes window
You can use the Nodes window to view the details of the nodes in the cluster.
• Command buttons
• Nodes list
Command buttons
Initialize ComplianceClock
Initializes the ComplianceClock of the selected node to the current value of the system
clock.
Refresh
Updates the information in the window.
Nodes list
Name
Displays the name of the node.
State
Displays the state of the node, whether it is up or down.
Up Time
Displays the duration for which the node is up.
Data ONTAP Version
Displays the Data ONTAP version that is installed on the node.
Model
Displays the platform model number of the node.
System ID
Displays the ID of the node.
Serial No
Displays the serial number of the node.
All Flash Optimized
Displays whether the node has the All Flash Optimized personality.
Details area
Displays detailed information about the selected node.
Details tab
Displays information related to the selected node such as name of the node, state of the
node, and the duration for which the node is up.
Performance tab
Displays throughput, IOPS, and latency of the selected node.
Changing the client time zone or the cluster time zone impacts the performance metrics
graphs. Refresh your browser to see the updated graphs.
Flash Cache
You can use System Manager to manage Flash Cache.
Events
You can use System Manager to view the event log and event notifications.
Events window
You can use the Events window to view the event log and event notifications.
Command buttons
Refresh
Updates the information in the window.
Events list
Time
Displays the time when the event occurred.
Node
Displays the node and the cluster on which the event occurred.
Severity
Displays the severity of the event. The possible severity levels are:
• Emergency
Specifies that the event source unexpectedly stopped, and the system experienced
unrecoverable data loss. You must take corrective action immediately to avoid extended
downtime.
• Alert
Specifies that the event source has an alert, and action must be taken to avoid downtime.
• Critical
Specifies that the event source is critical, and might lead to service disruption if
corrective action is not taken immediately.
• Error
Specifies that the event source is still performing, and a corrective action is required to
avoid service disruption.
• Warning
Specifies that the event source experienced an occurrence that you must be aware of.
Events of this severity might not cause service disruption; however, corrective action
might be required.
• Notice
Specifies that the event source is normal, but the severity is a significant condition that
you must be aware of.
• Informational
Specifies that the event source has an occurrence that you must be aware of. Corrective
action might not be required.
• Debug
Specifies that the event source includes a debugging message.
By default, the alert severity type, emergency severity type, and the error severity type are
displayed.
Source
Displays the source of the event.
Event
Displays the description of the event.
Details area
Displays the event details, including the event description, message name, sequence number,
message description, and corrective action for the selected event.
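The default severity filter described above (the emergency, alert, and error severity types) can be modeled with a short Python sketch. The event dictionaries here are hypothetical stand-ins for real event records.

```python
SEVERITY_ORDER = ["emergency", "alert", "critical", "error",
                  "warning", "notice", "informational", "debug"]

DEFAULT_DISPLAYED = {"emergency", "alert", "error"}

def filter_events(events, displayed=DEFAULT_DISPLAYED):
    """Keep only events whose severity is in the displayed set,
    ordered most severe first."""
    kept = [e for e in events if e["severity"] in displayed]
    return sorted(kept, key=lambda e: SEVERITY_ORDER.index(e["severity"]))

events = [
    {"time": "10:01", "severity": "debug", "event": "trace point"},
    {"time": "10:02", "severity": "error", "event": "disk I/O retry"},
    {"time": "10:03", "severity": "alert", "event": "shelf fault"},
]
print(filter_events(events))   # the alert, then the error; debug is hidden
```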
System alerts
You can use System Manager to monitor different parts of a cluster.
Related information
System administration
If active alerts exist, the system health monitors report a degraded status for the cluster. The
alerts include the information that you need to respond to degraded system health.
If the status is degraded, you can view details about the problem, including the probable cause and
recommended recovery actions. After you resolve the problem, the system health status
automatically returns to OK.
The system health status reflects multiple separate health monitors. A degraded status in an
individual health monitor causes a degraded status for the overall system health.
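The roll-up rule is simple: any degraded monitor degrades the whole. A minimal sketch, assuming monitor names and statuses as plain strings:

```python
def overall_health(monitors):
    """Overall system health is degraded if any individual health
    monitor reports a degraded status; otherwise it is OK."""
    return "degraded" if "degraded" in monitors.values() else "ok"

print(overall_health({"node-connect": "ok", "system-connect": "degraded"}))  # -> degraded
print(overall_health({"node-connect": "ok", "system-connect": "ok"}))        # -> ok
```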
Data ONTAP supports the following cluster switches for system health monitoring in your cluster:
• NetApp CN1601
• NetApp CN1610
• Cisco Nexus 5010
• Cisco Nexus 5020
• Cisco Nexus 5596
• Cisco Catalyst 2960-24TT-L
You can use the System Alerts window to learn more about system health alerts. You can also
acknowledge, delete, and suppress alerts from the window.
Resource
Displays the resource that generated the alert, such as a specific shelf or disk.
Time
Displays the time when the alert was generated.
Details area
The details area displays detailed information about the alert, such as the time when the alert was
generated and whether the alert has been acknowledged. The area also includes information about
the probable cause and possible effect of the condition generated by the alert, and the
recommended actions to correct the problem reported by the alert.
Related tasks
Acknowledging system health alerts on page 154
You can use System Manager to acknowledge and respond to system health alerts for subsystems.
You can use the information displayed to take the recommended action and correct the problem
reported by the alert.
Suppressing system health alerts on page 154
You can use System Manager to suppress system health alerts that do not require any intervention
from you.
Deleting system health alerts on page 155
You can use System Manager to delete system health alerts to which you have already responded.
AutoSupport notifications
You can use System Manager to configure AutoSupport notifications that help you to monitor your
storage system health.
a. If you want to generate AutoSupport data for a specific node, clear the Generate
Autosupport data for all nodes check box, and then select the node.
b. Type the case number.
4. Click Generate.
5. In the Confirmation dialog box, click OK.
AutoSupport window
The AutoSupport window enables you to view the current AutoSupport settings for your system.
You can also change your system's AutoSupport settings.
Command buttons
Enable
Enables AutoSupport notification.
Disable
Disables AutoSupport notification.
Edit
Opens the Edit AutoSupport Settings dialog box, which enables you to specify the email
address from which email notifications are sent and to add the email addresses of
multiple recipients.
Test
Opens the AutoSupport Test dialog box, which enables you to generate an AutoSupport test
message.
AutoSupport Request
Provides the following AutoSupport requests:
Generate AutoSupport
Generates AutoSupport data for a selected node or all nodes.
View Previous Summary
Displays the status and details of all the previous AutoSupport data.
Refresh
Updates the information in the window.
Details area
The details area displays AutoSupport setting information such as the node name, AutoSupport
status, transport protocol used, and name of the proxy server.
Related tasks
Setting up a support page on page 25
Setting up the support page completes the cluster setup, and involves setting up the AutoSupport
messages and event notifications, and for single-node clusters, configuring system backup.
Jobs
You can use System Manager to manage job tasks such as displaying job information and
monitoring the progress of a job.
Jobs
Jobs are asynchronous, typically long-running volume operations, such as copying,
moving, or mirroring data. Jobs are placed in a job queue and are run when resources are available.
The cluster administrator can perform all the tasks related to job management.
A job can belong to one of the following categories:
• A server-affiliated job is placed in queue by the management framework to be run in a specific
node.
• A cluster-affiliated job is placed in queue by the management framework to be run in any node
in the cluster.
• A private job is specific to a node and does not use the replicated database (RDB) or any other
cluster mechanism.
You require the advanced privilege level or higher to run the commands to manage private jobs.
You can manage jobs in the following ways:
• Displaying job information, including the following:
◦ Jobs on a per-node basis
◦ Cluster-affiliated jobs
◦ Completed jobs
◦ Job history
• Monitoring a job's progress
• Displaying information about the initialization state for job managers.
You can determine the outcome of a completed job by checking the event log.
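The queueing behavior described above can be modeled as a toy Python class. The job names, categories, and event-log fields here are illustrative assumptions, not ONTAP's actual job manager interfaces.

```python
from collections import deque

class JobQueue:
    """Toy model: jobs wait in a queue, run when picked up, and leave a
    record in an event log so the outcome of a completed job can be
    checked later, as the text describes."""

    def __init__(self):
        self.pending = deque()   # jobs waiting for resources
        self.event_log = []      # completed jobs and their outcomes

    def submit(self, name, category="cluster-affiliated"):
        # Categories per the text: server-affiliated, cluster-affiliated, private.
        self.pending.append((name, category))

    def run_next(self):
        """Run the oldest queued job; in this toy model every job succeeds."""
        if not self.pending:
            return None
        name, category = self.pending.popleft()
        self.event_log.append({"job": name, "category": category,
                               "state": "success"})
        return name

q = JobQueue()
q.submit("volume move")
q.submit("snapmirror update")
q.run_next()
print(q.event_log[0]["job"])   # outcome of the completed job, via the log
print(len(q.pending))          # one job still queued
```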
Job window
You can use the Job window to manage job tasks such as displaying job information and
monitoring the progress of a job.
Command button
Refresh
Updates the information in the window.
Tabs
Current Jobs
This tab displays information about the job tasks that are in progress.
Job History
This tab displays information about all the jobs.
Job list
Job ID
Displays the ID of the job.
Start Time
Displays the start time of the job.
Job Name
Displays the name of the job.
Node
Displays the name of the node.
State
Displays the state of the job.
Job Description
Displays the description of the job.
Progress
Displays the progress of the job.
Schedule Name
Displays the name of the schedule.
Monitoring SVMs
The dashboard in System Manager enables you to monitor the health and performance of a
Storage Virtual Machine (SVM).
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. View the details in the dashboard panels.
5. In the Service tab, specify the name service switch sources for the required database types and
the order in which they should be consulted to retrieve name service information.
The default values for each of the database types are as follows:
Deleting SVMs
You can use System Manager to delete Storage Virtual Machines (SVMs) that you no longer
require from the storage system configuration.
Before you begin
You must have completed the following tasks:
1. Disabled the Snapshot copies, data protection (DP) mirrors, and load-sharing (LS) mirrors for
all the volumes
Note: You must use the CLI to disable LS mirrors.
2. Deleted all the igroups that belong to the SVM manually if you are deleting SVMs with
FlexVol volume
3. Deleted all the portsets
4. Deleted all the volumes in the SVM, including the root volume
5. Unmapped the LUNs, taken them offline, and deleted them
6. Deleted the CIFS server if you are deleting SVMs with FlexVol volume
7. Deleted any customized user accounts and roles that are associated with the SVM
8. Stopped the SVM
About this task
When you delete SVMs, the following objects associated with the SVM are also deleted:
• LIFs, LIF failover groups, and LIF routing groups
• Export policies
• Efficiency policies
If you delete SVMs that are configured to use Kerberos, or modify SVMs to use a different
Service Principal Name (SPN), the original service principal of the SVM is not automatically
deleted or disabled from the Kerberos realm. You must manually delete or disable the principal.
You must have the Kerberos realm administrator's user name and password to delete or disable the
principal.
If you want to move data from an SVM to another SVM before you delete the first SVM, you can
use the SnapMirror technology.
Steps
1. Click the SVMs tab.
2. Select the SVM that you want to delete, and then click Delete.
3. Select the confirmation check box, and then click Delete.
Starting SVMs
You can use System Manager to provide data access from a Storage Virtual Machine (SVM) by
starting the SVM.
Steps
1. Click the SVMs tab.
2. Select the SVM that you want to start, and then click Start.
Result
The SVM starts serving data to clients.
Stopping SVMs
You can use System Manager to stop a Storage Virtual Machine (SVM) if you want to
troubleshoot any issue with the SVM, delete the SVM, or stop data access from the SVM.
Before you begin
All the clients connected to the SVM must be disconnected.
Attention: If any clients are connected to the SVM when
you stop it, data loss might occur.
[Figure: An SVM with multiple FlexVol volumes. Clients access data through the data
LIFs by using the NFS, CIFS, iSCSI, and FC protocols, while the SVM administrator
manages the SVM through the management LIF.]
Managing SVMs
Storage Virtual Machine (SVM) administrators can administer SVMs and their resources, such as
volumes, protocols, and services, depending on the capabilities assigned by the cluster
administrator. SVM administrators cannot create, modify, or delete SVMs.
Note: SVM administrators cannot log in to System Manager.
SVM administrators might have all or some of the following administration capabilities:
• Data access protocol configuration
SVM administrators can configure data access protocols, such as NFS, CIFS, iSCSI, and Fibre
Channel (FC) protocol (Fibre Channel over Ethernet or FCoE included).
• Services configuration
SVM administrators can configure services such as LDAP, NIS, and DNS.
• Storage management
SVM administrators can manage volumes, quotas, qtrees, and files.
• LUN management in a SAN environment
• Management of Snapshot copies of the volume
• Monitoring SVMs
SVM administrators can monitor jobs, network connections, network interfaces, and the
SVM health.
Related information
NetApp Documentation: ONTAP 9
Types of SVMs
A cluster consists of four types of SVMs, which help in managing the cluster and its resources
and data access to the clients and applications.
A cluster contains the following types of SVMs:
• Admin SVM
The cluster setup process automatically creates the admin SVM for the cluster. The admin
SVM represents the cluster.
• Node SVM
A node SVM is created when the node joins the cluster, and the node SVM represents the
individual nodes of the cluster.
• System SVM (advanced)
A system SVM is automatically created for cluster-level communications in an IPspace.
• Data SVM
A data SVM represents the data serving SVMs. After the cluster setup, a cluster administrator
must create data SVMs and add volumes to these SVMs to facilitate data access from the
cluster.
A cluster must have at least one data SVM to serve data to its clients.
Note: Unless otherwise specified, the term SVM refers to data (data-serving) SVM, which
applies to SVMs with FlexVol volumes.
In the CLI, SVMs are displayed as Vservers.
Database type   Defines name service sources for...     Valid sources are...
hosts           Converting host names to IP addresses   files, dns
group           Looking up user group information       files, nis, ldap
passwd          Looking up user information             files, nis, ldap
netgroup        Looking up netgroup information         files, nis, ldap
namemap         Mapping user names                      files, ldap
Source types
The sources specify which name service source to use for retrieving the appropriate information.
nis
External NIS servers as specified in the NIS domain configuration of the SVM
(vserver services name-service nis-domain)
ldap
External LDAP servers as specified in the LDAP client configuration of the SVM
(vserver services name-service ldap)
dns
External DNS servers as specified in the DNS configuration of the SVM
(vserver services name-service dns)
Even if you plan to use NIS or LDAP for both data access and SVM administration authentication,
you should still include files and configure local users as a fallback in case NIS or LDAP
authentication fails.
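The switch semantics — consult each configured source in order, with files as a local fallback — can be sketched as follows. The lookup tables are stand-ins for the real NIS, LDAP, DNS, and local-file backends.

```python
def lookup(name, ns_switch, sources):
    """Try each configured source in order, as the name service switch
    does; the first source that returns a result wins."""
    for source in ns_switch:
        backend = sources.get(source, {})
        if name in backend:
            return source, backend[name]
    return None, None

sources = {
    "nis": {},                         # NIS is unreachable or has no entry
    "files": {"admin": "local-entry"}, # local users configured as fallback
}
# files is configured last as a fallback, per the recommendation above.
print(lookup("admin", ["nis", "files"], sources))   # -> ('files', 'local-entry')
```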
Related tasks
Editing SVM settings on page 164
You can use System Manager to edit the properties of Storage Virtual Machines (SVMs), such as
the name service switch, name mapping switch, and aggregate list.
Edit
Opens the Edit Storage Virtual Machine dialog box, which enables you to modify the
properties, such as the name service switch, name mapping switch, and aggregate list, of a
selected SVM.
Delete
Deletes the selected SVMs.
Start
Starts the selected SVM.
Stop
Stops the selected SVM.
Manage
Manages the storage, policies, and configuration for the selected SVM.
Refresh
Updates the information in the window.
SVM list
The SVM list displays the name of each SVM and the allowed protocols on it.
You can view only data SVMs by using System Manager.
Name
Displays the name of the SVM.
State
Displays the SVM state, such as Running, Starting, Stopped, or Stopping.
Subtype
Displays the subtype of the SVM, which can be one of the following:
• default
Specifies that the SVM is a data-serving SVM.
• dp-destination
Specifies that the SVM is configured for disaster recovery.
• sync-source
Specifies that the SVM is in the primary site of a MetroCluster configuration.
• sync-destination
Specifies that the SVM is in the surviving site of a MetroCluster configuration.
Allowed Protocols
Displays the allowed protocols, such as CIFS and NFS, on each SVM.
IPspace
Displays the IPspace of the associated SVM.
Volume Type
Displays the allowed volume type, such as FlexVol volume, on each SVM.
Configuration State
Displays whether the configuration state of the SVM is locked or unlocked.
Details area
The area below the SVM list displays detailed information, such as the type of volumes allowed,
language, and Snapshot policy, about the selected SVM.
You can also configure the protocols that are allowed on this SVM. If you have not configured the
protocols while creating the SVM, you can click the protocol link to configure the protocol.
You cannot configure protocols for an SVM configured for disaster recovery by using System
Manager.
Note: If the FCP service is already started for the SVM, clicking the FC/FCoE link opens the
Network Interfaces window.
Green
The protocol is configured and LIFs exist. You can click the link to view the
configuration details.
Note: Configuration might be partially completed. However, the service is running.
You can create the LIFs and complete the configuration from the Network Interfaces
window.
Grey
The protocol is not configured. You can click the protocol link to configure the
protocol.
Grey border
The protocol license has expired or is missing. You can click the protocol link to add
the licenses in the Licenses page.
Volumes
You can use System Manager to create, edit, and delete volumes.
You can access all the volumes in the cluster by using the Volumes tab or you can access the
volumes specific to an SVM by using SVMs > Volumes.
Note: The Volumes tab is displayed only if you have enabled the CIFS and NFS licenses.
Related information
ONTAP concepts
Logical storage management
◦ Disrupt
Deletes the Snapshot copies that can disrupt the data transfer.
• Select the caching policy that you want to assign to the volume.
This option is available only for FlexVol volumes in a Flash Pool aggregate.
• Select the retention priority for cached data in the volume.
This option is available only for FlexVol volumes in a Flash Pool aggregate.
• Specify the fractional reserve that you want to set for the volume.
• Update the access time for reading the file.
This option is disabled for SnapLock volumes.
9. Click Save and Close.
Related tasks
Setting up CIFS on page 250
You can use System Manager to enable and configure CIFS servers to allow CIFS clients to access
files on the cluster.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
Deleting volumes
You can use System Manager to delete a volume when you no longer require the data that it
contains, or if you have copied the data that it contains to another location. When you delete a
volume, all the data in the volume is destroyed, and you cannot recover this data.
Before you begin
• If the FlexVol volume is cloned, the FlexClone volumes must be either split from the parent
volume or destroyed.
• The volume must be unmounted and in the offline state.
• If the volume is in one or more SnapMirror relationships, the SnapMirror relationships must be
deleted.
• You can delete a complete SnapLock Enterprise volume or a file in a SnapLock Enterprise
volume; however, you cannot delete only the data within a file in a SnapLock Enterprise
volume.
• You cannot delete a SnapLock Compliance volume if data is committed to the volume.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Volumes tab.
4. Select the volumes that you want to delete, and then click Delete.
5. Select the confirmation check box, and then click Delete.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
6. In the Create FlexClone Volume dialog box, type the name of the FlexClone volume that you
want to create.
7. Optional: If you want to enable thin provisioning for the new FlexClone volume, select Thin
Provisioning.
By default, this setting is the same as that of the parent volume.
8. Create a new Snapshot copy or select an existing Snapshot copy that you want to use as the
base Snapshot copy for creating the new FlexClone volume.
9. Click Clone.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
• If the FlexVol volume you want to restore contains a LUN, the LUN must be unmounted or
unmapped.
• There must be enough available space for the restored volume.
• Users accessing the volume must be notified that you are going to revert a volume, and that the
data from the selected Snapshot copy replaces the current data in the volume.
About this task
• If the volume contains junction points to other volumes, the volumes mounted on these
junction points will not be restored.
• You cannot restore Snapshot copies for SnapLock Compliance volumes.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Volumes tab.
4. Select the volume that you want to restore from a Snapshot copy.
5. Click Actions > Manage Snapshots > Restore.
6. Select the appropriate Snapshot copy, and then click Restore.
7. Select the confirmation check box, and then click Restore.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
Resizing volumes
When your volume reaches nearly full capacity, you can increase the size of the volume, delete
some Snapshot copies, or adjust the Snapshot reserve. You can use the Volume Resize wizard in
System Manager to provide more free space.
About this task
• For a volume that is configured to grow automatically, you can modify the limit to which the
volume can grow automatically, based on the increased size of the volume.
• You cannot resize a data protection volume if its mirror relationship is broken or if a reverse
resynchronization operation has been performed on the volume.
Instead, you must use the command-line interface (CLI).
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Volumes tab.
4. Select the volume that you want to resize.
5. Click Actions > Resize.
6. Type or select information as prompted by the wizard.
7. Confirm the details, and then click Finish to complete the wizard.
8. Verify the changes you made to the available space and total space of the volume in the
Volumes window.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
Related reference
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
7. Click Move.
8. Optional: Click the link that specifies the number of volumes to review the list of selected
volumes, and then click Discard if you want to remove any volumes from the list.
The link is displayed only when multiple volumes are selected.
9. Click OK.
• You can use System Manager only to view FlexGroup volume relationships.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Volumes tab.
4. Select the volume for which you want to create a mirror relationship, and then click Actions
> Protect.
The Protect option is available only for a read/write volume.
5. In the Create Protection Relationship dialog box, select Mirror from the Relationship
Type drop-down list.
6. Optional: Select the Create version-flexible mirror relationship check box to create a
mirror relationship that is independent of the ONTAP version running on the source cluster
and destination cluster, and to back up the Snapshot copies from the source volume.
If you select this option, the SnapLock volumes will not be displayed.
7. Specify the cluster, the SVM, and the destination volume.
8. If the selected SVM is not peered, use the Authenticate link to enter the credentials of the
remote cluster and create the SVM peer relationship.
9. Optional: Enter an alias name for the remote SVM in the Enter Alias Name for SVM dialog
box.
10. For FlexVol volumes, create a new destination volume or select an existing volume:
• The capacity of the destination volume must be greater than or equal to the capacity of the
source volume.
• If autogrow is disabled, the free space on the destination volume must be at least five percent
more than the used space on the source volume.
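The destination-volume sizing rules above can be sketched as a simple check. This is an illustrative model only (the function and parameter names are assumptions of this sketch); System Manager applies these checks itself:

```python
def destination_volume_ok(src_capacity_gb, src_used_gb,
                          dst_capacity_gb, dst_free_gb,
                          autogrow_enabled):
    """Check the FlexVol destination sizing rules described above.

    Illustrative sketch only; names and units are assumptions.
    """
    # Rule 1: the destination capacity must be greater than or
    # equal to the capacity of the source volume.
    if dst_capacity_gb < src_capacity_gb:
        return False
    # Rule 2: if autogrow is disabled, free space on the destination
    # must be at least five percent more than the used space on the
    # source volume.
    if not autogrow_enabled and dst_free_gb < src_used_gb * 1.05:
        return False
    return True
```

For example, a 100 GB destination with 50 GB free passes for a 100 GB source using 40 GB, but fails if the source already uses 60 GB (60 × 1.05 = 63 GB free would be required).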
About this task
• System Manager does not support a cascade relationship.
For example, a destination volume in a relationship cannot be the source volume in another
relationship.
• You can create a vault relationship only between a non-SnapLock (primary) volume and a
SnapLock destination (secondary) volume.
• You can use System Manager only to view the FlexGroup volume relationships.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Volumes tab.
4. Select the volume for which you want to create a vault relationship, and then click Actions >
Protect.
The Protect option is available only for a read/write volume.
5. In the Create Protection Relationship dialog box, select Vault from the Relationship Type
drop-down list.
6. Specify the cluster, the SVM, and the destination volume.
7. If the selected SVM is not peered, use the Authenticate link to enter the credentials of the
remote cluster, and create an SVM peer relationship.
8. Optional: Enter an alias name for the remote SVM in the Enter Alias Name for SVM dialog
box.
9. Create a new destination volume or select an existing volume:
10. If you are creating a SnapLock volume, specify the default retention period.
The default retention period can be set to any value from 1 day through 70 years, or to
Infinite.
11. Select an existing policy or create a new policy:
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Volumes window on page 204
You can use the Volumes window to manage your volumes and to display information about these
volumes.
• Set services (for example, Snapshot copy schedules) differently for individual FlexVol
volumes.
• Minimize interruptions in data availability by taking individual FlexVol volumes offline to
perform administrative tasks on them while the other FlexVol volumes remain online.
• Save time by backing up and restoring individual FlexVol volumes instead of all the file
systems an aggregate contains.
Snapshot configuration
You can configure Snapshot copies by setting a schedule to an existing Snapshot policy. You can
have a maximum of 255 Snapshot copies of a FlexVol volume. You can change the maximum
number of Snapshot copies for a Snapshot policy's schedule.
The total size of volumes with a guarantee of none can exceed the size of the aggregate,
although the amount of space that can actually be used is limited by the size of the
aggregate.
Writes to LUNs or files (including space-reserved LUNs and files) contained by that volume
could fail if the containing aggregate does not have enough available space to accommodate
the write.
When space in the aggregate is allocated for a volume guarantee for an existing volume, that
space is no longer considered free in the aggregate, even if the volume is not yet using the space.
Operations that consume free space in the aggregate, such as creation of aggregate Snapshot
copies or creation of new volumes in the containing aggregate, can occur only if there is enough
available free space in that aggregate; these operations are prevented from using space already
allocated to another volume.
When the free space in an aggregate is exhausted, only writes to volumes or files in that aggregate
with preallocated space are guaranteed to succeed.
Guarantees are honored only for online volumes. If you take a volume offline, any allocated but
unused space for that volume becomes available for other volumes in that aggregate. When you try
to bring that volume back online, if there is insufficient available space in the aggregate to fulfill
its guarantee, it will remain offline. You must force the volume online, at which point the volume's
guarantee will be disabled.
Related information
NetApp Technical Report 3965: NetApp Thin Provisioning Deployment and Implementation
Guide Data ONTAP 8.1 (7-Mode)
Flexible volumes are no longer bound by the limitations of the disks on which they reside. A
FlexVol volume can be sized based on how much data you want to store in it, rather than on the
size of your disk. This flexibility enables you to maximize the performance and capacity
utilization of the storage systems. Because FlexVol volumes can access all available physical
storage in the system, improvements in storage utilization are possible.
Example
A 500-GB volume is allocated with only 100 GB of actual data; the remaining 400 GB allocated
has no data stored in it. This unused capacity is assigned to a business application, even though the
application might not need all 400 GB until later. The allocated but unused 400 GB of excess
capacity is temporarily wasted.
With thin provisioning, the storage administrator provisions 500 GB to the business application
but uses only 100 GB for the data. The difference is that with thin provisioning, the unused 400
GB is still available to other applications. This approach allows the application to grow
transparently, and the physical storage is fully allocated only when the application needs it. The
rest of the storage remains in the free pool to be used as needed.
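The arithmetic of the example above can be restated in a few lines:

```python
# Thick vs. thin provisioning for the 500 GB application above.
provisioned_gb = 500
used_gb = 100

# Thick provisioning: the full 500 GB is allocated up front, so the
# unused 400 GB is unavailable to other applications.
thick_wasted_gb = provisioned_gb - used_gb

# Thin provisioning: only the blocks holding data consume physical
# space; the remainder stays in the aggregate's free pool.
thin_physical_gb = used_gb
free_pool_gb = provisioned_gb - used_gb

print(thick_wasted_gb, thin_physical_gb, free_pool_gb)  # 400 100 400
```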
• Snapshot copies are a point-in-time, read-only view of a data volume, which consume minimal
storage space.
Two Snapshot copies created in sequence differ only by the blocks added or changed in the
time interval between the two. This block incremental behavior limits the associated
consumption of storage capacity.
• Deduplication saves storage space by eliminating redundant data blocks within a FlexVol
volume.
• Data compression stores more data in less space and reduces the time and bandwidth required
to replicate data during volume SnapMirror transfers.
You have to choose the type of compression (inline or background) based on your requirements
and the configuration of your storage system. Inline compression checks whether data can be
compressed, compresses it, and then writes the data to the volume. Background compression
runs on all the files, irrespective of whether a file is compressible, after all the data is
written to the volume.
• SnapMirror technology is a flexible solution for replicating data over local area, wide area, and
Fibre Channel networks.
It can serve as a critical component in implementing enterprise data protection strategies. You
can replicate your data to one or more storage systems to minimize downtime costs in case of a
production site failure. You can also use SnapMirror technology to centralize the backup of
data to disks from multiple data centers.
• FlexClone technology copies data volumes, files, and LUNs as instant virtual copies.
A FlexClone volume, file, or LUN is a writable point-in-time image of the FlexVol volume or
another FlexClone volume, file, or LUN. This technology enables you to use space efficiently,
storing only data that changes between the parent and the clone.
• The unified architecture integrates multiprotocol support to enable both file-based and block-
based storage on a single platform.
With FlexArray Virtualization, you can virtualize your entire storage infrastructure under one
interface, and you can apply all the preceding efficiencies to your non-NetApp systems.
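The block-incremental Snapshot behavior described in the first bullet above can be modeled as a dictionary diff. This is a simplification for illustration (snapshots are modeled as maps of block number to contents), not how WAFL actually tracks blocks:

```python
def snapshot_delta(snap_a, snap_b):
    """Blocks added or changed between two Snapshot copies taken in
    sequence -- the only blocks the newer copy consumes, per the
    block-incremental behavior described above."""
    return {n: b for n, b in snap_b.items() if snap_a.get(n) != b}

snap1 = {0: b"aaa", 1: b"bbb", 2: b"ccc"}
snap2 = {0: b"aaa", 1: b"BBB", 2: b"ccc", 3: b"ddd"}  # one change, one addition
print(sorted(snapshot_delta(snap1, snap2)))  # [1, 3]
```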
When you run deduplication on a FlexVol volume that contains uncompressed data, it scans all the
uncompressed blocks in the FlexVol volume and creates a digital fingerprint for each of the blocks.
Note: If a FlexVol volume has compressed data, but the compression option is disabled on that
volume, then you might lose the space savings when you run the sis undo command.
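The fingerprinting step described above can be sketched as follows. This is a simplified model (SHA-256 is an arbitrary choice for this sketch); actual ONTAP deduplication verifies candidate blocks byte for byte before sharing them:

```python
import hashlib

BLOCK_SIZE = 4096  # WAFL blocks are 4 KB

def dedup_blocks(data: bytes):
    """Fingerprint each block; blocks whose fingerprint has already
    been seen are stored once and referenced from the layout."""
    store = {}    # fingerprint -> unique block contents
    layout = []   # per-block fingerprint references
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        layout.append(fp)
    return store, layout

# Four logical blocks, but only two distinct block contents:
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, layout = dedup_blocks(data)
print(len(layout), len(store))  # 4 2
```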
Autogrow
You can specify the limit to which the volume can grow automatically, if required.
The move is not disruptive to client access because the time in which client access is blocked ends
before clients notice a disruption and time out. Client access is blocked for 35 seconds by default.
If the volume move operation cannot finish in the time that access is denied, the system aborts this
final phase of the volume move operation and allows client access. The system attempts the final
phase three times by default. After the third attempt, the system waits an hour before attempting
the final phase sequence again. The system runs the final phase of the volume move operation until
the volume move is complete.
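The retry behavior of the final phase can be modeled as a nested loop. The `attempt` and `sleep` callables are injected so the timing can be simulated (an assumption of this sketch); the defaults are the documented values of three attempts per cycle and a one-hour wait:

```python
def run_cutover(attempt, sleep, attempts_per_cycle=3, retry_wait_s=3600):
    """Model of the final-phase retry behavior described above: up to
    three attempts per cycle, then an hour's wait, repeated until the
    move completes. Returns the number of hour-long waits needed."""
    waits = 0
    while True:
        for _ in range(attempts_per_cycle):
            if attempt():
                return waits
        sleep(retry_wait_s)
        waits += 1

# Simulate a cutover that succeeds on the 8th attempt: two full
# failed cycles (two waits), then success in the third cycle.
outcomes = iter([False] * 7 + [True])
print(run_cutover(lambda: next(outcomes), lambda s: None))  # 2
```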
Volumes window
You can use the Volumes window to manage your volumes and to display information about these
volumes.
You cannot view or manage volumes that are in Storage Virtual Machines (SVMs) that are
configured for disaster recovery (DR) by using System Manager. You must use the command-line
interface instead.
Note: The command buttons and list of columns will differ based on the type of volume
selected. You will be able to view only those command buttons and columns that are applicable
for the selected volume.
Command buttons
Create
Provides the following options:
Create FlexVol
Opens the Create Volume dialog box, which enables you to add FlexVol volumes.
Create FlexGroup
Opens the Create FlexGroup window, which enables you to create FlexGroup
volumes.
Edit
Enables you to edit the properties of the selected volume.
Delete
Deletes the selected volume or volumes.
Actions
Provides the following options:
Change status to
Changes the status of the selected volume to one of the following statuses:
• Online
• Offline
• Restrict
Resize
Enables you to change the size of the volume.
For FlexGroup volumes, you can resize by using existing resources or expand by
adding new resources.
Protect
Opens the Create Protection Relationship dialog box, which enables you to create
data protection relationships between a source volume and a destination volume.
Manage Snapshots
Provides a list of Snapshot options, including the following:
• Create
Displays the Create Snapshot dialog box, which you can use to create a Snapshot
copy of the selected volume.
• Configure
Configures the Snapshot settings.
• Restore
Restores a Snapshot copy of the selected volume.
Clone
Provides a list of clone options, including the following:
• Create
Creates a clone of the selected volume or a clone of a file from the selected
volume.
• Split
Splits the clone from the parent volume.
• View Hierarchy
Displays information about the clone hierarchy.
Storage Efficiency
Opens the Storage Efficiency dialog box, which you can use to manually start
deduplication or to abort a running deduplication operation. This button is displayed
only if deduplication is enabled on the storage system.
Move
Opens the Move Volume dialog box, which you can use to move volumes from one
aggregate or node to another aggregate or node within the same SVM.
Storage QoS
Opens the Quality of Service details dialog box, which you can use to assign one or
more volumes to a new or existing policy group.
Provision Storage for VMware
Enables you to create a volume for the NFS datastore and specify the ESX servers
that can access the NFS datastore.
Refresh
Updates the information in the window.
Volume list
Status
Displays the status of the volume.
Name
Displays the name of the volume.
Style
Displays the type of the volume.
Aggregates
Displays the name of the aggregates belonging to the volume.
Thin Provisioned
Displays whether space guarantee is set for the selected volume. Valid values for online
volumes are Yes and No.
Type
Displays the type of volume: rw for read/write, ls for load sharing, or dp for data
protection.
Root volume
Displays whether the volume is a root volume.
Available Space
Displays the available space in the volume.
Total Space
Displays the total space in the volume, which includes the space that is reserved for
Snapshot copies.
% Used
Displays the amount of space (in percentage) that is used in the volume.
Storage Efficiency
Displays whether deduplication is enabled or disabled for the selected volume.
Encrypted
Displays whether the volume is encrypted or not.
Policy Group
Displays the name of the Storage QoS policy group to which the volume is assigned. By
default, this column is hidden.
SnapLock Type
Displays the SnapLock type of the volume.
Clone
Displays whether the volume is a FlexClone volume.
Is Volume Moving
Displays whether a volume is being moved from one aggregate to another aggregate, or
from one node to another node.
Tiering Policy
Displays the tiering policy of a FabricPool. The default tiering policy is "snapshot-only".
Application
Displays the name of the application that is assigned to the volume.
Details area
You can expand the volume to view information about the selected volume. You can click Show
More Details to view detailed information about the selected volume.
Overview tab
Displays general information about the selected volume, and displays a pictorial
representation of the space allocation of the volume, the protection status of the volume,
and the performance of the volume. It also displays information about a volume that is
being moved, such as the state and phase of the volume move, the destination node and
aggregate to which the volume is being moved, the percentage of volume move that is
complete, the estimated time to complete the volume move operation, and details of the
volume move operation.
You can use System Manager to enable storage efficiency and configure deduplication and data
compression or only deduplication on a volume to save storage space. If you have not enabled
storage efficiency when you created the volume, you can do so later by editing the volume.
Changing the deduplication schedule on page 184
You can use System Manager to change the deduplication schedule by choosing to run
deduplication manually, automatically, or on a schedule that you specify.
Running deduplication operations on page 184
You can use System Manager to run deduplication immediately after creating a volume, or
schedule deduplication to run at a specified time.
Splitting a FlexClone volume from its parent volume on page 176
If you want a FlexClone volume to have its own disk space rather than using that of its parent
volume, you can split the volume from its parent by using System Manager. After the split, the
FlexClone volume becomes a normal FlexVol volume.
Resizing volumes on page 182
When your volume reaches nearly full capacity, you can increase the size of the volume, delete
some Snapshot copies, or adjust the Snapshot reserve. You can use the Volume Resize wizard in
System Manager to provide more free space.
Restoring a volume from a Snapshot copy on page 179
You can use System Manager to restore a volume to a state recorded in a previously created
Snapshot copy to retrieve lost information. When you restore a Snapshot copy, the restore
operation overwrites the existing volume configuration. Any changes made to the data in the
volume after the Snapshot copy was created are lost.
Scheduling automatic Snapshot copies on page 179
You can use System Manager to set up a schedule for creating automatic Snapshot copies of a
volume. You can specify the time and frequency of creating the copies and specify the number of
Snapshot copies that are saved.
Renaming Snapshot copies on page 181
You can use System Manager to rename a Snapshot copy to help you organize and manage your
Snapshot copies.
Hiding the Snapshot copy directory on page 179
You can use System Manager to hide the Snapshot copy directory (.snapshot) so that it is not
visible when you view your volume directories. By default, the .snapshot directory is visible.
Viewing the FlexClone volume hierarchy on page 176
You can use System Manager to view the hierarchy of FlexClone volumes and their parent
volumes.
Creating FlexGroup volumes on page 194
You can use System Manager to create a FlexGroup volume by selecting specific aggregates or by
selecting system recommended aggregates.
Editing FlexGroup volumes on page 195
You can use System Manager to edit the properties of an existing FlexGroup volume.
Resizing FlexGroup volumes on page 195
You can use System Manager to resize a FlexGroup volume by resizing existing resources or
adding new resources.
Changing the status of a FlexGroup volume on page 196
You can use System Manager to change the status of a FlexGroup volume when you want to take
the FlexGroup volume offline, bring it back online, or restrict access to the FlexGroup volume.
Deleting FlexGroup volumes on page 196
You can use System Manager to delete a FlexGroup volume when you no longer require the
FlexGroup volume.
Viewing FlexGroup volume information on page 197
You can use System Manager to view information about a FlexGroup volume. You can view a
graphical representation of the space allocated, the protection status, and the performance of the
FlexGroup volume.
Namespace
You can use the Namespace window in System Manager to mount or unmount FlexVol volumes to
a junction in the SVM namespace.
Mounting volumes
You can use System Manager to mount volumes to a junction in the Storage Virtual Machine
(SVM) namespace.
About this task
• If you mount the volume to a junction path with a language setting that is different from that of
the immediate parent volume in the path, NFSv3 clients cannot access some of the files
because some characters might not be decoded correctly.
This issue does not occur if the immediate parent directory is the root volume.
• You can mount a SnapLock volume only under the root of the SVM.
• You cannot mount a regular volume under a SnapLock volume.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Namespace tab.
4. Click Mount, and then select the volume that is to be mounted.
5. Optional: If you want to change the default junction name, specify a new name.
6. Click Browse, and then select a junction path to mount the volume.
7. Click OK, and then click Mount.
8. Verify the new junction path in the Details tab.
Namespace window
You can use the Namespace window to manage the NAS namespace of Storage Virtual Machines
(SVMs).
Command buttons
Mount
Opens the Mount Volume dialog box, which enables you to mount a volume to the junction
in an SVM namespace.
Unmount
Opens the Unmount Volume dialog box, which enables you to unmount a volume from its
parent volume.
Change Export Policy
Opens the Change Export Policy dialog box, which enables you to change the existing
export policy associated with the volume.
Refresh
Updates the information in the window.
Namespace list
Path
Specifies the junction path of the mounted volume. You can expand the junction path to
view the related volumes and qtrees.
Storage Object
Specifies the name of the volume mounted on the junction path. You can also view the
qtrees that the volume contains.
Export Policy
Specifies the export policy of the mounted volume.
Security Style
Specifies the security style for the volume. Possible values include UNIX (for UNIX mode
bits), NTFS (for CIFS ACLs), and Mixed (for mixed NFS and CIFS permissions).
Details tab
Displays general information about the selected volume or qtree, such as the name, type of storage
object, junction path of the mounted object, and export policy. If the selected object is a qtree,
details about the space hard limit, space soft limit, and space usage are displayed.
Shares
You can use System Manager to create, edit, and manage shares.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Shares tab.
4. From the list of shares, select the share that you want to stop sharing and click Stop Sharing.
5. Select the confirmation check box and click Stop.
6. Verify that the share is no longer listed in the Shares window.
Related reference
Shares window on page 215
You can use the Shares window to manage your shares and display information about them.
To make the share name unique across all home directories, the share name must contain
either the %w or the %u variable. The share name can contain both the %d and the %w variable
(for example, %d/%w), or the share name can contain a static portion and a variable portion
(for example, home_%w).
Share path
This is the relative path, which is defined by the share and is therefore associated with one
of the share names, that is appended to each search path to generate the user's entire home
directory path from the root of the SVM. It can be static (for example, home), dynamic (for
example, %w), or a combination of the two (for example, eng/%w).
Search paths
This is the set of absolute paths from the root of the SVM that you specify that directs the
ONTAP search for home directories. You can specify one or more search paths by using the
vserver cifs home-directory search-path add command. If you specify
multiple search paths, ONTAP tries them in the order specified until it finds a valid path.
Directory
This is the user's home directory that you create for the user. The directory name is usually
the user's name. You must create the home directory in one of the directories that are
defined by the search paths.
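Taken together, the share name, share path, and search paths can be sketched as a path-resolution function. The variable expansion (here `%u` is treated as equal to the Windows user name) and the `exists` predicate are simplifying assumptions of this sketch:

```python
def expand_home_share(template: str, user: str, domain: str) -> str:
    """Expand the home-directory variables described above:
    %w -> Windows user name, %d -> domain name, %u -> mapped UNIX
    user name (assumed equal to the Windows name in this sketch)."""
    return (template.replace("%w", user)
                    .replace("%d", domain)
                    .replace("%u", user))

def resolve_home_directory(search_paths, share_path, user, domain, exists):
    """Append the expanded share path to each search path in order,
    returning the first path that exists -- mirroring how ONTAP tries
    the search paths in the order specified until it finds a valid
    path. `exists` is a caller-supplied predicate."""
    relative = expand_home_share(share_path, user, domain)
    for base in search_paths:
        candidate = f"{base}/{relative}"
        if exists(candidate):
            return candidate
    return None
```

For example, with search paths `/vol/home1` and `/vol/home2` and share path `eng/%w`, user `alice` resolves to `/vol/home2/eng/alice` if only that directory exists.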
Shares window
You can use the Shares window to manage your shares and display information about them.
• Command buttons
• Shares list
• Details area
Command buttons
Create Share
Opens the Create Share dialog box, which enables you to create a share.
Create Home Directory
Opens the Create Home Directory Share dialog box, which enables you to create a new
home directory share.
Edit
Opens the Edit Settings dialog box, which enables you to modify the properties of a
selected share.
Stop Sharing
Stops the selected object from being shared.
Refresh
Updates the information in the window.
Shares list
The shares list displays the name and path of each share.
Share Name
Displays the name of the share.
Path
Displays the complete path name of an existing folder, qtree, or volume that is shared. Path
separators can be backward or forward slashes, although Data ONTAP displays them as
forward slashes.
Home Directory
Displays the name of the home directory share.
Comment
Displays any description for the share.
Continuously Available Share
Displays whether the share is enabled for continuous availability.
Details area
The area below the shares list displays the share properties and the access rights for each share.
Properties
• Name
Displays the name of the share.
• Oplocks status
Specifies if the share uses opportunistic locks (oplocks).
• Browsable
Specifies whether the share can be browsed by Windows clients.
• Show Snapshot
Specifies whether Snapshot copies can be viewed by clients.
• Continuously Available Share
Specifies whether the share is enabled or disabled for continuous availability.
• Access-Based Enumeration
Specifies whether access-based enumeration (ABE) is enabled or disabled on the share.
• BranchCache
Specifies whether BranchCache is enabled or disabled on the share.
• SMB Encryption
Specifies whether data encryption using SMB 3.0 is enabled at the SVM level or at the
share level. If SMB encryption is enabled at the SVM level, it applies to all the shares,
and the value is shown as Enabled (at the SVM level).
Share access control
Displays the access rights of the domain users and groups and local users and groups for the
share.
Related tasks
Creating a CIFS share on page 212
You can use System Manager to create a share that enables you to specify a folder, qtree, or
volume that CIFS users can access.
Stopping share access on page 212
You can use System Manager to stop a share when you want to remove the shared network access
to a folder, qtree, or volume.
Editing share settings on page 213
You can use System Manager to modify the settings of a share, such as the symbolic link settings,
share access permissions of users or groups, and the type of access to the share. You can also
enable or disable continuous availability of a share over Hyper-V, and enable or disable access-
based enumeration (ABE).
LUNs
You can use System Manager to manage LUNs.
You can access all the LUNs in the cluster by using the LUNs tab or you can access the LUNs
specific to the SVM by using SVMs > LUNs.
Note: The LUNs tab is displayed only if you have enabled the FC/FCoE and iSCSI licenses.
Related information
SAN administration
SQL
Specify the number of databases and the size of each database.
Other
a. Specify the name and size of each LUN.
b. If you want to create more LUNs, click Add more LUNs, and then specify
the name and size for each LUN.
Data, log, binary, and temporary LUNs are created based on the selected application type.
3. In the Map to these Initiators area, perform these steps:
a. Specify the initiator group name and the type of operating system.
b. Add the host initiator WWPN by selecting it from the drop-down list or by typing the
initiator in the text box.
Only one initiator group is created.
4. Click Create.
A summary table is displayed with the LUNs that are created.
5. Click Close.
Related information
NetApp Documentation: ONTAP 9
The following table provides information about how space is provisioned for the default values of
SQL:

Node   Aggregate    LUN type  Volume name  LUN name     Formula for LUN size  LUN size (GB)
node1  node1_aggr1  data      db01_data01  db01_data01  Database size ÷ 8     125
                    data      db01_data02  db01_data02  Database size ÷ 8     125
                    data      db01_data03  db01_data03  Database size ÷ 8     125
                    data      db01_data04  db01_data04  Database size ÷ 8     125
                    data      db02_data01  db02_data01  Database size ÷ 8     125
                    data      db02_data02  db02_data02  Database size ÷ 8     125
                    data      db02_data03  db02_data03  Database size ÷ 8     125
                    data      db02_data04  db02_data04  Database size ÷ 8     125
                    log       db01_log     db01_log     Database size ÷ 20    50
                    temp      sql_temp     sql_temp     Database size ÷ 3     330
node2  node2_aggr1  data      db01_data05  db01_data05  Database size ÷ 8     125
                    data      db01_data06  db01_data06  Database size ÷ 8     125
                    data      db01_data07  db01_data07  Database size ÷ 8     125
                    data      db01_data08  db01_data08  Database size ÷ 8     125
                    data      db02_data05  db02_data05  Database size ÷ 8     125
                    data      db02_data06  db02_data06  Database size ÷ 8     125
                    data      db02_data07  db02_data07  Database size ÷ 8     125
                    data      db02_data08  db02_data08  Database size ÷ 8     125
                    log       db02_log     db02_log     Database size ÷ 20    50
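The "Formula for LUN size" column can be checked arithmetically. The sketch below assumes a 1,000 GB database, which is the size that reproduces the default SQL values shown (the assumed size is not stated explicitly in the table):

```python
# Assumed database size that reproduces the default SQL LUN sizes
# shown in the table above (an assumption of this sketch).
db_size_gb = 1000

sql_lun_size_gb = {
    "data": db_size_gb / 8,   # 125 GB, matching the table
    "log":  db_size_gb / 20,  # 50 GB, matching the table
    "temp": db_size_gb / 3,   # ~333 GB; the table shows this rounded to 330
}
print(sql_lun_size_gb)
```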
The following table provides information about how space is provisioned for the default values of
Oracle:

Node   Aggregate    LUN type  Volume name  LUN name       Formula for LUN size  LUN size (GB)
node1  node1_aggr1  data      ora_vol01    ora_lundata01  Database size ÷ 8     250
                    data      ora_vol02    ora_lundata02  Database size ÷ 8     250
                    data      ora_vol03    ora_lundata03  Database size ÷ 8     250
                    data      ora_vol04    ora_lundata04  Database size ÷ 8     250
                    log       ora_vol05    ora_lunlog1    Database size ÷ 40    50
                    binaries  ora_vol06    ora_orabin1    Database size ÷ 40    50
node2  node2_aggr1  data      ora_vol07    ora_lundata05  Database size ÷ 8     250
                    data      ora_vol08    ora_lundata06  Database size ÷ 8     250
                    data      ora_vol09    ora_lundata07  Database size ÷ 8     250
                    data      ora_vol10    ora_lundata08  Database size ÷ 8     250
                    log       ora_vol11    ora_lunlog2    Database size ÷ 40    50
For Oracle RAC, LUNs are provisioned for grid files. Only two RAC nodes are supported for
Oracle RAC.
The following table provides information about how space is provisioned for the default values of
Oracle RAC:
Node   Aggregate    LUN type  Volume name  LUN name       Formula for LUN size  LUN size (GB)
node1  node1_aggr1  data      ora_vol01    ora_lundata01  Database size ÷ 8     250
                    data      ora_vol02    ora_lundata02  Database size ÷ 8     250
                    data      ora_vol03    ora_lundata03  Database size ÷ 8     250
                    data      ora_vol04    ora_lundata04  Database size ÷ 8     250
                    log       ora_vol05    ora_lunlog1    Database size ÷ 40    50
                    binaries  ora_vol06    ora_orabin1    Database size ÷ 40    50
                    grid      ora_vol07    ora_lungrid1   10 GB                 10
node2  node2_aggr1  data      ora_vol08    ora_lundata05  Database size ÷ 8     250
                    data      ora_vol09    ora_lundata06  Database size ÷ 8     250
                    data      ora_vol10    ora_lundata07  Database size ÷ 8     250
                    data      ora_vol11    ora_lundata08  Database size ÷ 8     250
                    log       ora_vol12    ora_lunlog2    Database size ÷ 40    50
                    binaries  ora_vol13    ora_orabin2    Database size ÷ 40    50
Creating LUNs
You can use System Manager to create LUNs for an existing aggregate, volume, or qtree when
there is available free space. You can create a LUN in an existing volume or create a new FlexVol
volume for the LUN. You can also enable Storage Quality of Service (QoS) to manage the
workload performance.
About this task
If you specify the LUN ID, System Manager checks the validity of the LUN ID before adding it. If
you do not specify a LUN ID, Data ONTAP automatically assigns one.
Before you select the LUN multiprotocol type, you should consider the guidelines for using
each type.
In a MetroCluster configuration, System Manager displays only the following aggregates for
creating FlexVol volumes for the LUN:
• In normal mode, when you create volumes on sync-source SVMs or data-serving SVMs in the
primary site, only those aggregates that belong to the cluster in the primary site are displayed.
• In switched-over mode, when you create volumes on sync-destination SVMs or data-serving
SVMs in the surviving site, only switched-over aggregates are displayed.
Steps
1. Click the LUNs tab.
Select an existing policy group
a. Select Existing Policy Group, and then click Choose to select an existing
policy group from the Select Policy Group dialog box.
b. Specify the minimum throughput limit.
If you do not specify a minimum throughput value, or if the minimum
throughput value is set to 0, the system automatically displays "None" as
the value. This value is case-sensitive.
c. Specify the maximum throughput limit to ensure that the workload of the
objects in the policy group does not exceed the specified throughput limit.
• The minimum throughput limit and the maximum throughput limit must
be of the same unit type.
• If you do not specify the minimum throughput limit, then you can set
the maximum throughput limit in IOPS, B/s, KB/s, MB/s, and so on.
• If you do not specify the maximum throughput value, the system
automatically displays "Unlimited" as the value. This value is case-
sensitive. The unit that you specify does not affect the maximum
throughput.
If the policy group is assigned to more than one object, the maximum
throughput that you specify is shared among the objects.
9. Review the specified details in the LUN summary window, and then click Next.
10. Confirm the details, and then click Finish to complete the wizard.
Related concepts
Guidelines for using LUN multiprotocol type on page 232
The LUN multiprotocol type, or operating system type, specifies the operating system of the host
accessing the LUN. It also determines the layout of data on the LUN, and the minimum and
maximum size of the LUN.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
Deleting LUNs
You can use System Manager to delete LUNs and return the space used by the LUNs to their
containing aggregates or volumes.
Before you begin
• The LUN must be offline.
• The LUN must be unmapped from all initiator hosts.
Steps
1. Click the LUNs tab.
2. In the LUN Management tab, select one or more LUNs that you want to delete, and then click
Delete.
3. Select the confirmation check box, and then click Delete.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
Adding initiators
You can use System Manager to add initiators to an initiator group. An initiator provides access to
a LUN when the initiator group that it belongs to is mapped to that LUN.
Steps
1. Click the LUNs tab.
2. In the LUN Management tab, select the initiator group to which you want to add initiators and
click Edit.
3. In the Edit Initiator Group dialog box, click Initiators.
4. Click Add.
5. Specify the initiator name and click OK.
6. Click Save and Close.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
4. Select and delete the initiator from the text box and click Save.
The initiator is disassociated from the initiator group.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
Cloning LUNs
LUN clones enable you to create multiple readable and writable copies of a LUN. You can use
System Manager to create a temporary copy of a LUN for testing, or to make a copy of your data
available to additional users without providing them access to the production data.
Before you begin
• You must have installed the FlexClone license on the storage system.
• When space reservation is disabled on a LUN, the volume that contains the LUN must have
enough space to accommodate changes to the clone.
About this task
• When you create a LUN clone, automatic deletion of the LUN clone is enabled by default in
System Manager. As a result, the LUN clone is deleted when Data ONTAP triggers automatic
deletion to conserve space.
• You cannot clone LUNs on SnapLock volumes.
Steps
1. Click the LUNs tab.
2. In the LUN Management tab, select the LUN that you want to clone, and then click Clone.
3. Optional: If you want to change the default name, specify a new name.
4. Click Clone.
5. Verify that the LUN clone you created is listed in the LUNs window.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
Editing LUNs
You can use the LUN properties dialog box in System Manager to change the name, description,
size, space reservation setting, or the mapped initiator hosts of a LUN.
Steps
1. Click the LUNs tab.
2. In the LUN Management tab, select the LUN that you want to edit from the list of LUNs, and
click Edit.
3. Make the required changes.
4. Click Save and Close.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
Moving LUNs
You can use System Manager to move a LUN from its containing volume to another volume or
qtree within a Storage Virtual Machine (SVM). You can move the LUN to a volume that is hosted
on an aggregate containing high-performance disks, thereby improving the performance when
accessing the LUN.
About this task
• You cannot move a LUN to a qtree within the same volume.
• If you have created a LUN from a file using the command-line interface (CLI), you cannot
move the LUN using System Manager.
• The LUN move operation is nondisruptive; it can be performed when the LUN is online and
serving data.
• You cannot use System Manager to move the LUN if the allocated space in the destination
volume is not sufficient to contain the LUN, even if autogrow is enabled on the volume.
You should use the CLI instead.
• You cannot move LUNs on SnapLock volumes.
Steps
1. Click the LUNs tab.
2. In the LUN Management tab, select the LUN that you want to move from the list of LUNs,
and then click Move.
3. Optional: In the Move Options area of the Move LUN dialog box, specify a new name for the
LUN if you want to change the default name.
4. Select the storage object to which you want to move the LUN and perform one of the following
actions:
An existing volume or qtree
a. Select a volume to which you want to move the LUN.
b. If the selected volume contains any qtrees, select the qtree to which you want to move the LUN.
5. Click Move.
6. Confirm the LUN move operation, and click Continue.
For a brief period of time, the LUN is displayed on both the origin and destination volume.
After the move operation is complete, the LUN is displayed on the destination volume.
The destination volume or qtree is displayed as the new container path for the LUN.
6. Optional: Click the link that specifies the number of LUNs to review the list of selected LUNs,
and click Discard if you want to remove any LUNs from the list.
The link is displayed only when multiple LUNs are selected.
7. Click OK.
Editing initiators
You can use the Edit Initiator Group dialog box in System Manager to change the name of an
existing initiator in an initiator group.
Steps
1. Click the LUNs tab.
2. In the Initiator Groups tab, select the initiator group to which the initiator belongs, and then
click Edit.
3. In the Edit Initiator Group dialog box, click Initiators.
4. Select the initiator that you want to edit and click Edit.
5. Change the name and click OK.
6. Click Save and Close.
Related reference
LUNs window on page 235
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
When a LUN has space reservations disabled (a non-space-reserved LUN), no space is set aside
for that LUN at creation time. The storage required by any write operation to the LUN is allocated
from the volume when it is needed, provided sufficient free space is available.
If a space-reserved LUN is created in a none-guaranteed volume, the LUN behaves the same as a
non-space-reserved LUN. This is because a none-guaranteed volume has no space to allocate to
the LUN; the volume itself can only allocate space as it is written to, due to its none guarantee.
Therefore, creating a space-reserved LUN in a none-guaranteed volume is not recommended;
this configuration combination might appear to provide write guarantees that are in fact
impossible.
When the space reserve is set to "Default", the ONTAP space reservation settings apply to the
LUNs. ONTAP space reservation settings also apply to the container volumes if new volumes are
created.
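The space-reservation behavior described above can be modeled in a few lines. This is an illustrative sketch under stated assumptions (the function names and the simplified free-space accounting are not an ONTAP API):

```python
def effective_reservation(lun_space_reserved, volume_guarantee):
    """A space-reserved LUN in a volume with a 'none' guarantee behaves as
    non-space-reserved, because the volume itself only acquires space as it
    is written to."""
    if lun_space_reserved and volume_guarantee == "none":
        return False  # the reservation cannot actually be honored
    return lun_space_reserved

def write_to_lun(free_space_in_volume, write_size, reserved_effective):
    """For a non-space-reserved LUN, each write allocates from the volume's
    free space at write time, so it fails when free space is insufficient.
    For an effectively reserved LUN, space was set aside at creation time."""
    if reserved_effective:
        return True
    return write_size <= free_space_in_volume
```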
VMware
If you are using an ESX Server and your LUNs will be configured with VMFS.
Note: If you configure the LUNs with RDM, you can use the guest operating system as the LUN multiprotocol type.
Windows 2003 MBR
If your host operating system is Windows Server 2003 using the MBR partitioning method.
Related tasks
Creating LUNs on page 221
You can use System Manager to create LUNs for an existing aggregate, volume, or qtree when
there is available free space. You can create a LUN in an existing volume or create a new FlexVol
volume for the LUN. You can also enable Storage Quality of Service (QoS) to manage the
workload performance.
Related information
NetApp Interoperability
Solaris Host Utilities 6.1 Installation and Setup Guide
Solaris Host Utilities 6.1 Quick Command Reference
Solaris Host Utilities 6.1 Release Notes
Resizing a LUN
You can resize a LUN to be bigger or smaller than its original size. When you resize a LUN, you
have to perform the steps on the host side that are recommended for the host type and the
application that is using the LUN.
Initiator hosts
Initiator hosts can access the LUNs mapped to them. When you map a LUN on a storage system to
the igroup, you grant all the initiators in that group access to that LUN. If a host is not a member
of an igroup that is mapped to a LUN, that host does not have access to the LUN.
VMware RDM
When you perform raw device mapping (RDM) on VMware, the operating system type of the
LUN must be the operating system type of the guest operating system.
igroup name
The igroup name is a case-sensitive name that must satisfy several requirements.
The igroup name:
• Contains 1 to 96 characters. Spaces are not allowed.
• Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (“-”),
underscore (“_”), colon (“:”), and period (“.”).
• Must start with a letter or number.
The name you assign to an igroup is independent of the name of the host that is used by the host
operating system, host files, or Domain Name Service (DNS). If you name an igroup aix1, for
example, it is not mapped to the actual IP host name (DNS name) of the host.
Note: You might find it useful to provide meaningful names for igroups, ones that describe the
hosts that can access the LUNs mapped to them.
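The naming rules above translate directly into a single regular expression. The following validator is a sketch (the function name is an assumption, not a NetApp API); it encodes exactly the constraints stated: 1 to 96 characters, letters, digits, "-", "_", ":", ".", no spaces, starting with a letter or number.

```python
import re

# First character: letter or digit; up to 95 more characters from the
# allowed set, for a total length of 1-96. Names are case-sensitive, so
# both upper- and lowercase letters are kept distinct.
IGROUP_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_:.-]{0,95}$")

def is_valid_igroup_name(name):
    """Return True if the name satisfies the igroup naming rules."""
    return bool(IGROUP_NAME_RE.match(name))
```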
igroup type
The igroup type can be mixed type, iSCSI, or FC/FCoE.
igroup ostype
The ostype indicates the type of host operating system used by all of the initiators in the igroup.
All initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris,
windows, hpux, aix, netware, xen, hyper_v, vmware, and linux.
You can limit access to LUN1 by using a port set. In the following example, initiator1 can access
LUN1 only through LIF1. However, initiator1 cannot access LUN1 through LIF2 because LIF2 is
not in port set1.
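The access rule in this example can be sketched as a simple check. This is an illustrative model of the described behavior, not an ONTAP interface; the function and parameter names are assumptions.

```python
def can_access_lun(initiator, lif, igroup_initiators, portset_lifs=None):
    """An initiator reaches a LUN only if it belongs to the igroup that is
    mapped to the LUN, and, when a port set is bound to that igroup, only
    through a LIF that the port set contains."""
    if initiator not in igroup_initiators:
        return False  # host is not in an igroup mapped to the LUN
    if portset_lifs is not None and lif not in portset_lifs:
        return False  # port set restricts access to its own LIFs
    return True
```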
LUNs window
You can use the LUNs window to create and manage LUNs and to display information about
LUNs. You can also add, edit, or delete initiator groups and initiator IDs.
• LUN Management tab
• Initiator Groups tab
• Portsets tab
LUN Management tab
This tab enables you to create, clone, delete, move, or edit the settings of LUNs. You can also
assign LUNs to a Storage Quality of Service (QoS) policy group.
Command buttons
Create
Opens the Create LUN wizard, which enables you to create LUNs.
In a cluster on an All Flash FAS platform that does not contain any existing LUNs, the
Create FC SAN optimized LUNs dialog box is opened, which enables you to set up one or
more FC SAN optimized LUNs.
Clone
Opens the Clone LUN dialog box, which enables you to clone the selected LUNs.
Edit
Opens the Edit LUN dialog box, which enables you to edit the settings of the selected LUN.
Delete
Deletes the selected LUN.
Status
Enables you to change the status of the selected LUN to either Online or Offline.
Move
Opens the Move LUN dialog box, which enables you to move the selected LUN to a new
volume or an existing volume or qtree within the same Storage Virtual Machine (SVM).
Storage QoS
Opens the Quality of Service details dialog box, which enables you to assign one or more
LUNs to a new or existing policy group.
Refresh
Updates the information in the window.
LUNs list
Name
Displays the name of the LUN.
SVM
Displays the name of the Storage Virtual Machine (SVM) in which the LUN is created.
Container Path
Displays the name of the file system (volume or qtree) that contains the LUN.
Space Reservation
Specifies whether space reservation is enabled or disabled.
Available Size
Displays the space available in the LUN.
Total Size
Displays the total space in the LUN.
%Used
Displays the total space (in percentage) that is used.
Type
Specifies the LUN type.
Status
Specifies the status of the LUN.
Policy Group
Displays the name of the Storage QoS policy group to which the LUN is assigned. By
default, this column is hidden.
Application
Displays the name of the application that is assigned to the LUN.
Details area
The area below the LUNs list displays details related to the selected LUN.
Details tab
Displays details related to the LUN such as the LUN serial number, whether the LUN is a
clone, LUN description, the policy group to which the LUN is assigned, minimum
throughput of the policy group, maximum throughput of the policy group, details about the
LUN move operation, and the application assigned to the LUN. You can also view details
about the initiator groups and initiators that are associated with the selected LUN.
Performance tab
Displays performance metrics graphs of the LUNs, including data rate, IOPS, and response
time.
Changing the client time zone or the cluster time zone impacts the performance metrics
graphs. Refresh your browser to see the updated graphs.
Initiator Groups tab
This tab enables you to create, delete, or edit the settings of initiator groups and initiator IDs.
Command buttons
Create
Opens the Create Initiator Group dialog box, which enables you to create initiator groups to
control host access to specific LUNs.
Edit
Opens the Edit Initiator Group dialog box, which enables you to edit the settings of the
selected initiator group.
Delete
Deletes the selected initiator group.
Refresh
Updates the information in the window.
Initiator Groups list
Name
Displays the name of the initiator group.
Type
Specifies the type of protocol supported by the initiator group. The supported protocols are
iSCSI, FC/FCoE, or Mixed (iSCSI and FC/FCoE).
Operating System
Specifies the operating system for the initiator group.
Portset
Displays the port set that is associated with the initiator group.
Initiator Count
Displays the number of initiators added to the initiator group.
Details area
The area below the Initiator Groups list displays details about the initiators that are added to the
selected initiator group and the LUNs that are mapped to the initiator group.
Portsets tab
This tab enables you to create, delete, or edit the settings of port sets.
Command buttons
Create
Opens the Create Portset dialog box, which enables you to create port sets to limit access to
your LUNs.
Edit
Opens the Edit Portset dialog box, which enables you to select the network interfaces that
you want to associate with the port set.
Delete
Deletes the selected port set.
Refresh
Updates the information in the window.
Portsets list
Portset Name
Displays the name of the port set.
Type
Specifies the type of protocol supported by the port set. The supported protocols are iSCSI,
FC/FCoE, or Mixed (iSCSI and FC/FCoE).
Interface Count
Displays the number of network interfaces that are associated with the port set.
Initiator Group Count
Displays the number of initiator groups that are associated with the port set.
Details area
The area below the Portsets list displays details about the network interfaces and initiator groups
associated with the selected port set.
Related tasks
Creating LUNs on page 221
You can use System Manager to create LUNs for an existing aggregate, volume, or qtree when
there is available free space. You can create a LUN in an existing volume or create a new FlexVol
volume for the LUN. You can also enable Storage Quality of Service (QoS) to manage the
workload performance.
Deleting LUNs on page 223
You can use System Manager to delete LUNs and return the space used by the LUNs to their
containing aggregates or volumes.
Creating initiator groups on page 223
You can use System Manager to create an initiator group. Initiator groups enable you to control
host access to specific LUNs. You can use port sets to limit which LIFs an initiator can access.
Deleting initiator groups on page 224
You can use the Initiator Groups tab in System Manager to delete initiator groups.
Adding initiators on page 224
You can use System Manager to add initiators to an initiator group. An initiator provides access to
a LUN when the initiator group that it belongs to is mapped to that LUN.
Deleting initiators from an initiator group on page 224
You can use the Initiator Groups tab in System Manager to delete an initiator.
Editing LUNs on page 226
You can use the LUN properties dialog box in System Manager to change the name, description,
size, space reservation setting, or the mapped initiator hosts of a LUN.
Editing initiator groups on page 229
You can use the Edit Initiator Group dialog box in System Manager to change the name of an
existing initiator group and its operating system. You can add initiators to or remove initiators
from the initiator group. You can also change the port set associated with the initiator group.
Editing initiators on page 229
You can use the Edit Initiator Group dialog box in System Manager to change the name of an
existing initiator in an initiator group.
Bringing LUNs online on page 226
You can use the LUN Management tab in System Manager to bring selected LUNs online and
make them available to the host.
Taking LUNs offline on page 226
You can use the LUN Management tab in System Manager to take selected LUNs offline and
make them unavailable for block protocol access.
Cloning LUNs on page 225
LUN clones enable you to create multiple readable and writable copies of a LUN. You can use
System Manager to create a temporary copy of a LUN for testing, or to make a copy of your data
available to additional users without providing them access to the production data.
Qtrees
You can use System Manager to create, edit, and delete qtrees.
Related information
ONTAP concepts
Logical storage management
NFS management
CIFS management
Creating qtrees
Qtrees enable you to manage and partition your data within the volume. You can use the Create
Qtree dialog box in System Manager to add a new qtree to a volume on your storage system.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Qtrees tab.
4. Click Create.
5. In the Details tab of the Create Qtree dialog box, type a name for the qtree.
Deleting qtrees
You can delete a qtree and reclaim the disk space it uses within a volume by using System
Manager. When you delete a qtree, all quotas applicable to that qtree are no longer applied by
Data ONTAP.
Before you begin
• The qtree status must be normal.
• The qtree must not contain any LUN.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Qtrees tab.
4. In the Qtrees window, select one or more qtrees that you want to delete, and then click Delete.
5. Select the confirmation check box, and then click Delete.
6. Verify that the qtree you deleted is no longer included in the list of qtrees in the Qtrees
window.
Related reference
Qtrees window on page 243
You can use the Qtrees window to create, display, and manage information about qtrees.
Editing qtrees
You can use System Manager to modify the properties of a qtree, such as the security style, enable
or disable opportunistic locks (oplocks), or assign a new or existing export policy.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Qtrees tab.
4. Select the qtree that you want to edit and click Edit.
5. In the Edit Qtree dialog box, edit the following properties:
• Oplocks
• Security style
• Export policy
6. Click Save.
7. Verify the changes you made to the selected qtree in the Qtrees window.
Related reference
Qtrees window on page 243
You can use the Qtrees window to create, display, and manage information about qtrees.
What a qtree is
A qtree is a logically defined file system that can exist as a special subdirectory of the root
directory within a FlexVol volume. You can create up to 4995 qtrees per volume. There is no
maximum limit for the storage system as a whole. You can create qtrees for managing and
partitioning your data within the volume. Qtrees are available only for FlexVol volumes.
In general, qtrees are similar to volumes. However, they have the following key differences:
• Snapshot copies can be enabled or disabled for individual volumes but not for individual
qtrees.
• Qtrees do not support space reservations or space guarantees.
There are no restrictions on how much disk space can be used by the qtree or how many files can
exist in the qtree.
Qtree options
A qtree is a logically defined file system that can exist as a special subdirectory of the root
directory within a FlexVol volume and is used to manage and partition data within the volume.
You can specify the following options when creating a qtree:
• Name of the qtree
• Volume in which you want the qtree to reside
• Oplocks
By default, oplocks are enabled for the qtree. If you disable oplocks for the entire storage
system, oplocks are not set even if you enable oplocks on a per-qtree basis.
• Security style
The security style can be UNIX, NTFS, or Mixed (UNIX and NTFS). By default, the security
style of the qtree is the same as that of the selected volume.
• Export policy
Create a new export policy or select an existing policy. By default, the export policy of the
qtree is the same as that of the selected volume.
• Space usage limits for qtree and user quotas
Security styles
Storage systems running the Data ONTAP operating system support different security styles
for a storage object. By default, the security style of a qtree is the same as that of the root
directory of the volume.
UNIX
The user's UID and GID, and the UNIX-style permission bits of the file or directory
determine user access. The storage system uses the same method for determining access for
both NFS and CIFS requests.
If you change the security style of a qtree or a volume from NTFS to UNIX, the storage
system disregards the Windows NT permissions that were established when the qtree or
volume used the NTFS security style.
NTFS
For CIFS requests, Windows NT permissions determine user access. For NFS requests, the
storage system generates and stores a set of UNIX-style permission bits that are at least as
restrictive as the Windows NT permissions.
The storage system grants NFS access only if the UNIX-style permission bits allow the user
access.
If you change the security style of a qtree or a volume from UNIX to NTFS, files created
before the change do not have Windows NT permissions. For these files, the storage system
uses only the UNIX-style permission bits to determine access.
Mixed
Some files in the qtree or volume have the UNIX security style and some have the NTFS
security style. A file's security style depends on whether the permission was last set from
CIFS or NFS.
For example, if a file currently uses the UNIX security style and a CIFS user sends a set-
ACL request to the file, the file's security style is changed to NTFS. If a file currently uses
the NTFS security style and an NFS user sends a set-permission request to the file, the file's
security style is changed to UNIX.
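The mixed-style behavior in this example can be summarized as a tiny state transition. This is a sketch of the described rule only (the function and argument names are assumptions):

```python
def update_file_security_style(current_style, request_protocol, request_type):
    """In a mixed-security-style qtree or volume, a file's effective style
    follows the protocol that last set its permissions: a CIFS set-ACL
    request makes it NTFS, an NFS set-permission request makes it UNIX."""
    if request_type == "set-permissions":
        return "NTFS" if request_protocol == "CIFS" else "UNIX"
    return current_style  # reads and data writes do not change the style
```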
Qtrees window
You can use the Qtrees window to create, display, and manage information about qtrees.
• Command buttons on page 243
• Qtree list on page 243
• Details area on page 244
Command buttons
Create
Opens the Create Qtree dialog box, which enables you to create a new qtree.
Edit
Opens the Edit Qtree dialog box, which enables you to change the security style and to
enable or disable oplocks (opportunistic locks) on a qtree.
Change Export Policy
Opens the Export Policy dialog box, which enables you to assign one or more qtrees to new
or existing export policies.
Delete
Deletes the selected qtree.
This button is disabled unless the status of the selected qtree is normal.
Refresh
Updates the information in the window.
Qtree list
The qtree list displays the volume in which the qtree resides and the qtree name.
Name
Displays the name of the qtree.
Volume
Displays the name of the volume in which the qtree resides.
Security Style
Specifies the security style of the qtree.
Status
Specifies the current status of the qtree.
Oplocks
Specifies whether the oplocks setting is enabled or disabled for the qtree.
Export Policy
Displays the name of the export policy to which the qtree is assigned.
Details area
Details tab
Displays detailed information about the selected qtree, such as the mount path of the
volume containing the qtree, details about the export policy, and the export policy rules.
Related tasks
Creating qtrees on page 239
Qtrees enable you to manage and partition your data within the volume. You can use the Create
Qtree dialog box in System Manager to add a new qtree to a volume on your storage system.
Deleting qtrees on page 240
You can delete a qtree and reclaim the disk space it uses within a volume by using System
Manager. When you delete a qtree, all quotas applicable to that qtree are no longer applied by
Data ONTAP.
Editing qtrees on page 241
You can use System Manager to modify the properties of a qtree, such as the security style, enable
or disable opportunistic locks (oplocks), or assign a new or existing export policy.
Quotas
You can use System Manager to create, edit, and delete quotas.
Related information
Logical storage management
Creating quotas
Quotas enable you to restrict or track the disk space and number of files used by a user, group, or
qtree. You can use the Add Quota wizard in System Manager to create a quota and apply it to a
specific volume or qtree.
About this task
In System Manager, the minimum value that you can specify for hard and soft limits on the
number of files that the quota can own is one thousand. If you want to specify a value lower than
one thousand, you should use the command-line interface (CLI).
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Quotas tab.
4. In the User Defined Quotas tab, click Create.
The Create Quota Wizard is displayed.
5. Type or select information as prompted by the wizard.
6. Confirm the details, and then click Finish to complete the wizard.
After you finish
You can use the local user name or RID to create user quotas. If you create the user quota or group
quota using the user name or group name, then the /etc/passwd file and /etc/group file
must be updated, respectively.
Related reference
Quotas window on page 248
You can use the Quotas window to create, display, and manage information about quotas.
Deleting quotas
You can use System Manager to delete one or more quotas as your users and their storage
requirements and limitations change.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Quotas tab.
4. Select one or more quotas that you want to delete and click Delete.
5. Select the confirmation check box, and then click Delete.
Related reference
Quotas window on page 248
You can use the Quotas window to create, display, and manage information about quotas.
Resizing quotas
You can use the Resize Quota dialog box in System Manager to adjust the active quotas in the
specified volume so that they reflect the changes that you have made to a quota.
Before you begin
Quotas must be enabled for the volumes for which you want to resize quotas.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the Quotas tab.
4. In the Quota Status on Volumes tab of the Quotas window, select one or more volumes for
which you want to resize the quotas.
5. Click Resize.
Related reference
Quotas window on page 248
You can use the Quotas window to create, display, and manage information about quotas.
If you want to view details of all the quotas that you created, click the User Defined Quotas tab.
If you want to view the details of the quotas that are currently active, click the Quota Report tab.
5. Select the quota that you want to view information about from the displayed list of quotas.
6. Review the quota details.
Types of quotas
Quotas can be classified on the basis of the targets they are applied to.
The following are the types of quotas based on the targets they are applied to:
User quota
The target is a user.
The user can be represented by a UNIX user name, a UNIX UID, a Windows SID, a file or
directory whose UID matches the user, a Windows user name in pre-Windows 2000 format,
or a file or directory with an ACL owned by the user's SID. You can apply it to a volume
or a qtree.
Group quota
The target is a group.
The group is represented by a UNIX group name, a GID, or a file or directory whose GID
matches the group. Data ONTAP does not apply group quotas based on a Windows ID. You
can apply it to a volume or a qtree.
Qtree quota
The target is a qtree, specified by the path name to the qtree.
You can determine the size of the target qtree.
Default quota
Automatically applies a quota limit to a large set of quota targets without creating separate
quotas for each target.
Default quotas can be applied to all three types of quota target (users, groups, and qtrees).
The quota type is determined by the value of the type field.
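Default-quota resolution can be sketched as a simple lookup: an explicit quota for a target wins, and the default quota for that quota type applies otherwise. The data structures here are illustrative, not an ONTAP interface.

```python
def resolve_quota(explicit_quotas, default_quotas, quota_type, target):
    """Return the quota limit that applies to a target.

    explicit_quotas: dict mapping (quota_type, target) -> limit
    default_quotas:  dict mapping quota_type -> limit (users/groups/qtrees)
    """
    key = (quota_type, target)
    if key in explicit_quotas:
        return explicit_quotas[key]  # a separate quota exists for the target
    # Otherwise the default quota for this type covers the target, if any.
    return default_quotas.get(quota_type)
```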
Quota limits
You can apply a disk space limit or limit the number of files for each quota type. If you do not
specify a limit for a quota, none is applied.
Disk space soft limit
Disk space limit applied to soft quotas.
Disk space hard limit
Disk space limit applied to hard quotas.
Threshold limit
Disk space limit applied to threshold quotas.
Files soft limit
The maximum number of files on a soft quota.
Files hard limit
The maximum number of files on a hard quota.
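The limit kinds above differ in effect: a hard limit blocks the operation that would exceed it, while soft and threshold limits only trigger notifications. The following classifier is a hedged sketch of that behavior (the function name and return strings are assumptions); an unspecified limit (None) is simply not enforced, matching "If you do not specify a limit for a quota, none is applied."

```python
def classify_disk_usage(used, soft_limit=None, threshold=None, hard_limit=None):
    """Classify disk usage against hard, threshold, and soft limits."""
    if hard_limit is not None and used > hard_limit:
        return "rejected"            # the write would exceed the hard limit
    if threshold is not None and used > threshold:
        return "threshold-exceeded"  # notification only
    if soft_limit is not None and used > soft_limit:
        return "soft-exceeded"       # notification only
    return "ok"
```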
Quota management
System Manager includes several features that help you to create, edit, or delete quotas. You can
create a user, group, or tree quota and you can specify quota limits at the disk and file levels. All
quotas are established on a per-volume basis.
After creating a quota, you can perform the following tasks:
• Enable and disable quotas
• Resize quotas
Example
The following example shows how a change in the security style of a qtree results in a different
user being charged for the usage of a file in the particular qtree.
Suppose NTFS security is in effect on qtree A, and an ACL gives Windows user corp\joe
ownership of a 5 MB file. User corp\joe is charged with 5 MB of disk space usage for qtree A.
Now you change the security style of qtree A from NTFS to UNIX. After quotas are reinitialized,
Windows user corp\joe is no longer charged for this file; instead, the UNIX user corresponding to
the UID of the file is charged for the file. The UID could be a UNIX user mapped to corp\joe or
the root user.
Quotas window
You can use the Quotas window to create, display, and manage information about quotas.
• Tabs
• Command buttons
• User Defined Quotas list
• Details area
Tabs
User Defined Quotas
You can use the User Defined Quotas tab to view details of the quotas that you create and
to create, edit, or delete quotas.
Quota Report
You can use the Quota Report tab to view the space and file usage and to edit the space and
file limits of quotas that are active.
Quota Status on Volumes
You can use the Quota Status on Volumes tab to view the status of a quota and to turn
quotas on or off and to resize quotas.
Command buttons
Create
Opens the Create Quota wizard, which enables you to create quotas.
Edit Limits
Opens the Edit Limits dialog box, which enables you to edit settings of the selected quota.
Delete
Deletes the selected quota from the quotas list.
Refresh
Updates the information in the window.
User Defined Quotas list
The quotas list displays the name and storage information for each quota.
Volume
Specifies the volume to which the quota is applied.
Qtree
Specifies the qtree associated with the quota. "All Qtrees" indicates that the quota is
associated with all the qtrees.
Type
Specifies the quota type: user, group, or tree.
User/Group
Specifies a user or a group associated with the quota. "All Users" indicates that the quota is
associated with all the users. "All Groups" indicates that the quota is associated with all the
groups.
Quota Target
Specifies the type of target that the quota is assigned to. The target can be qtree, user, or
group.
Space Hard Limit
Specifies the disk space limit applied to hard quotas.
This field is hidden by default.
Space Soft Limit
Specifies the disk space limit applied to soft quotas.
This field is hidden by default.
Threshold
Specifies the disk space limit applied to threshold quotas.
This field is hidden by default.
CIFS protocol
You can use System Manager to enable and configure CIFS servers to allow CIFS clients to access
files on the cluster.
Related information
CIFS management
Setting up CIFS
You can use System Manager to enable and configure CIFS servers to allow CIFS clients to access
files on the cluster.
Before you begin
• The CIFS license must be installed on your storage system.
• While configuring CIFS in the Active Directory domain, the following requirements must be
met:
◦ DNS must be enabled and configured correctly.
◦ The storage system must be able to communicate with the domain controller using the fully
qualified domain name (FQDN).
◦ The time difference (clock skew) between the cluster and the domain controller must not be
more than five minutes.
• If CIFS is the only protocol configured on the Storage Virtual Machine (SVM), the following
requirements must be met:
◦ The root volume security style must be NTFS.
By default, System Manager sets the security style as UNIX.
◦ Superuser access must be set to Any for CIFS protocol.
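The five-minute clock-skew prerequisite above corresponds to the default Kerberos clock-skew tolerance. A minimal sketch of the check, assuming you have already obtained both timestamps (the `skew_ok` function is a hypothetical name):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # default Kerberos clock-skew tolerance

def skew_ok(cluster_time: datetime, dc_time: datetime,
            max_skew: timedelta = MAX_SKEW) -> bool:
    """Return True if the absolute time difference between the cluster
    and the domain controller is within the allowed skew."""
    return abs(cluster_time - dc_time) <= max_skew
```

If this check fails, CIFS setup in the Active Directory domain is likely to fail with Kerberos authentication errors; correcting NTP configuration on the cluster is the usual remedy.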
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the Configuration tab, click Setup.
5. In the General tab of the CIFS Server Setup dialog box, specify the NetBIOS name and the
Active Directory domain details.
6. Click the Options tab and perform the following actions:
• In the SMB settings area, select or clear the SMB signing and SMB encryption check boxes
as required.
• Specify the default UNIX user.
• In the WINS Servers area, add the required IP address.
7. Click Setup.
Related tasks
Creating a CIFS share on page 212
You can use System Manager to create a share that enables you to specify a folder, qtree, or
volume that CIFS users can access.
Editing the volume properties on page 171
You can modify volume properties such as the volume name, security style, fractional reserve, and
space guarantee by using System Manager. You can modify storage efficiency settings
(deduplication schedule and policy, and compression) and space reclamation settings.
Modifying export policy rules on page 272
You can use System Manager to modify the specified client, access protocols, and access
permissions of an export policy rule.
Related reference
CIFS window on page 258
You can use the CIFS window to configure the CIFS server, manage domain controllers, manage
symbolic UNIX mappings, and configure BranchCache.
• IP address
• Enable or disable SMB signing
Enabling SMB signing helps to ensure that the data is not compromised. However, you
might encounter performance degradation in the form of increased CPU usage on both the
clients and the server, although the network traffic remains the same. You can disable SMB
signing on any of your Windows clients that do not require protection against replay
attacks.
For information about disabling SMB signing on Windows clients, see the Microsoft
Windows documentation.
• Enable or disable SMB 3.0 encryption
6. Click either Save or Save and Close.
Related reference
CIFS window on page 258
You can use the CIFS window to configure the CIFS server, manage domain controllers, manage
symbolic UNIX mappings, and configure BranchCache.
Setting up BranchCache
You can use System Manager to configure BranchCache on a CIFS-enabled Storage Virtual
Machine (SVM) to enable caching of content on computers local to requesting clients.
Before you begin
• CIFS must be licensed and a CIFS server must be configured.
• For BranchCache version 1, SMB 2.1 or later must be enabled.
• For BranchCache version 2, SMB 3.0 must be enabled and the remote Windows clients must
support BranchCache 2.
About this task
• You can configure BranchCache on SVMs with FlexVol volumes.
• You can create an all-shares BranchCache configuration if you want to offer caching services
for all the content contained within all the SMB shares on the CIFS server.
• You can create a per-share BranchCache configuration if you want to offer caching services for
the content contained within selected SMB shares on the CIFS server.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the BranchCache tab, click Set Up.
5. In the BranchCache Setup dialog box, enter the following information:
a. Specify the path to the hash store.
The path can be to an existing directory where you want the hash data to be stored. The
destination path must be read-writable. Read-only paths, such as Snapshot directories, are
not allowed.
b. Specify the maximum size (in KB, MB, GB, TB, or PB) for a hash data store.
If the hash data exceeds this value, older hashes are deleted to provide space for newer
hashes. The default size for the hash store is 1 GB.
c. Specify the operating mode for the BranchCache configuration.
The default operating mode is set to all shares.
d. Specify a server key to prevent clients from impersonating the BranchCache server.
You can set the server key to a specific value so that if multiple servers are providing
BranchCache data for the same files, clients can use hashes from any server using that same
server key. If the server key contains any spaces, you must enclose the server key in
quotation marks.
6. Click Save.
7. Verify that the domain controller you added is displayed in the list of preferred domain
controllers.
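The hash-store size limit described in step 5b (older hashes are deleted to make room for newer ones) can be sketched as a size-bounded store with oldest-first eviction. This is an illustrative sketch under that assumption; the `HashStore` class is a hypothetical name, not an ONTAP interface.

```python
from collections import OrderedDict

class HashStore:
    """Sketch of a size-bounded hash store: when adding an entry would
    exceed max_size bytes, the oldest entries are evicted first."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self.entries: OrderedDict[str, bytes] = OrderedDict()
        self.used = 0

    def put(self, key: str, blob: bytes) -> None:
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = blob
        self.used += len(blob)
        # Evict oldest hashes until we are back within the size limit.
        while self.used > self.max_size and len(self.entries) > 1:
            _, old = self.entries.popitem(last=False)
            self.used -= len(old)
```

For example, with a 10-byte store, adding three 5-byte entries evicts the oldest one.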
SMB concepts
Clients can access files on Storage Virtual Machines (SVMs) using the SMB protocol, provided
that Data ONTAP can properly authenticate the user.
When an SMB client connects to a CIFS server, Data ONTAP authenticates the user with a
Windows domain controller. Data ONTAP uses two methods to obtain the domain controllers to
use for authentication:
• It queries DNS servers in the domain that the CIFS server is configured to use for domain
controller information.
How ONTAP enables you to provide SMB client access to UNIX symbolic links
You must understand how ONTAP manages symbolic links before you provide symbolic link
access to SMB users connecting to the SVMs.
A symbolic link is a file that is created in a UNIX environment that contains a reference to another
file or directory. If a client accesses a symbolic link, the client is redirected to the target file or
directory to which the symbolic link refers.
ONTAP provides SMB clients the ability to follow UNIX symbolic links that are configured on
the SVM. This feature is optional, and you can configure it on a per-share basis with one of the
following settings:
• Enabled with read/write access
• Enabled with read-only access
• Disabled by hiding symbolic links from SMB clients
• Disabled with no access to symbolic links from SMB clients
There are two types of symbolic links: relative symbolic links and absolute symbolic links.
Relative
A relative symbolic link contains a reference to a file or directory relative to its parent
directory. Therefore, the path of the file that it is referring to should not begin with a slash
(/). If you enable symbolic links on a share, relative symbolic links work without further
configuration.
Absolute
An absolute symbolic link contains a reference to a file or directory in the form of an
absolute path. Therefore, the path of the file that it is referring to should begin with a slash
(/). It is treated as an absolute path location of the file from the root of the file system. An
absolute symbolic link can refer to a file or directory within or outside of the file system of
the symbolic link. If the target is not in the same local file system, the symbolic link is
called a widelink. If you enable symbolic links on a share, absolute symbolic links do not
work right away. You must first create a mapping between the UNIX path of the symbolic
link to the destination CIFS path. When creating absolute symbolic link mappings, you can
specify whether it is a local link or a widelink. If you create an absolute symbolic link to a
file or directory outside of the local share but set the locality to local, ONTAP disallows
access to the target.
Note that if a client attempts to delete a local symbolic link (absolute or relative), only the
symbolic link is deleted, not the target file or directory. However, if a client attempts to
delete a widelink, it might delete the actual target file or directory to which the widelink
refers. ONTAP does not have control over this because the client can explicitly open the
target file or directory outside the SVM and delete it.
CIFS window
You can use the CIFS window to configure the CIFS server, manage domain controllers, manage
symbolic UNIX mappings, and configure BranchCache.
Configuration tab
This tab enables you to create and manage the CIFS server.
Server
Specifies the status of the CIFS server, name of the server, authentication mode, and the
name of the active directory domain.
Home Directories
Specifies home directory paths and the style to determine how PC user names are mapped
to home directory entries.
Command buttons
• Setup
Opens the CIFS Setup wizard, which enables you to set up CIFS on your Storage
Virtual Machine (SVM).
• Options
Displays the CIFS Options dialog box, which enables you to enable or disable SMB 3.0
signing, enable or disable SMB 3.0 encryption, and add Windows Internet Name
Service (WINS) servers.
SMB signing ensures that the network traffic between the CIFS server and the client is
not compromised.
• Delete
Enables you to delete the CIFS server.
• Refresh
Updates the information in the window.
Domain tab
This tab enables you to view and reset your CIFS domain controllers, and to add or delete
preferred domain controllers. You can also use this tab to manage CIFS group policy
configurations.
Servers
Displays information about discovered authentication servers and your preferred domain
controllers on the CIFS-enabled SVM.
You can also reset the information about the discovered servers, add a preferred domain
controller, delete a domain controller, or refresh the list of domain controllers.
Group Policy
Enables you to view, enable, or disable group policy configurations on the CIFS server. You
can also reload a group policy if the status of the policy is changed.
Symlinks tab
This tab enables you to manage mappings of UNIX symbolic links for CIFS users.
Path Mappings
Displays the list of symbolic link mappings for CIFS.
Command buttons
• Create
Opens the Create New Symlink Path Mappings dialog box, which enables you to create
a UNIX symbolic link mapping.
• Edit
Opens the Edit Symlink Path Mappings dialog box, which enables you to modify the
CIFS share and path.
• Delete
Enables you to delete the symbolic link mapping.
• Refresh
Updates the information in the window.
BranchCache tab
This tab enables you to set up and manage BranchCache settings on CIFS-enabled SVMs with
FlexVol volumes.
You can view the status of the BranchCache service, path to the hash store, size of the hash store,
and the operating mode, server key, and version of BranchCache.
Command buttons
• Setup
Opens the BranchCache Setup dialog box, which enables you to configure BranchCache
for the CIFS server.
• Edit
Opens the Modify BranchCache Settings dialog box, which enables you to modify the
properties of the BranchCache configuration.
• Delete
Enables you to delete the BranchCache configuration.
• Refresh
Updates the information in the window.
Related tasks
Setting up CIFS on page 250
You can use System Manager to enable and configure CIFS servers to allow CIFS clients to access
files on the cluster.
Editing the general properties for CIFS on page 251
You can modify the general properties for CIFS, such as the default UNIX and Windows user by
using System Manager. You can also enable or disable SMB signing for the CIFS server.
Adding home directory paths on page 252
You can use System Manager to specify one or more paths that can be used by the storage system
to resolve the location of users' CIFS home directories.
Deleting home directory paths on page 252
You can use System Manager to delete a home directory path when you do not want the storage
system to use the path to resolve the location of users' CIFS home directories.
Resetting CIFS domain controllers on page 253
You can use System Manager to reset the CIFS connection to domain controllers for the specified
domain. Failure to reset the domain controller information can cause a connection failure.
NFS protocol
You can use System Manager to authenticate NFS clients to access data on the SVM.
Related information
NFS management
NFS window
You can use the NFS window to display and configure your NFS settings.
Server Status
Displays the status of the NFS service. The service is enabled if the NFS protocol is
configured on the Storage Virtual Machine (SVM).
Note: If you have upgraded to Data ONTAP 8.3 or later from an NFS-enabled storage
system running Data ONTAP 8.1.x, the NFS service is enabled in Data ONTAP 8.3 or
later. However, you must enable support for NFSv3 or NFSv4 because NFSv2 is no
longer supported.
Command buttons
Enable
Enables the NFS service.
Disable
Disables the NFS service.
Edit
Opens the Edit NFS Settings dialog box, which enables you to edit NFS settings.
Refresh
Updates the information in the window.
Related tasks
Editing NFS settings on page 260
You can use System Manager to edit the NFS settings, such as enabling or disabling NFSv3,
NFSv4, and NFSv4.1; enabling or disabling read and write delegations for NFSv4 clients; and
enabling NFSv4 ACLs. You can also edit the default Windows user.
iSCSI protocol
You can use System Manager to configure the iSCSI protocol that enables you to transfer block
data to hosts using SCSI protocol over TCP/IP.
Related information
SAN administration
LIFs are created on the most suitable adapters and assigned to portsets to ensure data path
redundancy.
Related reference
iSCSI window on page 267
You can use the iSCSI window to start or stop the iSCSI service, change a storage system iSCSI
node name, and create or change the iSCSI alias of a storage system. You can also add or change
the initiator security setting for an iSCSI initiator that is connected to your storage system.
What iSCSI is
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block
data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC
3720.
In an iSCSI network, storage systems are targets that have storage target devices, which are
referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running
iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI
protocol is implemented over the storage system’s standard Ethernet interfaces using a software
driver.
The connection between the initiator and target uses a standard TCP/IP network. No special
network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP
network, or it can be your regular public network. The storage system listens for iSCSI
connections on TCP port 3260.
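iSCSI node names conventionally use the IQN format (`iqn.yyyy-mm.reverse-domain[:identifier]`, as in the example node names ONTAP generates). A hedged sketch of a simplified format check, useful when scripting node-name changes; the regular expression below is a simplification of the full naming rules:

```python
import re

ISCSI_PORT = 3260  # TCP port on which the storage system listens

# Simplified IQN shape: "iqn." + year-month + "." + reversed domain,
# optionally followed by ":" and a subgroup identifier.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def is_valid_iqn(node_name: str) -> bool:
    """Return True if the node name looks like a well-formed IQN."""
    return IQN_RE.match(node_name) is not None
```

For example, `iqn.1992-08.com.netapp:sn.12345` passes the check, while an arbitrary string does not.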
Related information
RFC 3720: www.ietf.org/rfc/rfc3720.txt
Initiator security
You can select from the following authentication methods:
• none
There is no authentication for the initiator.
• deny
The initiator is denied access when it attempts to authenticate to the storage system.
• CHAP
The initiator logs in using a Challenge Handshake Authentication Protocol (CHAP) user name
and password. You can specify a CHAP password or generate a random password.
• default
The initiator uses the default security settings. The initial setting for default initiator security is
none.
In CHAP authentication, the storage system sends the initiator a challenge value. The initiator
responds with a value calculated using a one-way hash function. The storage system then checks
the response against its own version of the value calculated using the same one-way hash function.
If the values match, the authentication is successful.
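The challenge-response exchange described above can be sketched using the CHAP computation from RFC 1994, in which the one-way hash is MD5 over the session identifier, the shared secret, and the challenge. This is an illustrative sketch, not the storage system's implementation; the function names are hypothetical.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Initiator side: compute the one-way MD5 hash over the session
    identifier, the shared secret, and the challenge (RFC 1994)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def authenticate(identifier: int, secret: bytes, challenge: bytes,
                 response: bytes) -> bool:
    """Target side: recompute the expected value with the same one-way
    hash and compare it against the initiator's response."""
    return chap_response(identifier, secret, challenge) == response

# Exchange sketch: the target sends a random challenge value ...
challenge = os.urandom(16)
secret = b"shared-chap-password"
# ... and the initiator answers with the hashed response.
resp = chap_response(1, secret, challenge)
```

Because only the hash travels on the wire, the CHAP password itself is never sent to the target.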
iSCSI window
You can use the iSCSI window to start or stop the iSCSI service, change a storage system iSCSI
node name, and create or change the iSCSI alias of a storage system. You can also add or change
the initiator security setting for an iSCSI initiator that is connected to your storage system.
Tabs
Service
You can use the Service tab to start or stop the iSCSI service, change a storage system
iSCSI node name, and create or change the iSCSI alias of a storage system.
Initiator Security
You can use the Initiator Security tab to add or change the initiator security setting for an
iSCSI initiator that is connected to your storage system.
Command buttons
Edit
Opens the Edit iSCSI Service Configurations dialog box, which enables you to change the
iSCSI node name and the iSCSI alias of the storage system.
Start
Starts the iSCSI service.
Stop
Stops the iSCSI service.
Refresh
Updates the information in the window.
Details area
The details area displays information about the status of the iSCSI service, iSCSI target node
name, and iSCSI target alias. You can use this area to enable or disable the iSCSI service on a
network interface.
Related tasks
Creating iSCSI aliases on page 262
An iSCSI alias is a user-friendly identifier that you assign to an iSCSI target device (in this case,
the storage system) to make it easier to identify the target device in user interfaces. You can use
System Manager to create an iSCSI alias.
Enabling or disabling the iSCSI service on storage system interfaces on page 262
You can use System Manager to control which network interfaces are used for iSCSI
communication by enabling or disabling the interfaces. When the iSCSI service is enabled, iSCSI
connections and requests are accepted over those network interfaces that are enabled for iSCSI,
but not over disabled interfaces.
Adding the security method for iSCSI initiators on page 263
You can use System Manager to add an initiator and specify the security method that is used to
authenticate the initiator.
Editing default security settings on page 263
You can use the Edit Default Security dialog box in System Manager to edit the default security
settings for iSCSI initiators that are connected to the storage system.
Editing initiator security on page 264
The security style configured for an initiator specifies how the authentication is done for that
initiator during the iSCSI connection login phase. You can use System Manager to change the
security for selected iSCSI initiators by changing the authentication method.
Changing the default iSCSI initiator authentication method on page 264
You can use System Manager to change the default iSCSI authentication method, which is the
authentication method that is used for any initiator that is not configured with a specific
authentication method.
Setting the default security for iSCSI initiators on page 265
You can use System Manager to remove the authentication settings for an initiator and use the
default security method to authenticate the initiator.
Starting or stopping the iSCSI service on page 265
You can use System Manager to start or stop the iSCSI service on your storage system.
FC/FCoE protocol
You can use System Manager to configure FC/FCoE protocols.
Related information
SAN administration
What FC is
FC is a licensed service on the storage system that enables you to export LUNs and transfer block
data to hosts using the SCSI protocol over a Fibre Channel fabric.
FC/FCoE window
You can use the FC/FCoE window to start or stop the FC service.
Command buttons
Edit
Opens the Edit Node Name dialog box, which enables you to change the FC or FCoE node
name.
Start
Starts the FC/FCoE service.
Stop
Stops the FC/FCoE service.
Refresh
Updates the information in the window.
FC/FCoE details
The details area displays information about the status of the FC/FCoE service, the node name, and
the FC/FCoE adapters.
Related tasks
Starting or stopping the FC or FCoE service on page 268
The FC service enables you to manage FC target adapters for use with LUNs. You can use System
Manager to start the FC service to bring the adapters online and allow access to the LUNs on the
storage system. You can stop the FC service to take the FC adapters offline and prevent access to
the LUNs.
Changing an FC or FCoE node name on page 268
If you replace a storage system chassis and reuse it in the same Fibre Channel SAN, the node
name of the replaced storage system in certain cases might be duplicated. You can change the node
name of the storage system by using System Manager.
Configuring FC and FCoE protocols on SVMs on page 46
You can configure the FC and the FCoE protocols on the SVM for SAN hosts. LIFs are created on
the most suitable adapters and assigned to port sets to ensure data path redundancy. Based on your
requirements, you can configure either FC, FCoE, or both the protocols by using System Manager.
Export policies
You can use System Manager to create, edit, and manage export policies.
You can enter the IPv4 address 0.0.0.0/0 to provide access to all the hosts.
b. If you want to modify the rule index number, select the appropriate rule index number.
c. Select one or more access protocols.
If you do not select any access protocol, the default value "Any" is assigned to the export
rule.
d. Select one or more security types and access rules.
7. Click OK.
8. Verify that the export rule you added is displayed in the Export Rules tab for the selected
export policy.
• Assign the same export policy to multiple volumes or qtrees of the SVM for identical client
access control without having to create a new export policy for each volume or qtree.
If a client makes an access request that is not permitted by the applicable export policy, the request
fails with a permission-denied message. If a client does not match any rule in the export policy,
then access is denied. If an export policy is empty, then all accesses are implicitly denied.
You can modify an export policy dynamically on a system running Data ONTAP.
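The deny-by-default behavior described above can be sketched as follows. This is a simplified model, not ONTAP's full export-rule evaluation: the first rule whose client specification matches the client is applied, and a non-matching client or an empty policy means access is denied. The rule list and function name are hypothetical examples.

```python
import ipaddress

# Hypothetical export rules: (rule index, client match, allowed protocols).
RULES = [
    (1, "10.0.0.0/24", {"nfs"}),
    (2, "0.0.0.0/0", {"any"}),  # 0.0.0.0/0 matches all hosts
]

def evaluate_export(client_ip: str, protocol: str, rules=RULES) -> bool:
    """Apply the first rule (in index order) whose client specification
    matches; deny if its protocols exclude the request's protocol, if no
    rule matches, or if the policy is empty."""
    addr = ipaddress.ip_address(client_ip)
    for index, match, protocols in sorted(rules):
        if addr in ipaddress.ip_network(match):
            return "any" in protocols or protocol in protocols
    return False  # no matching rule, or empty policy: access denied
```

For example, a client in 10.0.0.0/24 is governed by rule 1 and can use only NFS, while any other host falls through to the catch-all 0.0.0.0/0 rule.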
Specifies the priority based on which the export rules are processed. You can use the
Move Up and Move Down buttons to choose the priority.
• Client
Specifies the client to which the rule applies.
• Access Protocols
Displays the access protocol that is specified for the export rule.
If you have not specified any access protocol, the default value "Any" is considered.
• Read-Only Rule
Specifies one or more security types for read-only access.
• Read/Write Rule
Specifies one or more security types for read/write access.
• Superuser Access
Specifies the security type or types for superuser access.
Assigned Objects tab
The Assigned Objects tab enables you to view the volumes and qtrees that are assigned to the
selected export policy. You can also view whether the volume is encrypted or not.
Efficiency policies
You can use System Manager to create, edit, and delete efficiency policies.
• Default
You can configure a volume with the efficiency policy to run the scheduled deduplication
operations on the volume.
• Inline-only
You can configure a volume with the inline-only efficiency policy and enable inline
compression, to run inline compression on the volume without any scheduled or manually
started background efficiency operations.
For more information about the inline-only and default efficiency policies, see the man pages.
• Background
Specifies that the QoS policy is running in the background, which reduces potential
performance impact on the client operations.
• Best-effort
Specifies that the QoS policy is running on a best-effort basis, which enables you to
maximize the utilization of system resources.
Maximum Runtime
Specifies the maximum run-time duration of an efficiency policy. If this value is not
specified, the efficiency policy runs until the operation is complete.
Details area
The area below the efficiency policy list displays additional information about the selected
efficiency policy, including the schedule name and the schedule details for a schedule-based
policy, and the threshold value for a threshold-based policy.
Protection policies
You can use System Manager to create, edit, and delete protection policies.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the Policies pane, click Protection Policies.
5. In the Protection Policies window, select the policy that you want to delete, and then click
Delete.
6. In the Delete Policy dialog box, click Delete.
Edit
Opens the Edit Policy dialog box, which enables you to edit a policy.
Delete
Opens the Delete Policy dialog box, which enables you to delete a policy.
Refresh
Updates the information in the window.
Protection policies list
Name
Displays the name of the protection policy.
Type
Displays the policy type, which can be Vault, Mirror Vault, or Asynchronous Mirror.
Comment
Displays the description specified for the policy.
Transfer Priority
Displays the data transfer priority, such as Normal or Low.
Details area
Policy Details tab
Displays details of the protection policy, such as the user who created the policy, number of
rules, retention count, and status of network compression.
Policy Rules tab
Displays details of the rules that are applied to the policy. The Policy Rules tab is displayed
only if the selected policy contains rules.
• If you do not specify the minimum throughput limit, then you can set the maximum
throughput limit in IOPS and B/s, KB/s, MB/s, and so on.
• If you do not specify the maximum throughput value, the system automatically displays
"Unlimited" as the value and this value is case-sensitive. The unit that you specify does not
affect the maximum throughput.
9. Click OK.
• LUNs
You assign a storage object to a policy group to control and monitor a workload. You can monitor
workloads without controlling them.
The following illustration shows an example environment before and after using Storage QoS. On
the left, workloads compete for cluster resources to transmit I/O. These workloads get "best effort"
performance, which means you have less performance predictability (for example, a workload
might get such good performance that it negatively impacts other workloads). On the right are the
same workloads assigned to policy groups. The policy groups enforce a maximum throughput
limit.
The -max-throughput parameter specifies the maximum throughput limit for the policy group
that the policy group must not exceed. The value of this parameter is specified in terms of IOPS or
MB/s, or a combination of comma-separated IOPS and MB/s values, and the range is zero to
infinity.
The units are base 10. There should be no space between the number and the unit. The default
value for the -max-throughput parameter is infinity, which is specified by the special value
INF.
Note: There is no default unit for the -max-throughput parameter. For all values except zero
and infinity, you must specify the unit.
The keyword "none" is available for a situation that requires the removal of a value. The keyword
"INF" is available for a situation that requires the maximum available value to be specified.
Examples of valid throughput specifications are: "100B/s", "10KB/s", "1gb/s", "500MB/s",
"1tb/s", "100iops", "100iops,400KB/s", and "800KB/s,100iops".
In the following illustration, the SVM vs3 is assigned to policy group pg2. You cannot assign
volumes vol4 or vol5 to a policy group because an object in the storage hierarchy (SVM vs3) is
assigned to a policy group.
Command buttons
Create
Opens the Create QoS Policy Group dialog box, which enables you to create new policy
groups.
Edit
Opens the Edit QoS Policy Group dialog box, which enables you to modify the selected
policy group.
Delete
Deletes the selected policy groups.
Refresh
Updates the information in the window.
QoS Policy Groups list
The QoS Policy Groups list displays the policy group name and the maximum throughput for each
policy group.
Name
Displays the name of the QoS policy group.
Minimum Throughput
Displays the minimum throughput limit specified for the policy group.
If you have not specified any minimum throughput value, the system automatically displays
"None" as the value and this value is case-sensitive.
Maximum Throughput
Displays the maximum throughput limit specified for the policy group.
If you have not specified any maximum throughput value, the system automatically displays
"Unlimited" as the value and this value is case-sensitive.
Storage Objects Count
Displays the number of storage objects assigned to the policy group.
Details area
The area below the QoS Policy Groups list displays detailed information about the selected policy
group.
Assigned Storage Objects tab
Displays the name and type of the storage object that is assigned to the selected policy
group.
NIS services
You can use System Manager to add, edit, and manage Network Information Service (NIS)
domains.
Related information
NFS configuration
NIS window
The NIS window enables you to view the current NIS settings for your storage system.
Command buttons
Create
Opens the Create NIS Domain dialog box, which enables you to create NIS domains.
Edit
Opens the Edit NIS Domain dialog box, which enables you to add, delete, or modify NIS
servers.
Delete
Deletes the selected NIS domain.
Refresh
Updates the information in the window.
You can use System Manager to configure an LDAP server that centrally maintains user
information.
Details area
The details area displays information such as the KDC IP address and port number, the KDC
vendor, the administrative server IP address and port number, and the Active Directory server
name and IP address for the selected Kerberos realm configuration.
Refresh
Updates the information in the window.
Kerberos Interface list
Provides details about the Kerberos configuration.
Interface Name
Specifies the logical interfaces associated with the Kerberos configuration for SVMs.
Service Principal Name
Specifies the Service Principal Name (SPN) that matches the Kerberos configuration.
Realm
Specifies the name of the Kerberos realm associated with the Kerberos configuration.
Kerberos Status
Specifies whether Kerberos is enabled.
DNS/DDNS Services
You can use System Manager to manage DNS/DDNS services.
The DNS/DDNS Services window enables you to view the current DNS and DDNS settings for
your system.
Related tasks
Enabling or disabling DNS and DDNS on page 293
You can use System Manager to enable or disable DNS on a storage system. After enabling DNS,
you can enable DDNS.
Editing DNS and DDNS settings on page 294
You can maintain host information centrally using DNS. You can use System Manager to add or
modify the DNS domain name of your storage system. You can also enable DDNS on your storage
system to update the name server automatically in the DNS server.
Users
You can use System Manager to create and manage Storage Virtual Machine (SVM) user
accounts.
Users window
You can use the Users window to manage user accounts, reset a user's password, or display
information about all user accounts.
Command buttons
Add
Opens the Add User dialog box, which enables you to add user accounts.
Edit
Opens the Modify User dialog box, which enables you to modify user login methods.
Note: It is best to use a single role for all access and authentication methods of a user
account.
Delete
Enables you to delete a selected user account.
Change Password
Opens the Change Password dialog box, which enables you to reset the user password.
Lock
Locks the user account.
Refresh
Updates the information in the window.
Users list
The users list displays information about each user account.
User
Displays the name of the user account.
Account Locked
Displays whether the user account is locked.
Roles
You can use System Manager to create and manage roles.
Related information
Administrator authentication and RBAC
Adding roles
You can use System Manager to add an access-control role and specify the command or command
directory that the role's users can access. You can also control the level of access the role has to the
command or command directory and specify a query that applies to the command or command
directory.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the SVM User Details pane, click Roles.
5. Click Add.
6. In the Add Role dialog box, specify the role name and add the role attributes.
7. Click Add.
Editing roles
You can use System Manager to modify an access-control role's access to a command or command
directory and restrict a user's access to only a specified set of commands.
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the SVM User Details pane, click Roles.
5. Select the role that you want to modify, and then click Edit.
6. Modify the role attributes, and then click Modify.
vsadmin-volume
• Managing own user account local password and key information
• Managing volumes, including volume moves
• Managing quotas, qtrees, Snapshot copies, and files
• Managing LUNs
• Configuring protocols: NFS, CIFS, iSCSI, and FC, including FCoE
• Configuring services: DNS, LDAP, and NIS
• Monitoring network interface
• Monitoring the health of the SVM

vsadmin-protocol
• Managing own user account local password and key information
• Configuring protocols: NFS, CIFS, iSCSI, and FC, including FCoE
• Configuring services: DNS, LDAP, and NIS
• Managing LUNs
• Monitoring network interface
• Monitoring the health of the SVM

vsadmin-backup
• Managing own user account local password and key information
• Managing NDMP operations
• Making a restored volume read/write
• Managing SnapMirror relationships and Snapshot copies
• Viewing volumes and network information

vsadmin-snaplock
• Managing own user account local password and key information
• Managing volumes, except volume moves
• Managing quotas, qtrees, Snapshot copies, and files
• Performing SnapLock operations, including privileged delete
• Configuring protocols: NFS and CIFS
• Configuring services: DNS, LDAP, and NIS
• Monitoring jobs
• Monitoring network connections and network interface
Roles window
You can use the Roles window to manage roles for user accounts.
Command buttons
Add
Opens the Add Role dialog box, which enables you to create an access-control role and
specify the command or command directory that the role's users can access.
Edit
Opens the Edit Role dialog box, which enables you to add or modify role attributes.
Refresh
Updates the information in the window.
Roles list
The roles list provides a list of roles that are available to be assigned to users.
Role Attributes area
The details area displays the role attributes, such as the command or command directory that the
selected role can access, the access level, and the query that applies to the command or command
directory.
UNIX
You can use System Manager to maintain a list of local UNIX users and groups for each Storage
Virtual Machine (SVM).
UNIX window
You can use the UNIX window to maintain a list of local UNIX users and groups for each Storage
Virtual Machine (SVM). You can use local UNIX users and groups for authentication and name
mappings.
Groups tab
You can use the Groups tab to add, edit, or delete UNIX groups that are local to an SVM.
Command buttons
Add Group
Opens the Add Group dialog box, which enables you to create UNIX groups that are local
to SVMs. Local UNIX groups are used with local UNIX users.
Edit
Opens the Edit Group dialog box, which enables you to edit a group ID.
Delete
Deletes the selected group.
Refresh
Updates the information in the window.
Groups list
Group Name
Displays the name of the group.
Group ID
Displays the ID of the group.
Users tab
You can use the Users tab to add, edit, and delete UNIX users that are local to SVMs.
Command buttons
Add User
Opens the Add User dialog box, which enables you to create UNIX users that are local to
SVMs.
Edit
Opens the Edit User dialog box, which enables you to edit the User ID, UNIX group to
which the user belongs, and the full name of the user.
Delete
Deletes the selected user.
Refresh
Updates the information in the window.
Users list
User Name
Displays the name of the user.
User ID
Displays the ID of the user.
Full Name
Displays the full name of the user.
Primary Group ID
Displays the ID of the group to which the user belongs.
Primary Group Name
Displays the name of the group to which the user belongs.
Windows
You can use System Manager to create and manage Windows groups and user accounts.
Related information
CIFS management
Creating a local Windows group
You can create local Windows groups that can be used for authorizing access to data contained
in the Storage Virtual Machine (SVM) over an SMB connection by using System Manager. You
can also assign privileges that define the user rights or capabilities that a member of the group
has when performing administrative activities.
Before you begin
A CIFS server must be configured for the SVM.
About this task
• You can specify a group name with or without the local domain name.
The local domain is the name of the CIFS server for the SVM. For example, if the CIFS server
name of the SVM is "CIFS_SERVER" and you want to create an "engineering" group, you can
specify either "engineering" or "CIFS_SERVER\engineering" as the group name.
The following rules apply when using a local domain as part of the group name:
◦ You can only specify the local domain name for the SVM to which the group is applied.
For example, if the local CIFS server name is "CIFS_SERVER", you cannot specify
"CORP_SERVER\group1" as the group name.
◦ You cannot use "BUILTIN" as a local domain in the group name.
For example, you cannot create a group with "BUILTIN\group1" as the name.
◦ You cannot use an Active Directory domain as a local domain in the group name.
For example, you cannot create a group named "AD_DOM\group1", where "AD_DOM" is
the name of an Active Directory domain.
• You cannot use a group name that already exists.
• The group name must meet the following requirements:
◦ Must not exceed 256 characters
◦ Must not end in a period
◦ Must not include commas
◦ Must not include any of the following printable characters: " / \ [ ] : | < > + = ; ? * @
◦ Must not include characters in the ASCII range 1 through 31, which are non-printable
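The name requirements above can be checked with a short validation sketch. The helper below is hypothetical (the function name and the decision to validate the bare name, with any "DOMAIN\" prefix already removed, are assumptions for this example):

```python
# Printable characters the documentation disallows in a group name,
# plus the comma, which the requirements list separately.
_FORBIDDEN = set('"/\\[]:|<>+=;?*@,')

def is_valid_group_name(name):
    """Check a bare local group name against the requirements above."""
    if not name or len(name) > 256:      # must not exceed 256 characters
        return False
    if name.endswith("."):               # must not end in a period
        return False
    for ch in name:
        if ch in _FORBIDDEN:             # disallowed printable characters
            return False
        if 1 <= ord(ch) <= 31:           # non-printable ASCII 1 through 31
            return False
    return True
```

For example, "engineering" passes, while "eng,team", "group.", and "bad|name" are all rejected.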
Steps
1. Click the SVMs tab.
2. Select the SVM, and then click Manage.
3. Click the SVM Settings tab.
4. In the Host Users and Groups pane, click Windows.
5. In the Groups tab, click Create.
6. In the Create Group dialog box, specify a name for the group and a description that helps
you identify this new group.
7. Assign a set of privileges to the group.
You can select the privileges from the predefined set of supported privileges.
8. Click Add to add users to this group.
9. In the Add Members to Group dialog box, perform one of the following actions, and then
click OK:
• Specify the Active Directory user or Active Directory group to be added to a particular
local group.
• Select the users from the list of available local users in the SVM.
10. Click Create.
Result
The local Windows group is created and is listed in the Groups window.
Related reference
Windows window on page 311
You can use the Windows window to maintain a list of local Windows users and groups for each
Storage Virtual Machine (SVM) on the cluster. You can use the local Windows users and groups
for authentication and name mappings.
5. In the Groups tab, select the group you want to rename, and then click Rename.
6. In the Rename Group window, specify a new name for the group.
Result
The local group name is changed and is listed with the new name in the Groups window.
Related reference
Windows window on page 311
You can use the Windows window to maintain a list of local Windows users and groups for each
Storage Virtual Machine (SVM) on the cluster. You can use the local Windows users and groups
for authentication and name mappings.
Local user
A user account with a unique security identifier (SID) that has visibility only on the SVM
on which it is created. Local user accounts have a set of attributes, including user name and
SID. A local user account authenticates locally on the CIFS server using NTLM
authentication.
User accounts have several uses:
• Used to grant User Rights Management privileges to a user.
• Used to control share-level and file-level access to file and folder resources that the
SVM owns.
Local group
A group with a unique SID has visibility only on the SVM on which it is created. Groups
contain a set of members. Members can be local users, domain users, domain groups, and
domain machine accounts. Groups can be created, modified, or deleted.
Groups have several uses:
• Used to grant User Rights Management privileges to its members.
• Used to control share-level and file-level access to file and folder resources that the
SVM owns.
Local domain
A domain that has local scope, which is bounded by the SVM. The local domain's name is
the CIFS server name. Local users and groups are contained within the local domain.
Security identifier (SID)
A SID is a variable-length numeric value that identifies Windows-style security principals.
For example, a typical SID takes the following form:
S-1-5-21-3139654847-1303905135-2517279418-123456.
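For illustration, a SID string of this form can be split into its numeric components; parse_sid is a hypothetical helper written for this example, not an ONTAP interface:

```python
def parse_sid(sid):
    """Split a textual SID (e.g. "S-1-5-21-...") into its parts.

    The final sub-authority is commonly called the RID, which
    distinguishes a principal within its domain.
    """
    parts = sid.split("-")
    if len(parts) < 3 or parts[0] != "S":
        raise ValueError("not a SID: %r" % sid)
    sub_authorities = [int(p) for p in parts[3:]]
    return {
        "revision": int(parts[1]),
        "authority": int(parts[2]),
        "sub_authorities": sub_authorities,
        "rid": sub_authorities[-1] if sub_authorities else None,
    }
```

Applied to the example SID above, this yields revision 1, authority 5, and RID 123456.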
NTLM authentication
A Microsoft Windows security method used to authenticate users on a CIFS server.
Cluster replicated database (RDB)
A replicated database with an instance on each node in a cluster. Local user and group
objects are stored in the RDB.
You can create one or more local groups for the following reasons:
• You want to control access to file and folder resources by using local groups for share and file-
access control.
• You want to create local groups with customized User Rights Management privileges.
Some built-in user groups have predefined privileges. To assign a customized set of privileges,
you can create a local group and assign the necessary privileges to that group. You can then
add local users, domain users, and domain groups to the local group.
Windows window
You can use the Windows window to maintain a list of local Windows users and groups for each
Storage Virtual Machine (SVM) on the cluster. You can use the local Windows users and groups
for authentication and name mappings.
• Users tab
• Groups tab
Users tab
You can use the Users tab to view the Windows users that are local to an SVM.
Command buttons
Create
Opens the Create User dialog box, which enables you to create a local Windows user
account that can be used to authorize access to data contained in the SVM over an SMB
connection.
Edit
Opens the Edit User dialog box, which enables you to edit local Windows user properties,
such as group memberships and the full name. You can also enable or disable the user
account.
Delete
Opens the Delete User dialog box, which enables you to delete a local Windows user
account from an SVM if it is no longer required.
Add to Group
Opens the Add Groups dialog box, which enables you to assign group membership to a user
account if you want the user to have privileges associated with that group.
Set Password
Opens the Reset Password dialog box, which enables you to reset the password of a
Windows local user. For example, you might want to reset the password if the password is
compromised or if the user has forgotten the password.
Rename
Opens the Rename User dialog box, which enables you to rename a local Windows user
account to more easily identify it.
Refresh
Updates the information in the window.
Users list
Name
Displays the name of the local user.
Full Name
Displays the full name of the local user.
Account Disabled
Displays whether the local user account is enabled or disabled.
Description
Displays the description for this local user.
Users Details Area
Group
Displays the list of groups in which the user is a member.
Groups tab
You can use the Groups tab to add, edit, or delete Windows groups that are local to an SVM.
Command buttons
Create
Opens the Create Group dialog box, which enables you to create local Windows groups that
can be used for authorizing access to data contained in SVMs over an SMB connection.
Edit
Opens the Edit Group dialog box, which enables you to edit the local Windows group
properties, such as privileges assigned to the group and the description of the group.
Delete
Opens the Delete Group dialog box, which enables you to delete a local Windows group
from an SVM if it is no longer required.
Add Members
Opens the Add Members dialog box, which enables you to add local or Active Directory
users, or Active Directory groups to the local Windows group.
Rename
Opens the Rename Group dialog box, which enables you to rename a local Windows group
to more easily identify it.
Refresh
Updates the information in the window.
Groups list
Name
Displays the name of the local group.
Description
Displays the description for this local group.
Groups Details Area
Privileges
Displays the list of privileges associated with the selected group.
Users
Displays the list of local users associated with the selected group.
Related tasks
Creating a local Windows group on page 300
You can create local Windows groups that can be used for authorizing access to data contained in
the Storage Virtual Machine (SVM) over an SMB connection by using System Manager. You can
also assign privileges that define the user rights or capabilities that a member of the group has
when performing administrative activities.
Editing local Windows group properties on page 302
You can manage local group membership by adding and removing a local or Active Directory user
or Active Directory group by using System Manager. You can modify the privileges assigned to a
group and the description of a group to easily identify the group.
Adding user accounts to a Windows local group on page 302
You can add a local or an Active Directory user, or an Active Directory group, if you want users to
have the privileges associated with that group by using System Manager.
Renaming a local Windows group on page 303
You can use System Manager to rename a local Windows group to identify it more easily.
Deleting a local Windows group on page 304
You can use System Manager to delete a local Windows group from a Storage Virtual Machine
(SVM) if it is no longer required for determining access rights to data contained on the SVM or
for assigning SVM user rights (privileges) to group members.
Creating a local Windows user account on page 304
You can use System Manager to create a local Windows user account that can be used to authorize
access to data contained in the Storage Virtual Machine (SVM) over an SMB connection. You can
also use local Windows user accounts for authentication when creating a CIFS session.
Editing the local Windows user properties on page 305
You can use System Manager to modify a local Windows user account if you want to change an
existing user's full name or description, and if you want to enable or disable the user account. You
can also modify the group memberships assigned to the user account.
Assigning group memberships to a user account on page 306
You can use System Manager to assign group membership to a user account if you want a user to
have privileges associated with that group.
Renaming a local Windows user on page 306
You can use System Manager to rename a local Windows user account to identify it more easily.
Resetting the password of a Windows local user on page 307
You can use System Manager to reset the password of a Windows local user. For example, you
might want to reset the password if the password is compromised or if the user has forgotten the
password.
Deleting a local Windows user account on page 308
You can use System Manager to delete a local Windows user account from a Storage Virtual
Machine (SVM) if it is no longer required for local CIFS authentication to the CIFS server of the
SVM or for determining access rights to data contained in the SVM.
Name mapping
You can use System Manager to specify name mapping entries to map users from different
platforms.
Related information
CIFS management
• For Windows to UNIX mapping
If no mapping is found, ONTAP checks whether the lowercase Windows user name is a valid
user name in the UNIX domain. If this does not work, it uses the default UNIX user provided
that it is configured. If the default UNIX user is not configured and ONTAP cannot obtain a
mapping this way either, mapping fails and an error is returned.
• For UNIX to Windows mapping
If no mapping is found, ONTAP tries to find a Windows account that matches the UNIX name
in the CIFS domain. If this does not work, it uses the default CIFS user, provided that it is
configured. If the default CIFS user is not configured and ONTAP cannot obtain a mapping
this way either, mapping fails and an error is returned.
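The Windows-to-UNIX fallback order can be sketched as follows. The function name and the dictionary/set inputs are assumptions made for illustration; this is not an ONTAP interface:

```python
def map_windows_to_unix(win_name, explicit_map, unix_users,
                        default_unix_user=None):
    """Resolve a Windows user to a UNIX user using the fallback order
    described above."""
    # 1. An explicit name mapping entry wins.
    if win_name in explicit_map:
        return explicit_map[win_name]
    # 2. Otherwise, try the lowercase Windows name as a UNIX user name.
    if win_name.lower() in unix_users:
        return win_name.lower()
    # 3. Otherwise, fall back to the default UNIX user, if configured.
    if default_unix_user is not None:
        return default_unix_user
    # 4. No mapping could be obtained: the mapping fails with an error.
    raise LookupError("no UNIX mapping for %r" % win_name)
```

The UNIX-to-Windows direction follows the same shape, with the CIFS domain lookup and the default CIFS user taking the place of steps 2 and 3.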
Direction
Specifies the direction of the name mapping. Possible values are krb_unix for a Kerberos-
to-UNIX name mapping, win_unix for a Windows-to-UNIX name mapping, and unix_win
for a UNIX-to-Windows name mapping.
Command buttons
Add
Opens the Add Group Mapping Entry dialog box, which enables you to create a group
mapping on SVMs.
Edit
Opens the Edit Group Mapping Entry dialog box, which enables you to edit the group
mapping on SVMs.
Delete
Opens the Delete Group Mapping Entries dialog box, which enables you to delete a group
mapping entry.
Swap
Opens the Swap Group Mapping Entries dialog box, which enables you to interchange
positions of the two selected group mapping entries.
Refresh
Updates the information in the window.
Cluster Management Using OnCommand System Manager 317
Managing data protection
Mirror relationships
You can use System Manager to create and manage mirror relationships by using the mirror policy.
Related information
Data protection using SnapMirror and SnapVault technology
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Steps
1. Click Protection > Relationships.
2. Select the mirror relationship that you want to resynchronize.
3. Click Operations > Resync.
4. Select the confirmation check box and click Resync.
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
4. Optional: Select the Keep any partially transferred data check box to retain the data that is
already transferred to the destination volume.
5. Click Abort.
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Any other volume: Select Other volume, and then select the cluster and SVM from the list.
4. Restore the data to a new volume or to an existing volume:
5. Select the latest Snapshot copy or select the specific Snapshot copy that you want to restore.
6. Select the confirmation check box to restore the volume from the Snapshot copy.
7. Optional: Select the Enable Network Compression check box to compress the data that is
being transferred during the restore operation.
8. Click Restore.
Snapshot copies are used to update destination volumes. Snapshot copies are transferred from the
source volume to the destination volume by using an automated schedule or manually; therefore,
mirrors are updated asynchronously.
You can create data protection mirror relationships to destinations on the same aggregate as the
source volume, and on the same Storage Virtual Machine (SVM) or on a different SVM. For
greater protection, you can create the relationships to destinations on a different aggregate, which
enables you to recover from any failure of the source volume's aggregate. However, these two
configurations do not protect against a cluster failure.
To protect against a cluster failure, you can create a data protection mirror relationship in which
the source volume is on one cluster and the destination volume is on a different cluster. If the
cluster on which the source volume resides experiences a disaster, you can direct user clients to the
destination volume on the cluster peer until the source volume is available again.
Vault relationships
You can use System Manager to create and manage vault relationships by using the vault policy.
Related information
Data protection using SnapMirror and SnapVault technology
4. In the Create Protection Relationship dialog box, select Vault from the Relationship Type
drop-down list.
5. Specify the cluster, the SVM, and the source volume.
6. If the selected SVM is not peered, use the Authenticate link to enter the credentials of the
remote cluster, and create an SVM peer relationship.
If the name of the local SVM and remote SVM are identical, or if the local SVM is in a peer
relationship with another remote SVM of the same name, or if the local SVM contains a data
SVM of the same name, the Enter Alias Name for SVM dialog box is displayed.
7. Optional: Enter an alias name for the remote SVM in the Enter Alias Name for SVM dialog
box.
8. Create a new destination volume or select an existing volume:
9. If you are creating a SnapLock volume, specify the default retention period.
The default retention period can be set to any value from 1 day through 70 years, or to
Infinite.
10. Select an existing policy or create a new policy:
If the relationship is not released, then you must use the CLI to run the release operation on the
source cluster to delete the base Snapshot copies that were created for the vault relationship
from the source volume.
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Any other volume: Select Other volume, and select the cluster and SVM from the list.
4. Restore the data to a new volume or select any existing volume:
5. Select the latest Snapshot copy or select the specific Snapshot copy that you want to restore.
6. Select the confirmation check box to restore the volume from the Snapshot copy.
7. Optional: Select the Enable Network Compression check box to compress the data that is
being transferred during the restore operation.
8. Click Restore.
Related reference
Protection window on page 348
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Guidelines for planning Snapshot copy schedule and retention for SnapVault backups
It is important to plan the Snapshot copy transfer schedule and retention for your SnapVault
backups.
When planning SnapVault relationships, consider the following guidelines:
• Before you create a SnapVault policy, you should create a table to plan which Snapshot copies
you want replicated to the SnapVault secondary volume and how many of each you want to
keep.
For example:
◦ Hourly (periodically throughout the day)
Does the data change often enough throughout the day to make it worthwhile to replicate a
Snapshot copy every hour, every two hours, or every four hours?
◦ Nightly
Do you want to replicate a Snapshot copy every night or just workday nights?
◦ Weekly
How many weekly Snapshot copies is it useful to keep in the SnapVault secondary volume?
• The primary volume should have an assigned Snapshot policy that creates Snapshot copies at
the intervals you need, and labels each Snapshot copy with the appropriate snapmirror-
label attribute name.
• The SnapVault policy assigned to the SnapVault relationship should select the Snapshot copies
you want from the primary volume, identified by the snapmirror-label attribute name, and
specify how many Snapshot copies of each name you want to keep on the SnapVault secondary
volume.
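The planning table above maps naturally onto a per-label retention rule. The sketch below is a hypothetical illustration of how such a policy selects copies; the apply_retention helper and its tuple format are assumptions for the example, not a detailed model of ONTAP behavior:

```python
def apply_retention(snapshots, keep):
    """Select which Snapshot copies a per-label policy retains.

    snapshots: iterable of (name, snapmirror_label, timestamp) tuples.
    keep: dict mapping snapmirror-label -> number of copies to keep.
    Labels absent from `keep` are not replicated at all.
    """
    retained, counts = [], {}
    # Walk newest-first so the most recent copies of each label survive.
    for name, label, ts in sorted(snapshots, key=lambda s: s[2], reverse=True):
        if label not in keep:
            continue
        if counts.get(label, 0) < keep[label]:
            retained.append(name)
            counts[label] = counts.get(label, 0) + 1
    return sorted(retained)
```

For example, with three "hourly" copies, one "nightly" copy, and a policy of `{"hourly": 2, "nightly": 5}`, the oldest hourly copy is dropped and the other three copies are kept.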
Result
A Snapshot copy is created and transferred to the destination.
This Snapshot copy is used as a baseline for subsequent incremental Snapshot copies.
4. Optional: Select the Keep any partially transferred data check box to retain the data that is
already transferred to the destination volume.
5. Click Abort.
Result
The transfer status is displayed as "Aborting" while the operation is in progress, and as "Idle"
after the operation is complete.
Any other volume: Select Other volume, and then select the cluster and the SVM.
4. Restore the data to a new volume or to an existing volume:
5. Select either the latest Snapshot copy or a specific Snapshot copy that you want to restore.
6. Select the confirmation check box to restore the volume from the Snapshot copy.
7. Optional: Select the Enable Network Compression check box to compress the data that is
being transferred during the restore operation.
8. Click Restore.
Protection window
You can use the Protection window to create and manage mirror, vault, and mirror vault
relationships and to display details about these relationships. The Protection window does not
display load-sharing (LS) and transition relationships (TDP).
Destination Volume
Displays the volume to which data is mirrored or vaulted in a relationship.
Is Healthy
Displays whether the relationship is healthy.
Relationship State
Displays the state of the relationship, such as Snapmirrored, Uninitialized, or Broken Off.
Transfer Status
Displays the relationship status, such as Idle, Transferring, or Aborting.
Relationship Type
Displays the type of relationship, such as Mirror, Vault, or Mirror and Vault.
Lag Time
Displays the difference between the current time and the timestamp of the Snapshot copy
that was last transferred successfully to the destination storage system. It indicates the time
difference between the data that is currently on the source system and the latest data stored
on the destination system. The value that is displayed can be positive or negative. The value
is negative if the time zone of the destination system is behind the time zone of the source
storage system.
Policy Name
Displays the name of the policy that is assigned to the relationship.
Policy Type
Displays the type of policy that is assigned to the relationship. The policy type can be Vault,
Mirror Vault, or Asynchronous Mirror.
Details area
Details tab
Displays general information about the selected relationship, such as the source and
destination clusters, data transfer rate, state of the relationship, details about the network
compression ratio, data transfer status, type of current data transfer, type of last data
transfer, latest Snapshot copy, and timestamp of the latest Snapshot copy.
Policy Details tab
Displays details about the policy that is assigned to the selected protection relationship. It
also displays the SnapMirror label and the Snapshot copy schedules in the source volume
that match the specified label.
Snapshot Copies tab
Displays the count of Snapshot copies with the SnapMirror label attribute for the selected
protection relationship and the timestamp of the latest Snapshot copy.
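The Lag Time field described above is the difference between the current time and the timestamp of the last successfully transferred Snapshot copy. As a rough illustration (not ONTAP's internal calculation), the value can be computed like this; the timestamps shown are hypothetical:

```python
from datetime import datetime, timezone

def lag_time(last_transferred_snapshot_utc: datetime, now_utc: datetime) -> float:
    """Return the lag in seconds: how far the destination trails the source.

    A negative result can occur when the two systems' clocks (or time zones)
    are not normalized before comparison, as the documentation notes.
    """
    return (now_utc - last_transferred_snapshot_utc).total_seconds()

# Example: the last successfully transferred Snapshot copy is 90 minutes old.
last = datetime(2020, 6, 1, 10, 0, tzinfo=timezone.utc)
now = datetime(2020, 6, 1, 11, 30, tzinfo=timezone.utc)
print(lag_time(last, now))  # 5400.0 seconds, that is, 1.5 hours of lag
```

A large or growing lag time indicates that the destination is falling behind its schedule and may warrant a manual update of the relationship.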
Related concepts
What a SnapVault backup is on page 335
A SnapVault backup is a collection of Snapshot copies on a FlexVol volume that you can restore
data from if the primary data is not usable. Snapshot copies are created based on a Snapshot
policy. The SnapVault backup backs up Snapshot copies based on its schedule and SnapVault
policy rules.
Related tasks
Creating a mirror relationship from a source SVM on page 187
Cluster Management Using OnCommand System Manager 350
Managing data protection
You can use System Manager to create a mirror relationship from the source Storage Virtual
Machine (SVM), and to assign a mirror policy and schedule to the mirror relationship. The mirror
copy enables quick availability of data if the data on the source volume is corrupted or lost.
Creating a mirror relationship from a destination SVM on page 317
You can use System Manager to create a mirror relationship from the destination Storage Virtual
Machine (SVM), and to assign a policy and schedule to the mirror relationship. The mirror copy
enables quick availability of data if the data on the source volume is corrupted or lost.
Deleting mirror relationships on page 319
You can delete a mirror relationship and permanently end the mirror relationship between the
source and destination volumes. When a mirror relationship is deleted, the base Snapshot copy on
the source volume is deleted.
Editing mirror relationships on page 320
You can use System Manager to edit a mirror relationship either by selecting an existing policy or
schedule in the cluster, or by creating a new policy or schedule. However, you cannot edit the
parameters of an existing policy or schedule.
Initializing mirror relationships on page 321
When you start a mirror relationship for the first time, you have to initialize the relationship.
Initializing a relationship consists of a complete baseline transfer of data from the source volume
to the destination. You can use System Manager to initialize a mirror relationship if you have not
already initialized the relationship while creating it.
Updating mirror relationships on page 321
You can initiate an unscheduled mirror update of the destination. You might have to perform a
manual update to prevent data loss due to an upcoming power outage, scheduled maintenance, or
data migration.
Quiescing mirror relationships on page 322
Use System Manager to quiesce a mirror destination to stabilize the destination before creating a
Snapshot copy. The quiesce operation enables active mirror transfers to finish and disables future
transfers for the mirroring relationship.
Resuming mirror relationships on page 322
You can resume a quiesced mirror relationship. When you resume the relationship, normal data
transfer to the mirror destination is resumed and all the mirror activities are restarted.
Breaking SnapMirror relationships on page 323
You must break the mirror relationship if a mirror source becomes unavailable and you want client
applications to be able to access the data from the mirror destination. After the mirror relationship
is broken, the destination volume type changes from DP to RW.
Resynchronizing mirror relationships on page 323
You can reestablish a mirror relationship that was broken earlier. You can perform a
resynchronization operation to recover from a disaster that disabled the source volume.
Reverse resynchronizing mirror relationships on page 324
You can use System Manager to reestablish a mirror relationship that was previously broken. In a
reverse resynchronization operation, you reverse the functions of the source and destination.
Aborting a mirror transfer on page 324
You can abort a volume replication operation before the data transfer is complete. You can abort a
scheduled update, a manual update, or an initial data transfer.
Creating a vault relationship from a source SVM on page 189
You can use System Manager to create a vault relationship from the source Storage Virtual
Machine (SVM), and to assign a vault policy to the vault relationship to create a backup vault. In
the event of data loss or corruption on a system, backed-up data can be restored from the backup
vault destination.
Creating a vault relationship from a destination SVM on page 327
You can use System Manager to create a vault relationship from the destination Storage Virtual
Machine (SVM), and to assign a vault policy to create a backup vault. In the event of data loss or
corruption on a system, backed-up data can be restored from the backup vault destination.
Deleting vault relationships on page 329
You can use System Manager to end a vault relationship between a source and destination volume,
and release the Snapshot copies from the source.
Editing vault relationships on page 330
You can use System Manager to edit a vault relationship either by selecting an existing policy or
schedule in the cluster, or by creating a new policy or schedule. However, you cannot edit the
parameters of an existing policy or schedule.
Initializing a vault relationship on page 331
You can use System Manager to initialize a vault relationship if you have not already initialized it
while creating the relationship. A baseline transfer of data is initiated from the source FlexVol
volume to the destination FlexVol volume.
Updating a vault relationship on page 332
You can use System Manager to manually initiate an unscheduled incremental update. You might
require a manual update to prevent data loss due to an upcoming power outage, scheduled
maintenance, or data migration.
Quiescing a vault relationship on page 332
You can use System Manager to disable data transfers to the destination FlexVol volume by
quiescing the vault relationship.
Resuming a vault relationship on page 333
You can resume a quiesced vault relationship by using System Manager. When you resume the
relationship, normal data transfer to the destination FlexVol volume is resumed and all vault
activities are restarted.
Aborting a Snapshot copy transfer on page 333
You can use System Manager to abort or stop a data transfer that is currently in progress.
Restoring a volume in a vault relationship on page 334
You can use System Manager to restore Snapshot copies to a source or other volumes if the source
data is corrupted and is no longer usable. You can replace the original data with the Snapshot
copies in the destination volume.
Snapshot policies
You can use System Manager to create and manage Snapshot policies in your storage system.
The maximum number of Snapshot copies that can be retained by the specified schedules must
not exceed 254.
5. Click OK, and then click Create.
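The 254-copy limit applies to the total retained across all schedules in a policy. A minimal validation sketch (the schedule names and counts are hypothetical, and this is not System Manager's own check):

```python
def validate_snapshot_policy(retention_counts: dict) -> int:
    """Check that the total number of Snapshot copies retained across all
    schedules in a policy does not exceed the documented limit of 254."""
    total = sum(retention_counts.values())
    if total > 254:
        raise ValueError(f"Total retention {total} exceeds the 254-copy limit")
    return total

# Hypothetical policy with hourly, daily, and weekly schedules:
print(validate_snapshot_policy({"hourly": 6, "daily": 7, "weekly": 4}))  # 17
```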
Edit
Opens the Edit Snapshot Policy dialog box, which enables you to modify the frequency at
which Snapshot copies should be created and the maximum number of Snapshot copies to
be retained.
Delete
Opens the Delete dialog box, which enables you to delete the selected Snapshot policy.
View as
Enables you to view the Snapshot policies either as a list or as a tree.
Status
Opens the menu, which you can use to either enable or disable the selected Snapshot policy.
Refresh
Updates the information in the window.
Snapshot policy list
Policy/Schedule Name
Specifies the name of the Snapshot policy and the schedules in the policy.
Storage Virtual Machine
Specifies the name of the Storage Virtual Machine (SVM) to which the Snapshot copies
belong.
Status
Specifies the status of the Snapshot policy, which can be Enabled or Disabled.
Maximum Snapshots to be retained
Specifies the maximum number of Snapshot copies to be retained.
SnapMirror Label
Specifies the name of the SnapMirror label attribute of the Snapshot copy generated by the
backup schedule.
Schedules
You can use System Manager to create and manage schedules in your storage system.
Creating schedules
You can create schedules to run a job at a specific time or at regular periods by using System
Manager.
About this task
When you create a schedule in a MetroCluster configuration, it is a best practice to create an
equivalent schedule on the cluster in the surviving site as well.
Steps
1. Click Protection > Schedules.
2. Click Create.
3. In the Create Schedule dialog box, specify the schedule name.
4. Create a schedule based on your requirements:
Editing schedules
You can use System Manager to modify a previously created cron schedule or interval schedule
that no longer meets your requirements. You can modify schedule details such as recurring days
and hours, interval options, and advanced cron options.
About this task
When you edit a schedule in a MetroCluster configuration, it is a best practice to edit the
equivalent schedule on the surviving site cluster as well.
Steps
1. Click Protection > Schedules.
2. Select the schedule that you want to modify and click Edit.
3. In the Edit Schedule dialog box, modify the schedule by performing the appropriate action:
Deleting schedules
You can use System Manager to delete the schedules that run specific storage management tasks.
Steps
1. Click Protection > Schedules.
2. Select the schedule that you want to delete and click Delete.
3. Select the confirmation check box, and then click Delete.
Schedules
You can configure many tasks (for example, volume Snapshot copies and mirror replications) to
run on specified schedules. Schedules that run at specific times are known as cron schedules
because of their similarity to UNIX cron schedules. Schedules that run at regular intervals are
known as interval schedules.
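The difference between the two schedule types can be sketched as follows; this is a conceptual illustration, not how ONTAP's job scheduler is implemented:

```python
from datetime import datetime, timedelta

def next_interval_run(last_run: datetime, minutes: int) -> datetime:
    """Interval schedule: the next run is a fixed period after the previous run."""
    return last_run + timedelta(minutes=minutes)

def next_cron_run(now: datetime, minute: int) -> datetime:
    """Cron-style schedule: the next run is at a fixed minute mark of the clock,
    regardless of when the previous run happened."""
    candidate = now.replace(minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(hours=1)
    return candidate

last = datetime(2020, 6, 1, 10, 17)
print(next_interval_run(last, 30))  # 30 minutes later: 2020-06-01 10:47
print(next_cron_run(last, 15))      # next :15 mark after 10:17 is 11:15
```

The practical consequence is that a cron schedule fires at predictable wall-clock times, while an interval schedule drifts with the completion time of each run.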
Schedules window
You can use the Schedules window to manage scheduled tasks, such as creating, displaying
information about, modifying, and deleting schedules.
Command buttons
Create
Opens the Create Schedule dialog box, which enables you to create time-based and interval
schedules.
Edit
Opens the Edit Schedule dialog box, which enables you to edit the selected schedules.
Delete
Opens the Delete Schedule dialog box, which enables you to delete the selected schedules.
Refresh
Updates the information in the window.
Schedules list
Name
Specifies the name of the schedule.
Type
Specifies the type of the schedule—time-based or interval-based.
Details area
The details area displays information about when a selected schedule is run.
Copyright and trademark
Copyright
Copyright © 2020 NetApp, Inc. All rights reserved. Printed in the U.S.
No part of this document covered by copyright may be reproduced in any form or by any means—
graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an
electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and
disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice.
NetApp assumes no responsibility or liability arising from the use of products described herein,
except as expressly agreed to in writing by NetApp. The use or purchase of this product does not
convey a license under any patent rights, trademark rights, or any other intellectual property rights
of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign
patents, or pending applications.
Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary
to NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable,
worldwide, limited irrevocable license to use the Data only in connection with and in support of
the U.S. Government contract under which the Data was delivered. Except as provided herein, the
Data may not be used, disclosed, reproduced, modified, performed, or displayed without the prior
written approval of NetApp, Inc. United States Government license rights for the Department of
Defense are limited to those rights identified in DFARS clause 252.227-7015(b).
Trademark
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks
of NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx