
Huawei Storage Certification Training

HCIP-Storage

SmartMulti-Tenant Feature
Configuration

Lab Guide
ISSUE: 5.5

Huawei Technologies Co., Ltd.

Copyright © Huawei Technologies Co., Ltd. 2024. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any
means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of
their respective holders.

Notice
The purchased products, services and features are stipulated by the contract made
between Huawei and the customer. All or part of the products, services and features
described in this document may not be within the purchase scope or the usage scope.
Unless otherwise specified in the contract, all statements, information, and
recommendations in this document are provided "AS IS" without warranties,
guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has
been made in the preparation of this document to ensure accuracy of the contents, but
all statements, information, and recommendations in this document do not constitute
a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.


Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China
Website: https://e.huawei.com


Huawei Certification System


Huawei Certification is an integral part of the company's Platform + Ecosystem
strategy. It supports the development of ICT infrastructure that features Cloud-Pipe-
Device synergy. Our certification is always evolving to reflect the latest trends in ICT
development. Huawei Certification consists of three categories: ICT Infrastructure
Certification, Basic Software & Hardware Certification, and Cloud Platform & Services
Certification, making it the most extensive technical certification program in the
industry.
Huawei offers three levels of certification: Huawei Certified ICT Associate (HCIA),
Huawei Certified ICT Professional (HCIP), and Huawei Certified ICT Expert (HCIE).
Our programs cover all ICT fields and follow the industry's trend of ICT
convergence. With our leading talent development system and certification standards,
we are committed to fostering new digital ICT talent and building a sound ICT talent
ecosystem.
Huawei Certified ICT Professional-Storage (HCIP-Storage) is intended for Huawei
channel engineers, college students, and other ICT practitioners.
Its certification covers storage product technologies and applications, storage
product deployment and implementation, storage system performance tuning, and
storage O&M and fault management.
Huawei Certification introduces you to the industry and market, helps you in
innovation, and enables you to stand atop the storage frontiers.

About This Document

Overview
This lab guide provides SmartMulti-Tenant feature configuration practices to help trainees consolidate and review previously learned content. Upon completion of this course, you will be familiar with the configuration operations related to SmartMulti-Tenant on Huawei flash storage.

Description
This lab guide provides one lab practice to simulate an environment where one physical
storage device has multiple vStores.
Lab practice 1: SmartMulti-Tenant configuration

Background Knowledge Required


This lab guide is for HCIP certification. To better understand this course, familiarize
yourself with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Basic knowledge of Linux operating system (OS) operations and networks.

Contents

About This Document
  Overview
  Description
  Background Knowledge Required
1 References and Tools
  1.1 References
  1.2 Software and Tools
2 SmartMulti-Tenant Feature Configuration
  2.1 Introduction
    2.1.1 About This Lab Practice
    2.1.2 Objectives
    2.1.3 Networking Topology
  2.2 System User Operations
    2.2.1 Checking a License File
    2.2.2 Creating a Storage Pool
    2.2.3 Creating a vStore
    2.2.4 Creating a vStore User
  2.3 vStore User Operations (Block Services)
    2.3.1 Creating a LUN
    2.3.2 Creating a Host
    2.3.3 Creating a Mapping
    2.3.4 Verifying Host Connectivity
    2.3.5 Verifying Service Isolation
  2.4 vStore User Operations (File Services)
    2.4.1 Creating a File System
    2.4.2 Sharing a File System
    2.4.3 Verifying Service Isolation

1 References and Tools

1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary depending on product versions.
Product documentation

Log in to Huawei's technical support website (https://support.huawei.com/enterprise/) and type the name of a document or tool in the search box to search for, browse, and download the desired document or tool.

1.2 Software and Tools


1. PuTTY

Use the open-source software PuTTY to log in to a terminal. You can visit the official PuTTY website (putty.org) to browse or download the software and its documentation.

2. WinSCP

Use WinSCP to transfer files between Windows and Linux OSs. You can visit the official WinSCP website (winscp.net) to browse or download the software and its documentation.

3. Huawei OceanStor UltraPath

Log in to Huawei's technical support website (http://support.huawei.com/enterprise/) and type UltraPath in the search box to search for, browse, and download the desired document or tool.

4. eStor

Log in to Huawei's technical support website (http://support.huawei.com/enterprise/) and type eStor in the search box to search for, browse, and download the desired document or tool.

2 SmartMulti-Tenant Feature
Configuration

2.1 Introduction
2.1.1 About This Lab Practice
This lab practice simulates the use of Huawei flash storage to provide block and NAS storage services for two departments of a company. The SmartMulti-Tenant feature is used to configure these departments as vStores Tenant-A and Tenant-B, ensuring independent use of storage services.

2.1.2 Objectives
⚫ Understand the principles and usage of SmartMulti-Tenant.
⚫ Master the configuration of SmartMulti-Tenant for block services and file services.

2.1.3 Networking Topology


⚫ Lab devices

Device Type            Quantity   Software Version
Linux host             1          CentOS 7.6
Windows host           1          Windows Server 2016
Huawei flash storage   1          6.1.3, UltraPath 31.2.0



⚫ Networking topology

⚫ Lab network information

Device        Management IP Address   Storage IP Address   Description
DCA-Host      192.168.0.33            192.168.1.33         Block services
                                      192.168.2.33         File services
DCA-Win       192.168.0.34            192.168.1.34         Block services
                                      192.168.2.34         File services
DCA-Storage   192.168.0.30            192.168.1.30         Block service A
                                      192.168.1.31         Block service B
                                      192.168.2.30         File service A
                                      192.168.2.31         File service B

⚫ Resource planning

vStore     Resource Name   Resource Capacity
Tenant-A   LUN-A01         3 GB
Tenant-B   FS-B01          2 GB

The IP addresses used in this lab are for reference only. The actual IP
addresses are subject to the lab environment plan.

2.2 System User Operations


2.2.1 Checking a License File
Step 1 Log in to DeviceManager.
Enter the IP address (https://XXX.XXX.XXX.XXX:8088) of the management network port
on the controller enclosure in the address box of the browser and press Enter.
The DeviceManager login page is displayed.

Enter the username and password of the system user in Username and Password, and
click Log In. The DeviceManager home page is displayed.

Step 2 Check the license.


Choose Settings > License Management.
In the middle function pane, verify that SmartMulti-Tenant is displayed in the Feature
column.

2.2.2 Creating a Storage Pool


Choose System > Storage Pools and click Create. The Create Storage Pool page is
displayed on the right. Set the parameters as follows:
Set Name to StoragePool001, select CTE0, and set Required Disks to 10.
Retain the default values for other parameters. Click OK.

The Execution Result page is displayed. Click Close.



If the storage pool has been created, go to the next step.

2.2.3 Creating a vStore


Create two vStores: Tenant-A and Tenant-B.
Create two logical ports A1 and A2 for Tenant-A: A1 for block services and A2 for file
services, and then another two logical ports B1 and B2 for Tenant-B: B1 (block) and B2
(file).

Step 1 Create the first vStore.


Choose Services > vStore Service > vStores. Click Create. The Create vStore page is
displayed.
Enter Tenant-A in the Name text box and click Add in the Logical Ports area.

On the Create Logical Port page, set the parameters as follows:


Name: A1
Role: Management + service
Data Protocol: iSCSI
IP Address: 192.168.1.30
Subnet Mask: 255.255.255.0
Gateway: 192.168.1.1
Port Type: Ethernet port
Home Port: CTE0.A.IOM4.P0
Click OK.

Return to the Create vStore page and click Add to create the second logical port.

On the Create Logical Port page, set the parameters as follows:


Name: A2
Role: Service
Data Protocol: NFS
IP Address: 192.168.2.30
Subnet Mask: 255.255.255.0
Gateway: 192.168.2.1
Port Type: Ethernet port
Home Port: CTE0.A.IOM4.P1
Activation Status: Activate
Click OK.

Return to the Create vStore page and click OK.



The Execution Result page is displayed. Click Close.



Step 2 Create the second vStore.


Choose Services > vStore Service > vStores. Click Create. The Create vStore page is
displayed.
Set Name to Tenant-B and click Add.

On the Create Logical Port page, set the parameters as follows:


Name: B1
Role: Management + service
Data Protocol: iSCSI
IP Address: 192.168.1.31
Subnet Mask: 255.255.255.0
Gateway: 192.168.1.1
Port Type: Ethernet port
Home Port: CTE0.B.IOM4.P0
Click OK.

Return to the Create vStore page and click Add to create the second logical port.

On the Create Logical Port page, set the parameters as follows:


Name: B2
Role: Service
Data Protocol: NFS
IP Address: 192.168.2.31
Gateway: 192.168.2.1
Subnet Mask: 255.255.255.0
Port Type: Ethernet port
Home Port: CTE0.B.IOM4.P1
Activation Status: Activate


Click OK.

Return to the Create vStore page and click OK.



The Execution Result page is displayed. Click Close.



2.2.4 Creating a vStore User


Step 1 Create a user for the first vStore.
Choose Services > vStore Service > vStores. Click Tenant-A. On the page that is
displayed on the right, click the User Management tab and then Create.

The Create User page is displayed. Set the parameters as follows:


Type: Local user
Username: user_a1
Password: Set this parameter as required.
Role: vStore administrator
Retain the default values for other parameters and click OK.

The Execution Result page is displayed. Click Close.



Step 2 Create a user for the second vStore.

Choose Services > vStore Service > vStores. Click Tenant-B. On the page that is
displayed on the right, click the User Management tab and then Create.

The Create User page is displayed. Set the parameters as follows:


Type: Local user
Username: user_b1
Password: Set this parameter as required.
Role: vStore administrator
Retain the default values for other parameters and click OK.

The Execution Result page is displayed. Click Close.



2.3 vStore User Operations (Block Services)


2.3.1 Creating a LUN
Step 1 Log in to DeviceManager.
Log in to DeviceManager as user_a1 of Tenant-A.
In the address box of the browser, enter the IP address of the management logical port
of Tenant-A and port 8088, for example, https://192.168.1.30:8088. On the login page of
DeviceManager, enter the username and password of user user_a1 of Tenant-A, and
click Log In.

The home page is displayed, as shown in the following figure.

Step 2 Create a LUN.

Choose Services > Block Service > LUN Groups > LUNs and click Create.
Set the parameters as follows:
Name: LUN-A01
Capacity: 3 GB
Quantity: 1
Retain the default values for other parameters and click OK.

2.3.2 Creating a Host


Choose Services > Block Service > Host Groups > Hosts. On the Hosts tab page, choose
Create > Create Host.

The Create Host dialog box is displayed on the right. Set the parameters as follows:
Name: Host-A01
OS: Linux
Click OK.

2.3.3 Creating a Mapping


Choose Services > Block Service > Host Groups > Hosts, select the host, and choose
Map > Map LUN.

The Map LUN dialog box is displayed. Select Existing, select the LUN, and click OK.

2.3.4 Verifying Host Connectivity


Step 1 Establish iSCSI connections on the host.
Log in to the Linux service host as user root and check the host IP address configurations.
The host must have two IP addresses that are on the same network segments as the
vStore management logical port and data logical port: 192.168.1.X and 192.168.2.X.

[root@dca-host ~]# ifconfig


eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.33 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::f816:3eff:fe2b:83bb prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:2b:83:bb txqueuelen 1000 (Ethernet)
RX packets 455610 bytes 246560516 (235.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 411295 bytes 113556749 (108.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500


inet 192.168.1.33 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::f816:3eff:fe2b:84bb prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:2b:84:bb txqueuelen 1000 (Ethernet)
RX packets 17698 bytes 3833414 (3.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 1306 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500


inet 192.168.2.33 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::f816:3eff:fe2b:85bb prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:2b:85:bb txqueuelen 1000 (Ethernet)
RX packets 17696 bytes 3833254 (3.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 1270 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Check the iSCSI software package:

[root@dca-host ~]# rpm -qa | grep iscsi


iscsi-initiator-utils-iscsiuio-6.2.0.874-22.el7_9.x86_64
iscsi-initiator-utils-6.2.0.874-22.el7_9.x86_64

If the iSCSI software is not installed on the OS or the version is not supported, install the
latest version.
Ensure that the initiator name is unique and easy to identify. If necessary, change it; you are advised to replace the suffix with the host name.

[root@dca-host ~]# vi /etc/iscsi/initiatorname.iscsi


InitiatorName=iqn.1994-05.com.redhat:dca-host

Change the initiator name and restart iscsid.



[root@dca-host ~]# systemctl restart iscsid

Search for the target based on the IP address of logical port A1 of Tenant-A configured
on the storage system.

[root@dca-host ~]# iscsiadm -m discovery -t st -p 192.168.1.30


192.168.1.30:3260,1025 iqn.2014-08.com.example::2100070000040506::20400:192.168.1.30

Log in to the target of Tenant-A.

[root@dca-host ~]# iscsiadm -m node -p 192.168.1.30 -l


Logging in to [iface: default, target: iqn.2014-08.com.example::2100070000040506::20400:192.168.1.30, portal: 192.168.1.30,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.com.example::2100070000040506::20400:192.168.1.30, portal: 192.168.1.30,3260] successful.

Configure the iSCSI service to run automatically upon host startup:

[root@dca-host ~]# systemctl enable iscsi.service
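
To keep the target login itself persistent, you can also set the node startup mode explicitly. A minimal sketch using standard open-iscsi options (the portal address is the one discovered above):

[root@dca-host ~]# iscsiadm -m node -p 192.168.1.30 --op update -n node.startup -v automatic
# Marks the Tenant-A target for automatic login at every boot.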

Step 2 Add an initiator to the host on the storage system.


Log in to DeviceManager as user_a1 of Tenant-A.
Choose Services > Block Service > Host Groups > Hosts, select the host for which you
want to add an initiator, click More on the right, and select Add Initiator.

Click the iSCSI tab, select the desired initiator, and click OK.

Confirm the warning and click OK.



Step 3 Scan for LUNs on the host.


Check the disk devices on the host.

[root@dca-host ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7317

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 83886079 41942016 83 Linux

Scan for LUNs on the host.

[root@dca-host ~]# iscsiadm -m session --rescan
Rescanning session [sid: 1, target: iqn.2014-08.com.example::2100070000040506::20400:192.168.1.30, portal: 192.168.1.30,3260]

Check the disk device again. The 3 GB disk (/dev/sda) is found.

[root@dca-host ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7317

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 83886079 41942016 83 Linux

Disk /dev/sda: 3221 MB, 3221225472 bytes, 6291456 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
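
To confirm that the mapped LUN is actually usable, you can create a file system on it and mount it. A minimal sketch, assuming the device name /dev/sda from the output above and a hypothetical mount point /mnt/tenant-a (not part of the original lab steps):

[root@dca-host ~]# mkfs.xfs /dev/sda          # lab only: file system on the whole disk, no partition table
[root@dca-host ~]# mkdir -p /mnt/tenant-a     # hypothetical mount point
[root@dca-host ~]# mount /dev/sda /mnt/tenant-a
[root@dca-host ~]# df -h /mnt/tenant-a        # the ~3 GB capacity should be visible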

2.3.5 Verifying Service Isolation


Step 1 Log in to DeviceManager.
Log in to DeviceManager as user_b1 of Tenant-B.
In the address box of a browser, enter the IP address of the management logical port of
Tenant-B and port 8088, for example, https://192.168.1.31:8088. On the login page of
DeviceManager, enter the username and password of user user_b1 of Tenant-B, and
click Log In.

Step 2 Verify LUN isolation.



Choose Services > Block Service > LUN Groups > LUNs. The 3 GB LUN created by
user_a1 of Tenant-A is invisible.

Step 3 Verify host isolation.

Choose Services > Block Service > Host Groups > Hosts. The host Host-A01 created by user_a1 of Tenant-A is invisible.

2.4 vStore User Operations (File Services)


2.4.1 Creating a File System
Step 1 Log in to DeviceManager.
Log in to DeviceManager as user_b1 of Tenant-B.
In the address box of the browser, enter the IP address of the management logical port
of Tenant-B and port 8088, for example, https://192.168.1.31:8088. On the login page of
DeviceManager, enter the username and password of user user_b1 of Tenant-B, and
click Log In.

Step 2 Create a file system.

Choose Services > File Service > File Systems and click Create. In the Create File
System dialog box, set the parameters as follows:
Name: FS-B01
Capacity: 2 GB
NFS: disabled
CIFS: disabled
Add to HyperCDP Schedule: disabled
Retain the default values for other parameters. Click OK.

Confirm the warning and click OK.



2.4.2 Sharing a File System


Step 1 Configure an NFS share.
Choose Services > File Service > Shares > NFS Shares and click Create. The Create NFS
Share page is displayed on the right.
Select file system FS-B01 and click OK.

Step 2 Add an NFS share client.

Choose Services > File Service > Shares > NFS Shares. Click More on the right of the
desired NFS share and choose Add Client.

The Add Client page is displayed. Set the parameters as follows:


Type: Host
Clients: 192.168.2.33
UNIX Permission: Read-write
root Permission Constraint: no_root_squash
Retain the default values for other parameters and click OK.

The Execution Result page is displayed. Click Close.



Step 3 Access the NFS share.

Log in to the client as user root.


Run the showmount -e ipaddress command to view all NFS shares of the storage
system. ipaddress is the IP address of logical port B2 of Tenant-B.

[root@dca-host ~]# showmount -e 192.168.2.31


Export list for 192.168.2.31:
/FS-B01 192.168.2.33

The client can access the NFS share through logical port B2 of Tenant-B.
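
To go one step further than showmount, you can mount the share and write a test file. A sketch, assuming the hypothetical mount point /mnt/fs-b01 (the client 192.168.2.33 was granted read-write access above):

[root@dca-host ~]# mkdir -p /mnt/fs-b01
[root@dca-host ~]# mount -t nfs 192.168.2.31:/FS-B01 /mnt/fs-b01   # mount through logical port B2 of Tenant-B
[root@dca-host ~]# touch /mnt/fs-b01/testfile                      # verify read-write access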

2.4.3 Verifying Service Isolation


Step 1 Log in to DeviceManager.
Log in to DeviceManager as user_a1 of Tenant-A.
In the address box of a browser, enter the IP address of the management logical port of
Tenant-A and port 8088, for example, https://192.168.1.30:8088. On the login page of
DeviceManager, enter the username and password of user user_a1 of Tenant-A, and
click Log In.

Step 2 Verify file system isolation.


Choose Services > File Service > File Systems. File system FS-B01 created by user_b1 of
Tenant-B is invisible.

Step 3 Verify NFS share isolation.


Log in to the client as user root.
Run the showmount -e ipaddress command to view all NFS shares of the storage
system. ipaddress is the IP address of logical port A2 of Tenant-A.

[root@dca-host ~]# showmount -e 192.168.2.30


Export list for 192.168.2.30:

NFS share /FS-B01 of Tenant-B is invisible.
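
As an optional negative test, an attempt to mount the share through Tenant-A's file service port should be refused, because FS-B01 is not exported there. A sketch with a hypothetical mount point:

[root@dca-host ~]# mount -t nfs 192.168.2.30:/FS-B01 /mnt/test   # expected to fail: FS-B01 belongs to Tenant-B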


Huawei Storage Certification Training

HCIP-Storage

HyperCDP Feature Configuration

Lab Guide
ISSUE: 5.5

Huawei Technologies Co., Ltd.

Copyright © Huawei Technologies Co., Ltd. 2024. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any
means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of
their respective holders.

Notice
The purchased products, services and features are stipulated by the contract made
between Huawei and the customer. All or part of the products, services and features
described in this document may not be within the purchase scope or the usage scope.
Unless otherwise specified in the contract, Huawei does not make any express or
implied representations or warranties in this document.
The information in this document is subject to change without notice. Every effort has
been made in the preparation of this document to ensure accuracy of the contents, but
all statements, information, and recommendations in this document do not constitute
a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.


Address: Huawei Industrial Base, Bantian, Longgang, Shenzhen 518129, People's Republic of China
Website: https://e.huawei.com


Huawei Certification System


Huawei Certification is an integral part of the company's Platform + Ecosystem
strategy. It supports the development of ICT infrastructure that features Cloud-Pipe-
Device synergy. Our certification is always evolving to reflect the latest trends in ICT
development. Huawei Certification consists of three categories: ICT Infrastructure
Certification, Basic Software & Hardware Certification, and Cloud Platform & Services
Certification, making it the most extensive technical certification program in the
industry.
Huawei offers three levels of certification: Huawei Certified ICT Associate (HCIA),
Huawei Certified ICT Professional (HCIP), and Huawei Certified ICT Expert (HCIE).
Our programs cover all ICT fields and follow the industry's trend of ICT
convergence. With our leading talent development system and certification standards,
we are committed to fostering new digital ICT talent and building a sound ICT talent
ecosystem.
Huawei Certified ICT Professional-Storage (HCIP-Storage) is intended for Huawei
channel engineers, college students, and other ICT practitioners.
Its certification covers storage product technologies and applications, storage
product deployment and implementation, storage system performance tuning, and
storage O&M and fault management.
Huawei Certification introduces you to the industry and market, helps you in
innovation, and enables you to stand atop the storage frontiers.

About This Document

Overview
This lab guide provides HyperCDP feature configuration practices to help trainees consolidate and review previously learned content. Upon completion of this course, you will be familiar with the configuration operations related to HyperCDP on Huawei flash storage.

Content Description
This lab guide introduces two lab practices to simulate the scenarios where HyperCDP is
used to protect service data.
Lab practice 1: HyperCDP for block services
Lab practice 2: HyperCDP for file services

Background Knowledge Required


This course is for HCIP certification. To better understand this course, familiarize yourself
with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Basic knowledge of Linux operating system (OS) operations and networks.

Contents

About This Document
  Overview
  Content Description
  Background Knowledge Required
1 HyperCDP for Block Services
  1.1 Introduction
    About This Lab Practice
    Objectives
    Networking Topology
  1.2 Basic Block Service Configuration
    Configuring the Host Multipathing Software
    Configuring Block Storage Resources and Mappings
    Configuring Host Connectivity
    Using Storage Space on the Host
  1.3 Configuring HyperCDP
    Checking License Files
    Creating a HyperCDP Object for the LUN
    Creating a HyperCDP Consistency Group for a Protection Group
    Creating a HyperCDP Schedule
  1.4 Managing HyperCDP
    Managing HyperCDP Objects
    Managing a HyperCDP CG
    Managing a HyperCDP Schedule
2 HyperCDP for File Services
  2.1 Introduction
    About This Lab Practice
    Objectives
    Networking Topology
  2.2 Basic File Service Configuration
    Creating a File System
    Creating a Logical Port
    Configuring an NFS Share
    Using Storage Space on the Host
  2.3 Configuring HyperCDP
    Checking License Files
    Creating a HyperCDP Object for the File System
    Creating a HyperCDP Schedule for the File System
  2.4 Managing HyperCDP
    Managing HyperCDP Objects
    Managing a HyperCDP Schedule

1 HyperCDP for Block Services

1.1 Introduction
About This Lab Practice
This lab practice is about HyperCDP for block services.

Objectives
⚫ Understand the principles and usage of HyperCDP for block services.
⚫ Master operations for configuring and managing HyperCDP for block services.

Networking Topology
⚫ Devices

Device Type            Quantity   Software Version
Linux host             1          CentOS 7.6
Windows host           1          Windows Server 2016
Huawei flash storage   1          6.1.3, UltraPath_31.2.0

⚫ Networking topology

⚫ Network information

Device        Management IP Address   Storage IP Address   Description
DCA-Host      192.168.0.33            192.168.1.33         Block services
                                      192.168.2.33         File services
DCA-Win       192.168.0.34            192.168.1.34         Block services
                                      192.168.2.34         File services
DCA-Storage   192.168.0.30            192.168.1.30         Block service A
                                      192.168.1.31         Block service B
                                      192.168.2.30         File service A
                                      192.168.2.31         File service B

Note: The IP addresses used in this lab guide are for reference only. The actual IP
addresses are subject to the lab environment plan.

1.2 Basic Block Service Configuration


Configuring the Host Multipathing Software
Use the SSH client to log in to the service host as user root and create directory
/root/up.
[root@dca-host ~]# mkdir /root/up

Use the FTP tool to upload the UltraPath installation package to the /root/up directory
on the service host.
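
If you prefer the command line over a GUI transfer tool, scp achieves the same result. A sketch, assuming the package is in the current directory of your workstation and using the host management IP address from the network table:

scp OceanStor_UltraPath_31.2.0_CentOS.zip root@192.168.0.33:/root/up/   # upload the package over SSH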
Go to the /root/up directory and decompress the UltraPath installation package.
[root@dca-host ~]# cd up
[root@dca-host up]# ls
OceanStor_UltraPath_31.2.0_CentOS.zip
[root@dca-host up]# unzip OceanStor_UltraPath_31.2.0_CentOS.zip
Archive: OceanStor_UltraPath_31.2.0_CentOS.zip
creating: CentOS/
...

Go to the /root/up/CentOS directory and run the installation script.


[root@dca-host CentOS]# sh ./install.sh
iscsi is not installed.
complete FC checking.
Verify the UltraPath existence.
The UltraPath is not installed.
iscsi is not installed.
Modify system configuration.[file:/etc/modprobe.d/nxupmodules.conf,module:qla2xxx,item:qlport_down_retry,value:5]
Modify system configuration.[file:/etc/modprobe.d/nxupmodules.conf,module:lpfc,item:lpfc_nodev_tmo,value:5]
Modify system configuration.[file:/etc/systemd/system.conf,item:DefaultTimeoutStartSec,value:600s]
If the operating system is installed on a local drive of the server, you are advised
to choose boot from local; if the operating system is installed on a SAN storage
system, you must choose boot from san. Please choose the boot type of your system:
<1>--boot-from-Local
<2>--boot-from-SAN
please input your select:1

Type 1 and press Enter.


please input your select:1

The installation program prompts that the installation is complete and asks if you want
to restart the system:
The installation is complete. Whether to restart the system now?
<Y|N>:

Type Y and press Enter to restart the system.



Log in to the host again and check whether UltraPath takes effect.
[root@dca-host ~]# upadmin check status

If the check result of each item is Pass, UltraPath has taken effect.
[root@dca-host ~]# upadmin check status
------------------------------------------------------------
Checking path status:
There is no array information.
Pass
------------------------------------------------------------
Checking environment and config:
Pass
------------------------------------------------------------
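
You can also confirm which UltraPath release is installed. A sketch using the upadmin query command (output format varies by release):

[root@dca-host ~]# upadmin show version   # displays the installed UltraPath software version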

Configuring Block Storage Resources and Mappings


Step 1 Configure LUNs and LUN groups.
Log in to DeviceManager of the storage system and create LUN001, LUN002, LUN003,
and LUN004 with 1 GB capacity for each LUN. Then create a LUN group LUNGroup001
and add LUN001, LUN002, LUN003, and LUN004 to the group.
Choose Services > Block Service > LUN Groups > LUNs and click Create.

The Create LUN dialog box is displayed. Set LUN parameters and click OK.

The Execution Result page is displayed. Click Close.

Choose Services > Block Service > LUN Groups > LUN Groups and click Create. Set the
LUN group name, select LUN001, LUN002, LUN003, and LUN004, and click OK.

The Execution Result page is displayed. Click Close.

Step 2 Configure hosts and host groups.


On DeviceManager of the storage system, create host dca-host and select Linux as the operating system. Then, create a host group HostGroup001 with dca-host as the member.
Choose Services > Block Service > Host Groups > Hosts, click Create, and select Create
Host.

Specify the host name, select an operating system, and click OK.

Choose Services > Block Service > Host Groups > Host Groups, and click Create. Specify
the host group name, select the host, and click OK.

The Execution Result page is displayed. Click Close.

Step 3 Create a mapping.


On DeviceManager, create a mapping between the host group and LUN group.
Choose Services > Block Service > LUN Groups > LUN Groups, select LUNGroup001, and
click Map.

The Map LUN Group dialog box is displayed. Select HostGroup001 and click OK.

In the displayed Danger dialog box, confirm the warning and click OK.

The mapping status of LUNGroup001 is Mapped, as shown in the following figure.

Configuring Host Connectivity


Step 1 Create logical ports.
On DeviceManager, choose Services > Network > Logical Ports and click Create.

On the displayed Create Logical Port page, set logical port parameters. Click OK.

Similarly, create another logical port. The following figure shows the result.

Step 2 Establish iSCSI connections.


Log in to the service host as user root, and modify the parameters of network ports in
their respective network configuration files. The following uses network port eth1 as an
example:
[root@dca-host ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"
IPADDR=192.168.1.33
PREFIX=24

Restart the NIC after configuring the IP address:


[root@dca-host ~]# ifdown eth1 && ifup eth1
Device 'eth1' successfully disconnected.
Connection successfully activated (D-Bus active path:
/org/freedesktop/NetworkManager/ActiveConnection/29)

Check the iSCSI software package:


[root@dca-host ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-iscsiuio-6.2.0.874-22.el7_9.x86_64
iscsi-initiator-utils-6.2.0.874-22.el7_9.x86_64

Note: If the iSCSI software package is not installed, install it before performing
subsequent operations.
To make the initiator name easy to identify, you need to change the initiator name and
replace the suffix with the host name dca-host.
[root@dca-host ~]# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:dca-host

Change the initiator name and restart iscsid.


[root@dca-host ~]# systemctl restart iscsid

Search for the targets based on the IP addresses of the logical ports configured on the
storage system:
[root@dca-host ~]# iscsiadm -m discovery -t st -p 192.168.1.30
192.168.1.30:3260,1025 iqn.2014-08.com.example::2100030000040506::20400:192.168.1.30
[root@dca-host ~]# iscsiadm -m discovery -t st -p 192.168.1.31
192.168.1.31:3260,1025 iqn.2014-08.com.example::2100030000040506::20400:192.168.1.31

Log in to the targets:


[root@dca-host ~]# iscsiadm -m node -p 192.168.1.30 -l
Logging in to [iface: default, target: iqn.2014-08.com.example::2100030000040506::20400:192.168.1.30, portal: 192.168.1.30,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.com.example::2100030000040506::20400:192.168.1.30, portal: 192.168.1.30,3260] successful.
[root@dca-host ~]# iscsiadm -m node -p 192.168.1.31 -l
Logging in to [iface: default, target: iqn.2014-08.com.example::2100030000040506::20400:192.168.1.31, portal: 192.168.1.31,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.com.example::2100030000040506::20400:192.168.1.31, portal: 192.168.1.31,3260] successful.

Configure the iSCSI service to run automatically upon host startup:


[root@dca-host ~]# systemctl enable iscsi.service

Step 3 Add an initiator.


On DeviceManager of the storage device, choose Services > Block Service > Host
Groups > Hosts, select the host for which you want to add an initiator, click More on the
right, and select Add Initiator.

Click the iSCSI tab, select the initiator with the corresponding host name, and click OK.

Confirm the warning and click OK.

Click the host name. On the page that is displayed, click the Initiators tab. Ensure that
the Status of the initiator is Online.

Step 4 Scan for LUNs on the host.


Check the disk devices on the host:
[root@dca-host ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7317

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 83886079 41942016 83 Linux

Scan for LUNs on the host.


[root@dca-host ~]# hot_add
Begin to delete LUNs whose mappings do not exist
Begin to delete LUNs whose mappings are changed.
no vitural lun found
begin scan host0
begin scan host1
begin scan host2
begin scan host4
The device scanning is complete.

Check the disk devices again. Four 1 GB disks are found.


[root@dca-host ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk label type: dos
Disk identifier: 0x000a7317

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 83886079 41942016 83 Linux

Disk /dev/sda: 1073 MB, 1073741824 bytes, 2097152 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
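
Because UltraPath is installed, the four LUNs should also be visible as virtual LUNs aggregated over their physical paths. A sketch using the upadmin query command (column layout varies by UltraPath release):

[root@dca-host ~]# upadmin show vlun   # lists the virtual LUNs and their path status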

Using Storage Space on the Host


Step 1 Create and mount file systems.
Partition the discovered disk /dev/sda.
[root@dca-host ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x64f05bb5.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p


Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.

Description of the fdisk command:


fdisk /dev/sda #Partition the sda block device. (You can run the fdisk -l command to obtain the drive letter.)
fdisk parameters are described as follows:
n (starting partition creation)
p (creating a primary partition)
1 (setting the partition ID to 1)
w (saving the partition table)
Note: You do not need to specify the start and end sectors. Press Enter.

Create a file system on the partition volume:


[root@dca-host ~]# mkfs.xfs /dev/sda1
Discarding blocks...Done.
meta-data=/dev/sda1 isize=512 agcount=4, agsize=65472 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=261888, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=855, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

Similarly, partition and format the other three LUNs.
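
Repeating the interactive fdisk dialog for each disk is error-prone. A scripted alternative is sketched below using parted (an assumption; the lab itself uses interactive fdisk):

for d in sdb sdc sdd; do
  parted -s /dev/$d mklabel msdos mkpart primary xfs 1MiB 100%   # one primary partition spanning the disk
  mkfs.xfs /dev/${d}1                                            # format it, matching the file system used for sda1
done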


Create a mounting directory:
mkdir /mnt/lab01 /mnt/lab02 /mnt/lab03 /mnt/lab04

Mount the file systems:


[root@dca-host ~]# mount /dev/sda1 /mnt/lab01&mount /dev/sdb1 /mnt/lab02&mount /dev/sdc1
/mnt/lab03&mount /dev/sdd1 /mnt/lab04
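
These mounts do not survive a reboot. If persistence is wanted, one approach is /etc/fstab entries with the _netdev option so that mounting waits for the iSCSI network; a sketch, not part of the original lab steps:

/dev/sda1  /mnt/lab01  xfs  defaults,_netdev  0 0
/dev/sdb1  /mnt/lab02  xfs  defaults,_netdev  0 0
/dev/sdc1  /mnt/lab03  xfs  defaults,_netdev  0 0
/dev/sdd1  /mnt/lab04  xfs  defaults,_netdev  0 0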

Step 2 Write test files.


Create test1, test2, test3, and test4 files in /mnt/lab01, /mnt/lab02, /mnt/lab03, and
/mnt/lab04, respectively. Write 111, 222, 333, and 444 to test1, test2, test3, and test4
files, respectively.
[root@dca-host ~]# echo 111 > /mnt/lab01/test1&echo 222 > /mnt/lab02/test2&echo 333 >
/mnt/lab03/test3&echo 444 > /mnt/lab04/test4
[1] 24876
[2] 24877
[3] 24878
[1] Done echo 111 > /mnt/lab01/test1
[2]- Done echo 222 > /mnt/lab02/test2
[3]+ Done echo 333 > /mnt/lab03/test3
[root@dca-host ~]# cat /mnt/lab01/test1 /mnt/lab02/test2 /mnt/lab03/test3 /mnt/lab04/test4
111
222
333
444
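
Before taking the snapshots in the next section, it is worth flushing cached writes so that the HyperCDP objects capture the test files just written; a small precaution added here, not part of the original steps:

[root@dca-host ~]# sync   # flush file system buffers to the mapped LUNs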

1.3 Configuring HyperCDP


Checking License Files
Log in to DeviceManager. Choose Settings > License Management. In the middle
information pane, verify the information about the active license file.

Creating a HyperCDP Object for the LUN


Step 1 Create a HyperCDP object.
Choose Data Protection > Plans > Block HyperCDP > HyperCDP Objects. From the
vStore drop-down list in the upper left corner, select the owning vStore of the LUN for
which you want to create a HyperCDP object and click Create.

The Create HyperCDP Object page is displayed.


In the Available LUNs list, select LUN001 and add it to the Selected LUNs area. Set the
HyperCDP object name to HCDP001, retain the default values for other parameters, and
click OK.

Check that the HyperCDP object has been created and its running status is Activated.

Creating a HyperCDP Consistency Group for a Protection Group
Step 1 Create a protection group (PG).
Choose Data Protection > Protection Entities > Protection Groups > Protection
Groups. From the vStore drop-down list in the upper left corner, select a vStore for which
you want to create a protection group and click Create.

The Create PG page is displayed on the right.


Set the name of the protection group to PG0001.
Set the objects in the protection group: Select Existing LUN and then select LUN002 and
LUN003 from the Available LUNs list.
Retain the default values for other parameters and click OK.

The Execution Result page is displayed. Click Close.

Step 2 Create a HyperCDP consistency group.


Choose Data Protection > Plans > Block HyperCDP > HyperCDP CGs. From the vStore
drop-down list in the upper left corner, select the owning vStore of the LUN for which
you want to create a HyperCDP CG and click Create.

The Create HyperCDP CG page is displayed. Select PG0001 as the protection group and set the name of the HyperCDP CG to HCDPCG001. Retain the default settings for other parameters and click OK.

The Execution Result page is displayed. Click Close.



Check that the HyperCDP CG has been created and its running status is Activated.

Creating a HyperCDP Schedule


Step 1 Create a HyperCDP schedule.
Choose Data Protection > Plans > Block HyperCDP > HyperCDP Schedules. From the
vStore drop-down list in the upper left corner, select the owning vStore of the LUN or PG
for which you want to create a HyperCDP schedule and click Create.

The Create HyperCDP Schedule page is displayed.


Configure basic information about the HyperCDP schedule.
Name: HCDPPlan001
Member: Select LUN004 from Available LUNs.

Click Next.

Configuration Policy: Set Fixed Period to every 1 minute and Retained Objects to 60. With a 1-minute interval and 60 retained objects, the schedule keeps roughly the most recent hour of recovery points.
Click Next.

Click OK.

The Execution Result page is displayed. Click Close.

Step 2 View HyperCDP objects.


Choose Data Protection > Plans > Block HyperCDP > HyperCDP Objects. From the
vStore drop-down list in the upper left corner, select the owning vStore of the LUN for
which you want to create the HyperCDP object, click LUN004. HyperCDP objects
generated by the HyperCDP schedule are displayed on the right.

1.4 Managing HyperCDP


Managing HyperCDP Objects
1.4.1.1 Modifying HyperCDP Object Attributes
Choose Data Protection > Plans > Block HyperCDP > HyperCDP Objects.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select LUN001, click More on the right of HCDP001, and select Modify.

The Modify HyperCDP Object page is displayed on the right.


Change the name to HCDP008 and change the rollback speed to High. Click OK.

Confirm the warning and click OK.

1.4.1.2 Using a HyperCDP Object for Rollback


This section describes how to use a HyperCDP object to roll back LUN001 after data on
LUN001 is deleted.

Step 1 Delete host data.


[root@dca-host ~]# rm -f /mnt/lab01/test1
[root@dca-host ~]# ls /mnt/lab01/test1
ls: cannot access /mnt/lab01/test1: No such file or directory

Step 2 Unmount the partition.


[root@dca-host ~]# umount /mnt/lab01

Step 3 Roll back data.


Choose Data Protection > Plans > Block HyperCDP > HyperCDP Objects.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select LUN001, select HCDP008, and click Start Rollback.

In the Start HyperCDP Object Rollback dialog box that is displayed on the right, click
OK.

Confirm the warning and click OK.



The running status is Rolling back and the rollback progress is displayed. After the
rollback is complete, the running status changes back to Activated.

Step 4 Mount the partition.


[root@dca-host ~]# mount /dev/sda1 /mnt/lab01

Step 5 Verify data.


[root@dca-host ~]# ls /mnt/lab01/test1
/mnt/lab01/test1
[root@dca-host ~]# cat /mnt/lab01/test1
111

The data has been restored.
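For a stricter check than cat, you can record a checksum of the test file before the test and verify it after the rollback. This optional verification uses standard Linux tools and is not part of the original procedure:

[root@dca-host ~]# md5sum /mnt/lab01/test1 > /root/test1.md5   # run before deleting the data
[root@dca-host ~]# md5sum -c /root/test1.md5                   # run after the rollback and remount
/mnt/lab01/test1: OK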

1.4.1.3 Creating a Snapshot Duplicate


Choose Data Protection > Plans > Block HyperCDP > HyperCDP Objects.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select LUN001, select HCDP008, and choose More > Create Duplicate on the right.

The Create Snapshot Duplicate page is displayed on the right.


Select New duplicate, name it LUN001-HCDP008, and click OK.

Choose Services > Block Service > LUN Groups > LUNs. The created snapshot duplicate
is displayed.

Managing a HyperCDP CG
1.4.2.1 Modifying Properties of a HyperCDP CG
Choose Data Protection > Plans > Block HyperCDP > HyperCDP CGs.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP CG and select Modify.

The Modify HyperCDP CG page is displayed.


Change the name to HCDPCG009 and change the rollback rate to High. Click OK.

Confirm the warning and click OK.

1.4.2.2 Using a HyperCDP CG for Rollback


This section describes how to use a HyperCDP CG to roll back deleted or modified data of
a LUN in the HyperCDP CG.

Step 1 Delete and modify host data.


[root@dca-host ~]# rm -f /mnt/lab02/test2
[root@dca-host ~]# ls /mnt/lab02/test2
ls: cannot access /mnt/lab02/test2: No such file or directory
[root@dca-host ~]# echo xxx >> /mnt/lab03/test3
[root@dca-host ~]# cat /mnt/lab03/test3
333
xxx

Step 2 Unmount the partition.


[root@dca-host ~]# umount /mnt/lab02 /mnt/lab03

Step 3 Roll back data.


Choose Data Protection > Plans > Block HyperCDP > HyperCDP CGs.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select the desired HyperCDP CG and click Start Rollback.

In the Start HyperCDP CG Rollback dialog box that is displayed on the right, click OK.

Confirm the warning and click OK.



The running status changes to Rolling back. After the rollback is complete, the running
status changes back to Activated.

Step 4 Mount the partition.


[root@dca-host ~]# mount /dev/sdb1 /mnt/lab02 && mount /dev/sdc1 /mnt/lab03

Step 5 Verify data.


[root@dca-host ~]# ls /mnt/lab02/test2 /mnt/lab03/test3
/mnt/lab02/test2 /mnt/lab03/test3
[root@dca-host ~]# cat /mnt/lab02/test2 /mnt/lab03/test3
222
333

The data is restored.

1.4.2.3 Creating a Duplicate for a HyperCDP CG


Choose Data Protection > Plans > Block HyperCDP > HyperCDP CGs.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the HyperCDP CG and select Create Duplicate.

The Create Snapshot CG Duplicate page is displayed.


Select New duplicate, name it PG0001-HCDPCG009, and click OK.

Choose Services > Block Service > LUN Groups > LUNs. The created snapshot duplicate
is displayed.

Managing a HyperCDP Schedule


1.4.3.1 Modifying a HyperCDP Schedule
Step 1 Disable the HyperCDP schedule.
Choose Data Protection > Plans > Block HyperCDP > HyperCDP Schedules.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP schedule and choose Disable.

Confirm the warning and click OK.

Step 2 Modify the HyperCDP schedule.


Click More on the right of the desired HyperCDP schedule and choose Modify.

The Modify HyperCDP Schedule page is displayed.


Change the name to HCDPPlan002, set Fixed Period to 2 minutes and Retained Objects
to 30. Click OK.

Step 3 Enable a HyperCDP schedule.


Click More on the right of the desired HyperCDP schedule and choose Enable.

1.4.3.2 Adding or Removing a Member


Step 1 Add a member.
Choose Data Protection > Plans > Block HyperCDP > HyperCDP Schedules.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP schedule and select Add Member.

The Add Member page is displayed. In the LUN area, select LUN001, add it to the right
pane, and click OK.

The execution result is displayed. Click Close.



Step 2 Remove a member.


Click More on the right of the desired HyperCDP schedule and select Remove Member.

The Remove Member page is displayed. In the LUN area, select LUN001 and add it to
the right pane. Click OK.

Confirm the warning and click OK.

The Execution Result page is displayed. Click Close.



2 HyperCDP for File Services

2.1 Introduction
About This Lab Practice
This lab practice is about HyperCDP for file services.

Objectives
⚫ Understand the principles and usage of HyperCDP for file services.
⚫ Master operations for configuring and managing HyperCDP for file services.

Networking Topology
⚫ Devices

Device Type            Quantity   Software Version
Linux host             1          CentOS 7.6
Windows host           1          Windows Server 2016
Huawei flash storage   1          6.1.3, UltraPath_31.2.0



⚫ Networking topology

⚫ Network information

Device        Management IP Address   Storage IP Address   Description
DCA-Host      192.168.0.33            192.168.1.33         Block services
                                      192.168.2.33         File services
DCA-Win       192.168.0.34            192.168.1.34         Block services
                                      192.168.2.34         File services
DCA-Storage   192.168.0.30            192.168.1.30         Block service A
                                      192.168.1.31         Block service B
                                      192.168.2.30         File service A
                                      192.168.2.31         File service B

Note: The IP addresses used in this lab are for reference only. The actual IP addresses are
subject to the lab environment plan.

2.2 Basic File Service Configuration


Creating a File System
Step 1 Check the license.
Log in to DeviceManager. Choose Settings > License Management.
Verify the information about active license files and check whether NAS Foundation is
included.

Step 2 Create a file system.


Create two 1 GB file systems.
Choose Services > File System and click Create. The Create File System dialog box is
displayed on the right.
Set file system parameters.
Name: FS001
Owning vStore: System_vStore
Capacity: 1 GB
NFS share: disabled
CIFS share: disabled
Add to HyperCDP Schedule: disabled
Retain the default settings for other parameters and click OK.

Confirm the warning and click OK.

Similarly, create a second file system.


Choose Services > File System and click Create. The Create File System dialog box is
displayed on the right.

Set file system parameters.


Name: FS002
Owning vStore: System_vStore
Capacity: 1 GB
NFS share: disabled
CIFS share: disabled
Add to HyperCDP Schedule: disabled
Retain the default settings for other parameters and click OK.

Confirm the warning and click OK.



Creating a Logical Port


On DeviceManager, choose Services > Network > Logical Ports and click Create.

On the displayed Create Logical Port page, set the logical port parameters (a service port with the NFS data protocol and IP address 192.168.2.30, consistent with the network plan) and click OK.

The following figure shows the result.

Configuring an NFS Share


Choose Services > File Service > Shares > NFS Shares. Click Create.

The Create NFS Share page is displayed on the right. Select file system FS001 for which
you want to create a share.
Click Add to set the access permission of the NFS share.

The Add Client dialog box is displayed on the right.


Set Type to Host, Clients to 192.168.2.0/24, UNIX Permission to Read-write, select
no_root_squash for root Permission Control, and click OK.

In the Create NFS Share dialog box, click OK.



The Execution Result page is displayed. Click Close.



Similarly, create an NFS share for FS002.

Using Storage Space on the Host


Step 1 Check the network and software packages.
Log in to the service host as user root, and modify the parameters of network ports in
their respective network configuration files. The following uses network port eth1 as an
example:
[root@dca-host ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"
IPADDR=192.168.2.33
PREFIX=24

Restart the NIC after configuring the IP address:


[root@dca-host ~]# ifdown eth1 && ifup eth1
Device 'eth1' successfully disconnected.
Connection successfully activated (D-Bus active path:
/org/freedesktop/NetworkManager/ActiveConnection/29)

Check the NFS software package.


[root@dca-host ~]# rpm -qa | grep nfs
libnfsidmap-0.25-19.el7.x86_64
nfs-utils-1.3.0-0.68.el7.2.x86_64

Note: If the NFS software package is not installed, install it before performing subsequent
operations.
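For example, on CentOS 7 the package can be installed from the default repository (assuming the host has repository access):

[root@dca-host ~]# yum install -y nfs-utils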

Step 2 Mount the file system.


[root@dca-host ~]# showmount -e 192.168.2.30
Export list for 192.168.2.30:
/FS002 192.168.2.0/24
/FS001 192.168.2.0/24
[root@dca-host ~]# mkdir /mnt/nfs1 /mnt/nfs2
[root@dca-host ~]# mount -t nfs 192.168.2.30:/FS001 /mnt/nfs1
[root@dca-host ~]# mount -t nfs 192.168.2.30:/FS002 /mnt/nfs2

Step 3 Write test files.


[root@dca-host ~]# echo aaa > /mnt/nfs1/testa
[root@dca-host ~]# echo bbb > /mnt/nfs2/testb
[root@dca-host ~]# cat /mnt/nfs1/testa /mnt/nfs2/testb
aaa
bbb
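NFS clients may cache recent writes, so it is good practice to flush data to the storage system before creating a HyperCDP object; otherwise, a recovery point could miss the latest writes. This is a general Linux precaution rather than a documented product requirement:

[root@dca-host ~]# sync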

2.3 Configuring HyperCDP


Checking License Files
Log in to DeviceManager. Choose Settings > License Management. In the middle
information pane, verify the information about the active license file.

Creating a HyperCDP Object for the File System


Step 1 Create a HyperCDP object.
Choose Data Protection > Plans > File HyperCDP > HyperCDP Objects. From the vStore
drop-down list in the upper left corner, select the owning vStore of the file system for
which you want to create a HyperCDP object and click Create.

The Create HyperCDP Object page is displayed.


In the Available File Systems list, select FS001 and add it to the Selected File Systems
area. Set the HyperCDP object name to HCDP002, retain the default values for other
parameters, and click OK.

Check that the HyperCDP object has been created and its health status is Normal.

Creating a HyperCDP Schedule for the File System


Step 1 Create a HyperCDP schedule.
Choose Data Protection > Plans > File HyperCDP > HyperCDP Schedules. From the vStore drop-down list in the upper left corner, select the owning vStore of the file system for which you want to create a HyperCDP schedule and click Create.

The Create HyperCDP Schedule page is displayed.


Configure basic information about the HyperCDP schedule.
Name: HCDPPlan003
Member: Select FS002 from the available file systems.
Click Next.

Configuration Policy: Set Fixed Period to every 1 minute and Retained Objects to 60.
Click Next.

Click OK.

The Execution Result page is displayed. Click Close.



Step 2 View HyperCDP objects.


Choose Data Protection > Plans > File HyperCDP > HyperCDP Objects. From the vStore drop-down list in the upper left corner, select the owning vStore of the file system, and then click FS002. The HyperCDP objects generated by the HyperCDP schedule are displayed on the right.

2.4 Managing HyperCDP


Managing HyperCDP Objects
2.4.1.1 Modifying HyperCDP Object Attributes
Choose Data Protection > Plans > File HyperCDP > HyperCDP Objects.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select FS001, click More on the right of HCDP002, and select Modify.

The Modify HyperCDP Object page is displayed on the right.


Change the name to HCDP006 and click OK.

2.4.1.2 Using a HyperCDP Object for Rollback


Step 1 Delete host data.
[root@dca-host ~]# rm -f /mnt/nfs1/testa
[root@dca-host ~]# ls /mnt/nfs1/testa
ls: cannot access /mnt/nfs1/testa: No such file or directory

Step 2 Unmount the file system.


[root@dca-host ~]# umount /mnt/nfs1

Step 3 Roll back data.


Choose Data Protection > Plans > File HyperCDP > HyperCDP Objects.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Select FS001, select HCDP006, and click Roll Back.

In the Roll Back to HyperCDP Object dialog box that is displayed on the right, click OK.

Confirm the warning and click OK.

The rollback progress is displayed. After the rollback is complete, the rollback progress is
displayed as --.

Step 4 Mount the file system.


[root@dca-host ~]# mount -t nfs 192.168.2.30:/FS001 /mnt/nfs1

Step 5 Verify data.


[root@dca-host ~]# ls /mnt/nfs1/testa
/mnt/nfs1/testa
[root@dca-host ~]# cat /mnt/nfs1/testa
aaa

The data is restored.

Managing a HyperCDP Schedule


2.4.2.1 Modifying a HyperCDP Schedule
Step 1 Disable the HyperCDP schedule.
Choose Data Protection > Plans > File HyperCDP > HyperCDP Schedules.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP schedule and choose Disable.

Confirm the warning and click OK.



Step 2 Modify the HyperCDP schedule.


Click More on the right of the desired HyperCDP schedule and choose Modify.

The Modify HyperCDP Schedule page is displayed.


Change the name to HCDPPlan004, set Fixed Period to 2 minutes and Retained Objects
to 30. Click OK.

Step 3 Enable a HyperCDP schedule.


Click More on the right of the desired HyperCDP schedule and choose Enable.

2.4.2.2 Adding or Removing a Member


Step 1 Add a member.
Choose Data Protection > Plans > File HyperCDP > HyperCDP Schedules.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP schedule and select Add Member.

The Add Member page is displayed. In the Available File Systems area, select FS001,
add it to the right pane, and click OK.

The Execution Result page is displayed. Click Close.



Step 2 Remove a member.


Click More on the right of the desired HyperCDP schedule and select Remove Member.

The Remove Member page is displayed. In the Available File Systems area, select
FS001, and add it to the right pane. Click OK.

Confirm the warning and click OK.

The Execution Result page is displayed. Click Close.


Huawei Storage Certification Training

HCIP-Storage

HyperMetro Feature Configuration

Lab Guide
ISSUE: 5.5

Huawei Technologies Co., Ltd.


About This Document

Overview
This lab guide provides HyperMetro feature configuration practices to help trainees
consolidate and review previously learned content. Upon completion of this course, you
will be able to get familiar with the configuration process and operations related to
HyperMetro for file systems.

Description
This lab guide introduces one lab practice to simulate HyperMetro configuration for file
systems of Huawei flash storage.
⚫ Lab practice 1: Use HyperMetro to implement an active-active solution for file
systems.

Background Knowledge Required


This course is for HCIP certification. To better understand this course, familiarize yourself
with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Operations of the Linux operating system (OS) and basic network knowledge.

Contents

About This Document
Overview
Description
Background Knowledge Required
1 References and Tools
1.1 References
1.2 Software and Tools
2 HyperMetro Feature Configuration
2.1 Introduction
2.1.1 About This Lab Practice
2.1.2 Objectives
2.1.3 Networking Topology
2.2 Basic Service Configuration
2.2.1 Configuring a File System
2.2.2 Configuring an NFS Share
2.3 HyperMetro-based Operations
2.3.1 Configuring a Quorum Server
2.3.2 Configuring HyperMetro
2.4 HyperMetro Tests
2.5 Quiz

1 References and Tools

1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.
Product documentation

Log in to Huawei's technical support website (https://support.huawei.com/enterprise/) and input the


name of a document or tool in the search box to search for, browse, and download the desired
document or tool.

1.2 Software and Tools


1. PuTTY

Use the open-source software PuTTY to log in to a terminal. You can use the common domain name
(putty.org) of PuTTY to browse or download the desired document or tool.

2. eStor

Log in to the Huawei technical support website (http://support.huawei.com/enterprise/) and type


eStor in the search box to search for, browse, and download the desired document or tool.

3. WinSCP

It is used for transferring files between Windows and Linux OSs. WinSCP is recommended. You can
select other similar tools as required.

4. Quorum server software package

The software package name is OceanStor_Dorado_Version Number_QuorumServer_Architecture Type.zip.

Log in to http://support.huawei.com/enterprise, click Software Download under PRODUCT


SUPPORT, choose Flash Storage under Data Storage, and find your storage product model. On the
page of the desired product model, specify the version in Select Version, and click the desired version
in the Version and Patch area. Then download the software on the displayed page.

2 HyperMetro Feature Configuration

2.1 Introduction
2.1.1 About This Lab Practice
This lab practice is about the local high availability (HA) disaster recovery (DR) for data
centers where HyperMetro is deployed.
In the lab practice, file systems are configured on the two OceanStor 6.1.x storage
systems and mounted to local hosts. Local HA is implemented using the HyperMetro
feature.

2.1.2 Objectives
⚫ Understand the basic principles of HyperMetro DR for flash storage.
⚫ Master the HyperMetro DR networking architecture for flash storage.
⚫ Master the procedures and methods to configure HyperMetro DR for flash storage.
⚫ Master the operation and configuration methods of the HyperMetro DR for flash
storage.

2.1.3 Networking Topology


Figure 2-1 Topology of the HyperMetro feature for the flash storage lab
environment

The preceding figure shows the lab environment networking for the flash storage
HyperMetro feature, where DCA-Host and DCA-Storage are deployed in data center A
(DCA) and DCB-Host and DCB-Storage are deployed in data center B (DCB). DCA and
DCB work as a backup to each other, and a quorum server QuorumServer is deployed
between them.

The storage device can be OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor.

Device Configurations
To meet the requirements of HCIP-Storage lab practices, you are advised to use the
following configurations in each lab environment.

Device Type          Model                                Quantity   Earliest Software Version
Flash storage        OceanStor 6.1.x, OceanStor           2          6.1.5
                     Dorado 6.1.x, or eStor
Application server   x86 server                           2          CentOS 7.6
Management switch    Ethernet switch                      2          Version 5.160
Service switch       CE series switch                     4          Version 8.210

Network Planning
The following table lists the network planning for this lab practice.

Site          Device            Management IP Address   Storage IP Address   Description
DCA           DCA-Host          192.168.0.33            192.168.2.33         File services
              DCA-Storage       192.168.0.30            192.168.1.30         Block services
                                                        192.168.2.30         File services
                                                        192.168.3.30         Replication
                                                        192.168.4.30         Arbitration
DCB           DCB-Host          192.168.0.43            192.168.2.43         File services
              DCB-Storage       192.168.0.40            192.168.1.40         Block services
                                                        192.168.2.40         File services
                                                        192.168.3.40         Replication
                                                        192.168.4.40         Arbitration
Arbitration   DC-QuorumServer   192.168.0.70            192.168.4.70         Arbitration

The preceding information is for reference only. Before performing the lab practice, contact the
administrator to access or configure devices to meet the actual requirements.

2.2 Basic Service Configuration


2.2.1 Configuring a File System
Step 1 Check the license.

1. On DeviceManager of the storage device in DCA, check whether a license file has
been imported and whether SmartQuota and NAS Foundation are displayed in the
feature list.
2. On the navigation bar, choose Settings > License Management.

3. In the middle function pane, check whether SmartQuota and NAS Foundation exist
in the Feature column.

Step 2 Create a storage pool.

To ensure that the application servers can use the storage space of the storage systems,
log in to the storage device in DCA, and create a storage pool named StoragePool001.
1. Choose System > Storage Pools.
2. Click Create.
3. Set the storage pool parameters.

4. Click OK.

Step 3 Create a file system.

Create a 10 GB file system named FileSystem001.


1. Choose Services > File Service > File Systems.
2. In the vStore drop-down list in the upper left corner, select Tenant-B for which you
want to create a file system.
3. Click Create.
4. Set the name of the file system to FileSystem001 and the owning storage pool to
StoragePool001.
5. Set the capacity of the file system to 10 GB and the application type to
NAS_Default. Disable the shares and protection.

6. Click OK.

Step 4 Create a dtree.

A dtree is created to manage the space used by all files in a directory and the access
permission of the directory.
1. Choose Services > File Service > Dtrees.
2. Select Tenant-B, to which the desired file system belongs, from the vStore drop-down list in the upper left corner.
3. Click Create.
4. Set the dtree name to Dtree001.

5. Click OK.

Step 5 Create a quota.

A quota is created to control the space usage or file quantity of a dtree.


1. Choose Services > File Service > Quotas > Custom Quotas.
2. Select Tenant-B, to which the desired file system belongs, from the vStore drop-down list in the upper left corner.
3. Click Create.
4. Select file system FileSystem001 and dtree Dtree001 for which you want to create a
quota.
5. Set Quota Type to Directory quota.
6. Set Hard Quota and Soft Quota of Space Quota to 10 GB and 8 GB, respectively.
7. Set Hard Quota and Soft Quota of File Quantity Quota to 30 thousand and 20
thousand, respectively.

8. Click OK.

2.2.2 Configuring an NFS Share


Step 1 Enable the NFSv4 service.

1. Choose Settings > File Service > NFS Service.


2. Select Tenant-B, for which you want to enable the NFSv4 service, from the vStore drop-down list in the upper left corner.
3. Click Modify in the upper right.

4. Select NFSv4.0 Service as required.


5. Click Save.

6. Confirm the information in the dialog box and select I have read and understand
the consequences associated with performing this operation.

7. Click OK.

Step 2 Configure the network and create a logical port.

1. Choose Services > Network > Logical Ports.


2. Click Create. The Create Logical Port page is displayed on the right.

3. Configure logical port parameters as follows: Set Name to FSLP-AA, Role to Service,
Data Protocol to NFS, IP Address to 192.168.2.30, and Home Port to
CTE0.A.IOM4.P1. In Activation Status, select Activate.

4. Click OK.

Step 3 Create an NFS share.

1. Choose Services > File Service > Shares > NFS Shares.
2. Select Tenant-B, to which the desired file system belongs, from the vStore drop-down list in the upper left corner.
3. Click Create.
4. Set basic parameters of the NFS share as follows: Set File System to FileSystem001
and Dtree to Dtree001.

5. Click OK.

Step 4 Add an NFS share client.

1. Choose Services > File Service > Shares > NFS Shares.
2. Select Tenant-B, to which the desired NFS share belongs, from the vStore drop-down list in the upper left corner.
3. Click More on the right of the desired NFS share and select Add Client.

4. Set client attributes of DCA-Host as follows: Enter 192.168.2.33 in the Client text
box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.

5. Click OK and view the result.

Step 5 Access the NFS share.

1. Log in to the client of DCA-Host as user root.


2. Run the showmount -e 192.168.2.30 command to view available NFS shares of the
storage system.

[root@dca-host ~]# showmount -e 192.168.2.30


Export list for 192.168.2.30:
/FileSystem001/Dtree001 192.168.2.33

3. Run the mount -t nfs 192.168.2.30:/FileSystem001/Dtree001 /mnt command to


mount the NFS share.
4. Run the df -h command to check whether the mount is successful.

[root@dca-host ~]# mount -t nfs 192.168.2.30:/FileSystem001/Dtree001 /mnt


[root@dca-host ~]# df -h

Filesystem Size Used Avail Use% Mounted on


devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 8.6M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/sda1 40G 2.7G 35G 8% /
tmpfs 367M 0 367M 0% /run/user/0
192.168.2.30:/FileSystem001/Dtree001 10G 0 10G 0% /mnt

Step 6 Create a test file.

1. Go to the /mnt directory and create a file named a.txt.

[root@dca-host ~]# cd /mnt


[root@dca-host mnt]# touch a.txt
[root@dca-host mnt]# ls

The following information is displayed:

a.txt

2. Write "Hello Huawei!" to the created a.txt file.

[root@dca-host mnt]# echo 'Hello Huawei!' > a.txt


[root@dca-host mnt]# cat a.txt
Hello Huawei!
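In this lab the share is mounted manually, so the mount does not survive a host reboot. In a production environment you would typically make it persistent through /etc/fstab. A minimal sketch follows; the _netdev option defers mounting until the network is up, and the options shown are common defaults rather than values mandated by this lab:

192.168.2.30:/FileSystem001/Dtree001  /mnt  nfs  defaults,_netdev  0 0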

2.3 HyperMetro-based Operations


2.3.1 Configuring a Quorum Server
Prerequisites: The quorum site has been connected to the two data centers, and the
required quorum server software package has been obtained.

Step 1 Log in to the quorum server.

Log in to the command-line interface (CLI) of the system and switch to user root.

Step 2 Configure the network.

1. Use the vim tool to edit the network configuration file and modify the network
information.

vim /etc/sysconfig/network-scripts/ifcfg-eth0

2. Modify the configuration of network port eth0 with BOOTPROTO set to static and
ONBOOT set to yes.

BOOTPROTO=static
ONBOOT=yes
IPADDR="192.168.0.70"
PREFIX="24"
GATEWAY="192.168.0.1"

3. Modify the configuration of network port eth1.

BOOTPROTO=static
ONBOOT=yes
IPADDR="192.168.4.70"
PREFIX="24"
GATEWAY="192.168.4.1"

4. Restart the network service.

systemctl restart network
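You can then confirm that both IP addresses are active. The ip command from the iproute2 package is available on CentOS 7 by default:

[root@qs ~]# ip -4 addr show eth0
[root@qs ~]# ip -4 addr show eth1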

Step 3 Configure a firewall.

1. Enable the firewall function and set the port number allowed by the firewall to
30002.

systemctl start firewalld


firewall-cmd --permanent --add-port=30002/tcp

The following information is displayed:

success

2. Restart the firewall service and check whether the firewall configuration takes effect.

systemctl restart firewalld


firewall-cmd --query-port=30002/tcp

The following information is displayed:

yes

Step 4 Install the quorum server software.

1. Upload the quorum server software package


OceanStor_QuorumServer_6.1.5_X86_64.zip to the /root directory as user root.
2. Decompress the software package and install the software.

unzip OceanStor_QuorumServer_6.1.5_X86_64.zip
cd package
sh ./quorum_server.sh -install
Verify the QuorumServer existence.
The QuorumServer is not installed.
The current user is the root user. A quorum server administrator account needs to be provided.
Continue to install?
<Y|N>:Y

3. Enter Y to continue the installation.



Enter an administrator account for the quorum server:[default: quorumsvr]:

4. Press Enter to use the default settings to continue the installation.

Created new account: quorumsvr.


usermod: no changes
Changing password for user quorumsvr.
New password:
Retype new password:

5. Enter the user-defined password to continue the installation.

passwd: all authentication tokens updated successfully.


Installing the quorum server.
Preparing... ################################# [100%]
Updating / installing...
1:QuorumServer-Dorado_V6-linux ################################# [100%]
[Notice] No old configuration need to resume.
Created symlink from /etc/systemd/system/multi-user.target.wants/quorum_server.service to
/etc/systemd/system/quorum_server.service.
QuorumServer install success completed.

6. Verify that the installation is successful.

qsadmin
start main!
Waiting for connecting to server...
admin:/>

Step 5 Configure the quorum server.

1. In the CLI of the quorum server software, run the add server_ip command to add all
IP addresses and port ID of the quorum server to the quorum server software for
management.

admin:/>add server_ip ip=192.168.0.70 port=30002


Command executed successfully.
admin:/>add server_ip ip=192.168.4.70 port=30002
Command executed successfully.

2. After the configuration is complete, run the show server_ip command.


If the command output shows the added IP addresses and port ID, the configuration is
successful.

admin:/>show server_ip

Index Server IP Server Port


----- --------------- ---------------
1 192.168.0.70 30002
2 192.168.4.70 30002

Index Local IP Local Port Remote IP Remote Port State
----- --------------- --------------- --------------- --------------- ----------
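After exiting the qsadmin CLI with quit, you can also verify the setup from the OS side: check that the service is running and that the configured port is listening. The service name quorum_server comes from the symlink shown in the installation output; ss is part of iproute2:

[root@qs ~]# systemctl status quorum_server
[root@qs ~]# ss -tln | grep 30002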

Step 6 Disable the arbitration whitelist.

1. Log in to the quorum server and run the qsadmin command to go to the CLI of the
quorum server software.
2. On the CLI, run the change white_list enable_switch=no command to disable the
whitelist.

admin:/>change white_list enable_switch=no


Command executed successfully.

Step 7 Issue and import the HyperMetro arbitration certificate.

1. Log in to DeviceManager of DCA-Storage and choose Settings > Certificates >


Certificate Management.

2. Select HyperMetro arbitration certificate and click Export Request File. Set
Certificate Key Algorithm to RSA 2048.

3. Find the .csr file in the download path of the browser.


4. Copy the .csr certificate request file to the /opt/quorum_server/export_import
directory of the quorum server, change the owning user and user group of the
certificate request file to the user and user group used to install the quorum server
software, and ensure that the certificate request file can be read.

[root@qs package]# cd /opt/quorum_server/export_import/


[root@qs export_import]# ls
DCA-Storage.csr
[root@qs export_import]# chown quorumsvr:quorumsvr DCA-Storage.csr
[root@qs export_import]#

5. On the CLI of the quorum server, run the generate tls_cert csr=? [days=?]
[cert_name=?] [sign_algorithm=?] command to issue certificates.
This command will generate a certificate file (with the same name as the certificate
request file but with the extension of .crt) and a CA file cps_ca.crt.

[root@qs export_import]# qsadmin


start main!
Waiting for connecting to server...
admin:/>generate tls_cert csr=DCA-Storage.csr cert_name=DCA-Storage.csr.crt

Command executed successfully.

6. Obtain the issued certificate from the /opt/quorum_server/export_import directory


on the quorum server.

admin:/>quit
Logout.
[root@qs export_import]# ls
cps_ca.crt DCA-Storage.csr DCA-Storage.csr.crt

7. On DeviceManager of DCA-Storage, import the generated certificate file (DCA-Storage.csr.crt) and CA file (cps_ca.crt).
Log in to DeviceManager and choose Settings > Certificates > Certificate
Management.
Select HyperMetro arbitration certificate and click Import Certificate to import the
issued certificate file and CA file.

Click OK to confirm the warning information.

Verify the execution result, and click Close.

8. Delete the certificate file from the quorum server and repeat the preceding steps to
generate the certificate to be imported to DCB-Storage.

9. In the CLI of the arbitration software, run the export tls_cert command to export the
device information. The qs_certreq.csr file is generated in the
/opt/quorum_server/export_import directory of the quorum server.

admin:/>export tls_cert
Command executed successfully.

10. On the CLI of the quorum server, run the generate tls_cert csr=? [days=?]
[cert_name=?] [sign_algorithm=?] command to issue certificates.

admin:/>generate tls_cert csr=qs_certreq.csr cert_name=qs_cert.crt

Command executed successfully.

11. On the CLI of the quorum server software, run the import tls_cert cert_name=?
ca=? cert=? [private_key=?] [class=?] command to import the certificates.

admin:/>import tls_cert cert_name=hm_third_cert1 ca=cps_ca.crt cert=qs_cert.crt class=hm


Command executed successfully.

2.3.2 Configuring HyperMetro


Step 1 Check the license.

1. Log in to DeviceManager of the local and remote storage systems.


To use the HyperMetro feature, the license containing the NAS Foundation and
HyperMetro (for FS) features must be available.
2. Choose Settings > License Management.
3. In the middle information pane, verify the information about the active license file.

Step 2 Add a remote device.

Adding an authentication user:


1. Log in to DeviceManager of the remote device DCB-Storage.
2. Choose Settings > User and Security > Users and Roles > Users.
3. Click Create.
4. Set user information as follows: Set Type to Local user, Username to mm_user, and
Role to Remote device administrator.

5. Click OK.

6. Click Close.

Creating access addresses for the IP replication links:


1. Log in to DeviceManager of the local and remote devices.
2. Choose Services > Network > Logical Ports.
3. Click Create.
4. On the displayed Create Logical Port page, set Name to AA-1, Role to Replication,
IP Address Type to IPv4, IP Address to 192.168.3.30, Subnet Mask to
255.255.255.0, Port Type to Ethernet port, and Home Port to CTE0.A.IOM4.P2.

5. Click OK.
6. Create another logical port as follows: Set Name to AA-2, Role to Replication, IP
Address Type to IPv4, IP Address to 192.168.3.31, Subnet Mask to 255.255.255.0,
Port Type to Ethernet port, and Home Port to CTE0.B.IOM2.P2. Then, click OK.

7. Log in to DeviceManager of the remote device and repeat the preceding steps to
create logical ports for remote replication.
⚫ Name: AA-1; Role: Replication; IP Address Type: IPv4; IP Address: 192.168.3.40;
Subnet Mask: 255.255.255.0; Port Type: Ethernet port; Home Port:
CTE0.A.IOM2.P2
⚫ Name: AA-2; Role: Replication; IP Address Type: IPv4; IP Address: 192.168.3.41;
Subnet Mask: 255.255.255.0; Port Type: Ethernet port; Home Port:
CTE0.B.IOM2.P2

Adding a remote device:


1. Log in to DeviceManager of the local storage system.
2. Choose Data Protection > Configuration > Remote Devices.
3. Click the add icon. The Add Remote Device page is displayed.
4. Set the information about the remote device to be added as follows: Set Link Type
to IP link, Local Port for the IP link to AA-1(192.168.3.30), Remote IP Address to
192.168.3.40, and Remote Device Administrator to mm_user.

5. Click Connect.

6. After the connection is successful, click OK.

Step 3 Create a file system HyperMetro domain.

1. Log in to DeviceManager of the local storage system.


2. Choose Data Protection > Configuration > HyperMetro Domains.

3. Click the create icon under File System HyperMetro Domains. The Create HyperMetro Domain page is displayed on the right.
4. Set Name to FileHyperMetroDomain_AA and set Working Mode of the
HyperMetro domain to HyperMetro in active-active mode.

5. Enable Quorum Server.


⚫ In the Remote Device area, select a remote device that you want to add to the
HyperMetro domain.
⚫ In the Quorum Server area, select a quorum server that you want to add to the
HyperMetro domain.
⚫ Click the add icon. On the page that is displayed, create a quorum server.

6. After the configuration is complete, click OK to create a quorum server.



7. Click Close to return to the Create HyperMetro Domain page, select the new
quorum server, and click OK.

8. Complete the configuration as prompted.


In the file system HyperMetro domain list, you can view information about the newly
created file system HyperMetro domain.

Step 4 Create a HyperMetro vStore pair.

1. Log in to DeviceManager of the local storage system.


2. Choose Data Protection > Protection Entities > vStores > vStores.
3. Select the desired vStore and click Create HyperMetro vStore Pair. The Create
HyperMetro vStore Pair page is displayed.
4. From the File System HyperMetro Domains drop-down list, select
FileHyperMetroDomain_AA. Set Pair Creation to Automatic and Remote vStore to
Tenant-B for the HyperMetro vStore pair to be created.

5. Click OK. Confirm your operation as prompted.



Step 5 Create a HyperMetro pair.

1. Choose Data Protection > Protection Entities > File Systems > HyperMetro Pairs.
2. Select the vStore to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
3. Click Create. The Create HyperMetro Pair page is displayed on the right.
4. Select the vStore to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
5. In the Available File Systems area, select one or more file systems based on service
requirements.
6. Specify Remote Storage Pool for creating a remote file system.
The system will create a remote file system on the remote device of the HyperMetro
vStore pair and create a HyperMetro pair for the local and remote file system.

7. Click OK.

8. Click Close.

2.4 HyperMetro Tests


Step 1 View the configuration of the remote storage device.

1. Log in to DeviceManager of the storage device in DCB.


2. Choose Services > vStore Service > vStores to view details.

3. Choose Services > File Service > File Systems.


4. Select Tenant-B from the vStore drop-down list in the upper left corner.

5. Choose Services > File Service > Dtrees to view details.

6. Choose Services > File Service > Shares and click the displayed share path to view
details.

7. Choose Services > File Service > Quotas to view details.

The configurations of DCB-Storage are the same as those of DCA-Storage, indicating


that the HyperMetro feature configuration has taken effect.

Step 2 Add an NFS share client.

1. Log in to DeviceManager of the storage device in DCA.


2. Choose Services > File Service > Shares > NFS Shares.
3. Select Tenant-B to which the desired NFS share belongs from the vStore drop-down
list in the upper left corner.
4. Click /FileSystem001/Dtree001 in the Share Path column. On the displayed page,
click the Permissions tab, click Add, and specify the information of DCB-Host as
follows: Enter 192.168.2.43 in the Clients text box, set UNIX Permission to Read-
write and root Permission Constraint to no_root_squash, and retain the default
values for other parameters.

5. Click OK.

6. Log in to DeviceManager of the storage device in DCB.


7. Choose Services > File Service > Shares. Click the NFS Shares tab and click
/FileSystem001/Dtree001 in the Share Path column.

Information about DCB-Host has been automatically added.

Step 3 View the test file.

1. Log in to the client of DCB-Host as user root.



2. Run the showmount -e 192.168.2.40 command to view available NFS shares of the
storage system.

[root@dcb-host ~]# showmount -e 192.168.2.40


Export list for 192.168.2.40:
/FileSystem001/Dtree001 192.168.2.43,192.168.2.33

3. Run the mount -t nfs 192.168.2.40:/FileSystem001/Dtree001 /mnt command to


mount the NFS share.
4. Run the df -h command to check whether the mount is successful.

[root@dcb-host ~]# mount -t nfs 192.168.2.40:/FileSystem001/Dtree001 /mnt


[root@dcb-host ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 8.6M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/sda1 40G 2.7G 35G 8% /
tmpfs 367M 0 367M 0% /run/user/0
192.168.2.40:/FileSystem001/Dtree001 10G 0 10G 0% /mnt

5. Run the following commands to go to the /mnt directory and view the file:

cd /mnt
cat a.txt

The following information is displayed:

Hello Huawei!
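To observe the active-active behavior, you can optionally write to the file from DCB-Host and read the change back on DCA-Host (allow a few seconds for NFS client attribute caching). This extra verification is not part of the original procedure:

[root@dcb-host mnt]# echo 'Hello from DCB!' >> a.txt
[root@dca-host mnt]# cat a.txt
Hello Huawei!
Hello from DCB!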

2.5 Quiz
Question:
What are the deployment scenarios of HyperMetro for file services?
Answer:
There are two deployment scenarios: single-DC deployment and cross-DC deployment.
1. Single-DC deployment
In this scenario, the storage systems are deployed in two equipment rooms in the same
DC.
Hosts communicate with storage systems through a switched fabric (IP switches for NAS
file systems). HyperMetro replication links are deployed between the storage systems to
ensure continuous operation of services.
2. Cross-DC deployment
In this scenario, the storage systems are deployed in two DCs in the same city or in two
cities located in close proximity. The distance between the two DCs is within 300 km.

Both of the DCs can handle service requests concurrently, thereby accelerating service
response and improving resource utilization. If one DC fails, its services are automatically
switched to the other DC.
In cross-DC deployment scenarios involving long-distance transmission, dense wavelength
division multiplexing (DWDM) devices must be used to ensure a short transmission
latency. In addition, HyperMetro replication links must be deployed between the active-
active storage systems for data synchronization.
Huawei Storage Certification Training

HCIP-Storage

HyperReplication Feature
Configuration

Lab Guide
ISSUE: 5.5

Huawei Technologies Co., Ltd.


About This Document

Overview
This lab guide provides HyperReplication (remote replication) feature configuration
practice to help trainees consolidate and review previously learned content. Upon
completion of this lab guide, you will be familiar with the configuration process and
operations related to HyperReplication for file systems.

Description
This lab guide introduces the following lab practice to simulate HyperReplication
configuration for file systems in storage project delivery:
⚫ Lab practice 1: Use HyperReplication to implement remote replication of file systems.

Background Knowledge Required


The purpose of completing the lab practice described in this guide is to help you obtain
HCIP certification. Before performing the lab practice, you should be familiar with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Operations of the Linux operating system (OS) and basic network knowledge.

Contents

About This Document
Overview
Description
Background Knowledge Required
1 References and Tools
1.1 References
1.2 Software and Tools
2 HyperReplication Feature Configuration
2.1 Introduction
2.1.1 About This Lab Practice
2.1.2 Objectives
2.1.3 Networking Topology
2.2 Basic Service Configuration
2.2.1 Configuring a File System
2.2.2 Configuring an NFS Share
2.3 Configuring HyperReplication
2.3.1 Configuring HyperReplication
2.3.2 Verifying the Configuration
2.4 Quiz
HCIP-Storage HyperReplication Feature Configuration Lab Guide Page 1

1 References and Tools
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary depending on the product version.
Product documentation
Log in to Huawei's technical support website (https://support.huawei.com/enterprise/) and input the name of a document or tool in the search box to search for, browse, and download the desired document or tool.
1.2 Software and Tools
1. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or
tool.
2. eStor
Log in to Huawei's technical support website (http://support.huawei.com/enterprise/) and type eStor in the search box to search for, browse, and download the desired document or tool.
3. WinSCP
It is recommended that you use WinSCP to transfer files between devices running
Windows and Linux OSs. However, you can also use other similar tools to do so.
2 HyperReplication Feature Configuration
2.1 Introduction
2.1.1 About This Lab Practice
Substantial progress has been made in the digital transformation of various industries,
and data is increasingly becoming the core component of business operations. As a
result, enterprises have a growing need for more stable data storage systems. Storage
systems that support file systems are also being widely applied in various industries.
Many enterprises have adopted highly stable storage systems. However, these storage
systems may still be subject to unrecoverable damages caused by natural disasters.
Remote disaster recovery (DR) solutions have been developed to ensure continuous data
access and high recoverability and availability, of which remote replication for file
systems is one of the critical technologies. This lab guide describes how to configure the
HyperReplication (remote replication) feature.
2.1.2 Objectives
⚫ Understand the technical principles of HyperReplication.
⚫ Master the networking architecture of HyperReplication.
⚫ Master the procedures and methods of configuring HyperReplication.
2.1.3 Networking Topology

Figure 2-1 Networking topology for configuring HyperReplication
The preceding figure shows the networking topology of the active-passive DR lab
environment for flash storage. DCA serves as the production center and DCB as the DR
center. DCA-Host and DCA-Storage are deployed in the production center and DCB-Host
and DCB-Storage are deployed in the DR center.
The storage device can be OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor.
Device Configurations
You are advised to use the following configurations in each lab environment when
performing the HCIP-Storage lab practice.
Device Type | Model | Quantity | Earliest Software Version
Flash storage | OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor | 2 | 6.1.5
Application server | x86 server | 4 | CentOS 7.6, openEuler 20.03
Management switch | Ethernet switch | 2 | Version 5.160
Service switch | CE series switch | 4 | Version 8.210
Network Planning
The following table lists the network planning for this lab practice.
Site | Device | Management IP Address | Storage IP Address | Description
DCA | DCA-Host | 192.168.0.33 | 192.168.2.33 | File services
DCA | DCA-Storage | 192.168.0.30 | 192.168.1.30 | Block services
DCA | DCA-Storage | 192.168.0.30 | 192.168.2.30 | File services
DCA | DCA-Storage | 192.168.0.30 | 192.168.3.30 | Replication
DCA | DCA-Storage | 192.168.0.30 | 192.168.4.30 | Arbitration
DCB | DCB-Host | 192.168.0.43 | 192.168.2.43 | File services
DCB | DCB-Storage | 192.168.0.40 | 192.168.1.40 | Block services
DCB | DCB-Storage | 192.168.0.40 | 192.168.2.40 | File services
DCB | DCB-Storage | 192.168.0.40 | 192.168.3.40 | Replication
DCB | DCB-Storage | 192.168.0.40 | 192.168.4.40 | Arbitration
The preceding information is for reference only. Before performing the lab practice,
contact the administrator to access or configure devices to meet the actual requirements.
2.2 Basic Service Configuration
2.2.1 Configuring a File System
Step 1 Create a storage pool.
To ensure that the application servers can use the storage space of storage systems, log
in to the storage system in DCA and create a storage pool named StoragePool001.
Choose System > Storage Pools, click Create, and set parameters for the storage pool.
Click OK.
Step 2 Create a file system.
Create a 10 GB file system named FileSystem001.
Choose Services > File Service > File Systems.
Select Tenant-B from the vStore drop-down list in the upper left corner.
Click Create.
Set the name of the file system to FileSystem001 and the owning storage pool to
StoragePool001.
Set the capacity of the file system to 10 GB and the application type to NAS_Default.
Disable the shares and protection.
Click OK.
Step 3 Create a dtree.
A dtree is created to manage the space used by all files in a directory and the access
permission of the directory.
Choose Services > File Service > Dtrees.
Select Tenant-B to which the desired file system belongs from the vStore drop-down list
in the upper left corner.
Click Create.
Set the dtree name to Dtree001.
Click OK.
Step 4 Create a quota.
A quota is created to control the space usage or file quantity of a dtree.
Choose Services > File Service > Quotas > Custom Quotas.
Select Tenant-B to which the desired file system belongs from the vStore drop-down list
in the upper left corner.
Click Create, select file system FileSystem001 and dtree Dtree001, and set Quota Type
to Directory quota.
Set Hard Quota and Soft Quota of Space Quota to 10 GB and 8 GB, respectively.
Set Hard Quota and Soft Quota of File Quantity Quota to 30 thousand and 20
thousand, respectively.
Click OK.
2.2.2 Configuring an NFS Share

Step 1 Enable the NFSv4 service.
Choose Settings > File Service > NFS Service.
Select Tenant-B for which you want to enable the NFSv4 service from the vStore drop-down list in the upper left.
Click Modify in the upper right.
Select NFSv4.0 Service.
Click Save.
Confirm the information in the dialog box and select I have read and understand the
consequences associated with performing this operation.
Click OK.
Step 2 Configure the network and create a logical port.
Choose Services > Network > Logical Ports.
Click Create. The Create Logical Port page is displayed on the right.
Configure logical port parameters as follows: Set Name to FSLP-REP, Role to Service,
Data Protocol to NFS, IP Address to 192.168.2.30, and Home Port to CTE0.A.IOM4.P1.
In Activation Status, select Activate.
Click OK.
Step 3 Create an NFS share.
Choose Services > File Service > Shares > NFS Shares.
Select Tenant-B to which the desired file system belongs from the vStore drop-down list
in the upper left corner.
Click Create.
Set basic parameters of the NFS share as follows: Set File System to FileSystem001 and
Dtree to Dtree001.
Click OK.
Step 4 Add an NFS share client.
Choose Services > File Service > Shares > NFS Shares.
Select Tenant-B to which the desired NFS share belongs from the vStore drop-down list
in the upper left corner.
Click More on the right of the desired NFS share and choose Add Client.
Set client information as follows: Enter the IP address 192.168.2.33 of DCA-Host in the
Clients text box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.
Click OK and view the result.
Step 5 Access the NFS share.
Log in to the client of DCA-Host as user root.
Run the showmount -e 192.168.2.30 command to view available NFS shares of the storage system.
[root@dca-host ~]# showmount -e 192.168.2.30
Export list for 192.168.2.30:
/FileSystem001/Dtree001 192.168.2.33
Run the mount -t nfs 192.168.2.30:/FileSystem001/Dtree001 /mnt command to mount the NFS share.
Run the df -h command to check whether the mount is successful.
[root@dca-host ~]# mount -t nfs 192.168.2.30:/FileSystem001/Dtree001 /mnt
[root@dca-host ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 8.6M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/sda1 40G 2.7G 35G 8% /
tmpfs 367M 0 367M 0% /run/user/0
192.168.2.30:/FileSystem001/Dtree001 10G 0 10G 0% /mnt
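The share is mounted manually in this lab. If the mount needed to persist across host reboots (not required here), an /etc/fstab entry along the following lines could be used; the server path and mount point are the ones configured above, and _netdev is the standard option for network file systems:

192.168.2.30:/FileSystem001/Dtree001 /mnt nfs defaults,_netdev 0 0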
Step 6 Create a test file.
Go to the /mnt directory and create a file named a.txt.
[root@dca-host ~]# cd /mnt
[root@dca-host mnt]# touch a.txt
[root@dca-host mnt]# ls
The following information is displayed:
a.txt
Write Hello Huawei! to the created a.txt file.
[root@dca-host mnt]# echo 'Hello Huawei!' > a.txt
[root@dca-host mnt]# cat a.txt
Hello Huawei!
2.3 Configuring HyperReplication
2.3.1 Configuring HyperReplication
Step 1 Check the license.
Choose Settings > License Management.
In the middle function pane, check that the activated license contains HyperReplication
(Remote Replication) and NAS Foundation.
Step 2 Create logical ports.
Log in to DeviceManager of the local and remote storage systems.
Choose Services > Network > Logical Ports.
Select All vStores from the vStore drop-down list in the upper left corner.
Click Create.
The Create Logical Port page is displayed on the right.
Set the logical port parameters as follows: Set Name to REP-1, Role to Replication, IP
Address Type to IPv4, IP Address to 192.168.3.30, Subnet Mask to 255.255.255.0, Port
Type to Ethernet port, and Home Port to CTE0.A.IOM4.P2.
Click OK. The logical port is created.
Create another logical port as follows: Set Name to REP-2, Role to Replication, IP
Address Type to IPv4, IP Address to 192.168.3.31, Subnet Mask to 255.255.255.0, Port
Type to Ethernet port, and Home Port to CTE0.B.IOM4.P2.
Log in to DeviceManager of the remote device DCB-Storage and repeat the preceding
steps to create logical ports for remote replication.
⚫ Set Name to REP-1, Role to Replication, IP Address Type to IPv4, IP Address to
192.168.3.40, Subnet Mask to 255.255.255.0, Port Type to Ethernet port, and
Home Port to CTE0.A.IOM4.P2.
⚫ Set Name to REP-2, Role to Replication, IP Address Type to IPv4, IP Address to
192.168.3.41, Subnet Mask to 255.255.255.0, Port Type to Ethernet port, and
Home Port to CTE0.B.IOM4.P2.
Step 3 Create an authentication user on the remote device.
Log in to DeviceManager of the remote device DCB-Storage.
Choose Settings > User and Security > Users and Roles > Users.
Click Create. The Create User page is displayed.
Set user information as follows: Set Type to Local user, Username to mm_user, and
Role to Remote device administrator.
Click OK.
Click Close.
Step 4 Add a remote device.
Log in to DeviceManager of DCA-Storage.
Choose Data Protection > Configuration > Remote Devices.
Click the add icon. The Add Remote Device page is displayed on the right.
Set the information about the remote device to be added as follows: Set Link Type to IP
link, Local Port for the IP link to REP-1(192.168.3.30), Remote IP Address to
192.168.3.40, and Remote Device Administrator to mm_user.
Click Connect.
After the connection is successful, click OK.
Step 5 Create a remote replication vStore pair.

Choose Data Protection > Protection Entities > vStores > Remote Replication vStore
Pairs.
Click Create.
The Create Remote Replication vStore Pair page is displayed on the right.
Select the local vStore Tenant-B for which you want to create a remote replication
vStore pair.
From the Remote Device drop-down list, select the remote device DCB-Storage with
which you want to create a remote replication vStore pair.
Set Pair Creation to Automatic.
Select Synchronize Share and Authentication and Synchronize Network
Configuration.
Click OK and confirm the warning.
Click Close.
Step 6 Create a remote replication pair.
Choose Data Protection > Protection Entities > File Systems > Remote Replication
Pairs.
Click Create. The Create Remote Replication Pair page is displayed on the right.
In the Available File Systems area, select the file system for which you want to create a
remote replication pair and add it to the Selected File Systems area.
Click Next. The Configure Protection page is displayed.
Set Synchronize Configuration to Yes, Pair Creation to Automatic, and Sync Type to
Manual. Retain the default values for other parameters.
Click Next. The information summary page is displayed.
Confirm information about the created remote replication pair and click OK.
Click Close.
2.3.2 Verifying the Configuration
Step 1 Synchronize data manually.
Log in to DeviceManager of the storage system at the primary site and choose Data
Protection > Protection Entities > File Systems > Remote Replication Pairs. Select
Tenant-B from the vStore drop-down list in the upper left corner. Select local resource
FileSystem001.
Click Synchronize to manually synchronize data.
In the Synchronize Remote Replication Pair dialog box that is displayed, select I have
read and understand the consequences associated with performing this operation
and click OK.
Step 2 Split the remote replication vStore pair.
After the manual synchronization is complete, choose Data Protection > Protection
Entities > vStores > Remote Replication vStore Pairs and select the remote replication
vStore pair Tenant-B.
Click Split.
In the Split Remote Replication vStore Pair dialog box that is displayed, select I have
read and understand the consequences associated with performing this operation
and click OK.
Step 3 Disable protection for the secondary resource.
Log in to DeviceManager of DCB-Storage, choose Data Protection > Protection Entities > vStores > Remote Replication vStore Pairs, and select Tenant-B.
Choose More > Disable Protection for Secondary Resource.
Click OK.
Choose Data Protection > Protection Entities > File Systems > Remote Replication
Pairs and select FileSystem001.
Choose More > Disable Protection for Secondary Resource.
Click OK.
Step 4 Add an NFS share client.
Choose Services > File Service > Shares > NFS Shares, select Tenant-B, and click
More > Add Client on the right of share path /FileSystem001/Dtree001.
Set client information as follows: Enter the IP address 192.168.2.43 of DCB-Host in the
Clients text box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.
Click OK and view the result.
Step 5 Create a logical port.
Choose Services > Network > Logical Ports.
Click Create. The Create Logical Port page is displayed on the right.
Configure logical port parameters as follows: Set Name to FSLP-REP-TEST, Role to
Service, Data Protocol to NFS, IP Address to 192.168.2.40, and Home Port to
CTE0.A.IOM4.P1. In Activation Status, select Activate.
Click OK.
Step 6 Disable the logical ports of the primary storage system.
Log in to DeviceManager of DCA-Storage and choose Data Protection > Protection Entities > vStores > Remote Replication vStore Pairs.
On the right of Tenant-B, click More, and select Disable Logical Port.
In the Disable Logical Port dialog box that is displayed, select I have read and
understand the consequences associated with performing this operation and click
OK.
Step 7 Enable the logical ports of the secondary storage system.
Log in to DeviceManager of DCB-Storage and choose Data Protection > Protection Entities > vStores > Remote Replication vStore Pairs.
On the right of Tenant-B, click More, and select Enable Logical Port.
In the Enable Logical Port dialog box that is displayed, select I have read and
understand the consequences associated with performing this operation and click
OK.
Step 8 Access the NFS share.
Log in to the client of DCB-Host as user root.
Run the showmount -e 192.168.2.40 command to view available NFS shares of the storage system.
[root@dcb-host ~]# showmount -e 192.168.2.40
Export list for 192.168.2.40:
/FileSystem001/Dtree001 192.168.2.43,192.168.2.33
Run the mount -t nfs 192.168.2.40:/FileSystem001/Dtree001 /mnt command to mount the NFS share.
Run the df -h command to check whether the mount is successful.
[root@dcb-host ~]# mount -t nfs 192.168.2.40:/FileSystem001/Dtree001 /mnt
[root@dcb-host ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 8.7M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/sda1 40G 2.7G 35G 8% /
tmpfs 367M 0 367M 0% /run/user/0
192.168.2.40:/FileSystem001/Dtree001 10G 0 10G 0% /mnt
Step 9 Verify the test file.
Run the following commands to go to the /mnt directory and view the file:
cd /mnt
cat a.txt
The following information is displayed:
Hello Huawei!
2.4 Quiz
Question:
After creating a remote replication vStore pair and a remote replication pair, can I
perform operations on the remote replication pair?
Answer:
If the secondary resource protection status and data synchronization direction of the
remote replication vStore pair are the same as those of the remote replication pair, you
are not allowed to perform a primary/secondary switchover or disable or enable
secondary resource protection for the remote replication pair. You are advised to perform
the operations on the remote replication vStore pair.
If the secondary resource protection status or data synchronization direction of the
remote replication vStore pair is inconsistent with that of the remote replication pair, you
can perform related operations on the remote replication pair to ensure that the status
of the remote replication vStore pair is consistent with that of the remote replication pair.
After you synchronize, split, perform a primary/secondary switchover for, or
disable/enable secondary resource protection for a remote replication vStore pair, you
can view the result of these operations for the remote replication vStore pair on
DeviceManager. The remote replication pair runs in the background. You can run the
following command to check the running status of the remote replication pair and
ensure that the operations for the remote replication pair are complete:
show remote_replication unified
Huawei Storage Certification Training

HCIP-Storage

Storage System Performance Tuning

Lab Guide
ISSUE: 5.5

Huawei Technologies Co., Ltd.
About This Document
Overview
This course uses Huawei flash storage products as an example to describe operations related to storage performance tuning. Performance assurance is critical for storage products: they play a vital role in IT systems, storing historical service data while supporting the normal running of services. Storage performance assurance and optimization are therefore essential to normal operations and directly affect how well an enterprise runs.
This course classifies the operations frequently performed during performance tuning into tasks, and deepens understanding through analysis and discussion, hands-on drills, and summarization.
Description
This lab guide consists of three lab practices covering installation and use of storage
system performance monitoring tools, performance problem locating, and usage of
performance test tools.
⚫ Lab practice 1: storage system performance monitoring
⚫ Lab practice 2: storage system performance problem locating
⚫ Lab practice 3: storage system performance test
Background Knowledge Required
This course is for HCIP certification. To better understand this course, familiarize yourself
with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Operations of the Linux operating system (OS) and basic network knowledge.
1 References and Tools
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.
Product documentation
Log in to Huawei technical support website (https://support.huawei.com/enterprise/en/index.html) and input the name of a document or tool in the search box to search for, browse, and download the desired document or tool.
1.2 Software Tools

1. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common domain name
(putty.org) of PuTTY to browse or download the desired document or tool.
2. WinSCP
Use WinSCP to transfer files between Windows and Linux OSs. You can use the common domain
name (winscp.net) of WinSCP to browse or download the desired document or tool.
3. Iometer
Use the open-source software Iometer to test disk performance. You can use the common domain
name (iometer.org) of Iometer to browse or download the corresponding document or tool.
4. Huawei OceanStor UltraPath
Log in to Huawei technical support website (https://support.huawei.com/enterprise/en/index.html) and type UltraPath in the search box to search for, browse, and download the desired documentation or tool.
5. eStor
Log in to Huawei technical support website (https://support.huawei.com/enterprise/en/index.html) and type eStor in the search box to search for, browse, and download the desired document or tool.
2 Storage System Performance Tuning
2.1 Introduction
2.1.1 About This Lab Practice
A company purchases a Huawei all-flash storage device to run multiple services.
According to the plan, LUNs will be created on storage device DCA-Storage in data
center DCA and separately mapped to Linux host DCA-Host and Windows host DCA-
Win. NAS shares will be created and separately mounted to Linux host DCA-Host and
Windows host DCA-Win. DCA-Host carries Oracle database services, while DCA-Win
carries SQL Server database services. To ensure high performance, you need to check
whether performance bottlenecks exist on the host or storage device and check the
performance parameters or features of the storage device.
2.1.2 Objectives
After the lab practice, you will be able to complete:
⚫ Performance monitoring
⚫ Performance problem locating
⚫ Performance test
2.1.3 Lab Networking Topology
⚫ Lab devices
Device Type | Quantity | Software Version
Linux host | 1 | CentOS 7.6
Windows host | 1 | Windows Server 2016
Huawei flash storage | 1 | 6.1.5, UltraPath_31.2.0
⚫ Networking topology
⚫ Lab network information
Site | Device | Management IP Address | Storage IP Address | Description
DCA | DCA-Host | 192.168.0.33 | 192.168.1.33 | Block service
DCA | DCA-Host | 192.168.0.33 | 192.168.2.33 | File service
DCA | DCA-Win | 192.168.0.34 | 192.168.1.34 | Block service
DCA | DCA-Win | 192.168.0.34 | 192.168.2.34 | File service
DCA | DCA-Storage | 192.168.0.30 | 192.168.1.30 | Block service A
DCA | DCA-Storage | 192.168.0.30 | 192.168.1.31 | Block service B
DCA | DCA-Storage | 192.168.0.30 | 192.168.2.30 | File service A
DCA | DCA-Storage | 192.168.0.30 | 192.168.2.31 | File service B
Note: The IP addresses used in this lab practice are only for reference. The actual IP
addresses are subject to the lab environment plan.
⚫ Environment initialization
Initialize the environment according to the following table.
Create and map LUNs to Linux and Windows service hosts.
Install Huawei UltraPath on the hosts.
Mount the LUNs to the hosts and format the LUNs.
LUN Name | LUN Group | LUN Size | Host to Which a LUN Is Mapped | Description
LUN_OLTP_Linux | LUNGroup_Linux | 4 GB | Linux | The application type is Oracle_OLTP. The LUN is mounted to /mnt/oltp. An ext4 file system is created.
LUN_OLAP_Linux | LUNGroup_Linux | 5 GB | Linux | The application type is Oracle_OLAP. The LUN is mounted to /mnt/olap. An ext4 file system is created.
LUN_OLTP_Win | LUNGroup_Win | 3 GB | DCA-Win | The application type is SQL_Server_OLTP. The LUN is mounted to E:\. An NTFS file system is created.
LUN_OLAP_Win | LUNGroup_Win | 5 GB | DCA-Win | The application type is SQL_Server_OLAP. The LUN is mounted to F:\. An NTFS file system is created.
Create shares and mount them to the hosts.
Share Name | File System | Quota | Host | Description
N/A | FileSystem001 | 5 GB | Linux | The share is mounted to /mnt/fs.
share_Win | FileSystem002 | 5 GB | DCA-Win | The share is mounted to Z:\.
2.2 Quiz
What metrics are used for performance measurement? Describe the service scenarios
related to each metric.
[Suggested answer]
IOPS: This metric is typically used to measure system performance in application
scenarios such as online transaction processing (OLTP) services and SPC-1 authentication.
Bandwidth: This metric is typically used to measure system performance in application scenarios such as online analytical processing (OLAP), media asset, and video surveillance services.
Latency: The average response time and the maximum response time are commonly used
metrics. For example, database OLTP typically requires that the latency be shorter than
10 ms; a Virtual Desktop Infrastructure (VDI) scenario typically requires that the latency
be shorter than 30 ms; the latency required for video on demand (VOD) and video
security services varies depending on the bit rate.
2.3 Performance Monitoring
2.3.1 Monitoring the Performance of a Linux Host
In this lab practice, you will use the Linux host performance tools to check the
performance of CPUs, memory, network, and disks. You need to install the sysstat system
performance tool package in advance.
Log in to Linux host DCA-Host, mount and configure the system ISO image as the local
Yum source, and install sysstat.
[root@linux ~]# yum install sysstat
Step 1 View the host CPU information.
To view the CPU information of a Linux host, run the cat /proc/cpuinfo command.
[root@linux cpu]# cat /proc/cpuinfo
processor :0
vendor_id : GenuineIntel
cpu family :6
model : 85
model name : Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz
stepping :4
microcode : 0x1
cpu MHz : 3000.000
cache size : 25344 KB
physical id :0
siblings :4
core id :0
cpu cores :2
apicid :0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology
nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe
popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch
invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm
mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt
xsavec xgetbv1 arat md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa
itlb_multihit mmio_stale_data retbleed gds
bogomips : 6000.00
clflush size : 64
cache_alignment : 64
address sizes : 42 bits physical, 48 bits virtual
power management:
...
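For a quick summary instead of the full per-core listing, you can count the logical CPUs or use lscpu; both commands are standard on CentOS and openEuler, and the count below matches the 4-CPU host used in this lab:

[root@linux ~]# grep -c ^processor /proc/cpuinfo
4
[root@linux ~]# lscpu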
Step 2 Use the mpstat tool to check the CPU information.
[root@linux cpu]# mpstat
Linux 5.10.0-153.29.0.106.oe2203sp2.x86_64 (linux) 10/17/2023 _x86_64_ (4 CPU)

03:02:10 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
03:02:10 PM all 1.07 0.00 0.22 0.03 0.02 0.01 0.00 0.00 0.00 98.64
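mpstat also accepts an interval and a count. For example, the following standard sysstat invocation prints per-CPU statistics every 2 seconds, 3 times, which is more useful than a single snapshot when looking for a busy core:

[root@linux cpu]# mpstat -P ALL 2 3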
Step 3 Use the top tool to check the performance information.
[root@linux cpu]# top
top - 15:10:33 up 43 min, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 140 total, 1 running, 139 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.0 sy, 0.0 ni, 99.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 15486.7 total, 14817.5 free, 530.5 used, 425.5 buff/cache
MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 14956.2 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6776 cockpit+ 20 0 531348 16332 11220 S 0.3 0.1 0:04.17 cockpit-ws
14163 root 20 0 26808 5744 3548 R 0.3 0.0 0:00.07 top
1 root 20 0 167952 14688 9356 S 0.0 0.1 0:02.81 systemd
Press 1 to expand the CPU load information.
[root@linux cpu]# top
top - 15:11:44 up 44 min, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 140 total, 1 running, 139 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 15486.7 total, 14815.7 free, 531.9 used, 425.8 buff/cache
MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 14954.8 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6776 cockpit+ 20 0 531348 16332 11220 S 0.3 0.1 0:04.29 cockpit-ws
1 root 20 0 167952 14688 9356 S 0.0 0.1 0:02.86 systemd
Step 4 Use the vmstat tool to check performance information.
[root@linux cpu]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
r b swpd free buff cache si so bi bo in cs us sy id wa st gu
2 0 0 15142136 33384 425688 0 0 23 5 107 57 1 0 99 0 0 0
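Like mpstat, vmstat accepts an interval and a count. For example, the following samples every 2 seconds, 5 times; watch the r (run queue) and wa (I/O wait) columns while the service is under load:

[root@linux cpu]# vmstat 2 5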
Step 5 Use the perf tool to check CPU performance information.
perf is a performance analysis tool provided by the Linux kernel. It can be used to
perform statistical analysis on CPU performance.
You can run the perf top command to view the processes and function calls with high
CPU usage in the system.
[root@linux cpu]# perf top --sort comm,dso

The command displays, in real time, the commands and dynamic shared objects (DSOs) consuming the most CPU time.
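If a one-shot summary is preferred over the interactive view, perf stat (also part of the kernel perf tool) counts hardware and software events for a command or time window, for example system-wide for 5 seconds:

[root@linux cpu]# perf stat -a sleep 5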
Step 6 Use the free tool to check the memory performance.
According to the free -h command output, the free memory is 14 GiB and the buffer for
storing the data to be output to disks is 450 MiB.

[root@linux cpu]# free -h
Mem: 15Gi 551Mi 14Gi 10Mi 450Mi 14Gi
Swap: 4.0Gi 0B 4.0Gi

You can run the following commands to release the data occupied by the system cache
and check the memory usage again:
sync
echo 3 > /proc/sys/vm/drop_caches
[root@linux cpu]# sync
[root@linux cpu]# echo 3 > /proc/sys/vm/drop_caches
[root@linux cpu]# free -h
total used free shared buff/cache available
Mem: 15Gi 481Mi 14Gi 10Mi 102Mi 14Gi
Swap: 4.0Gi 0B 4.0Gi
Step 7 Use the iostat tool.
iostat (option) (parameter)
Options:
-c: displays the CPU utilization report.
-d: displays the device utilization report.
-k: displays statistics in kilobytes per second.
-m: displays statistics in megabytes per second.
-p: displays statistics for block devices and all their partitions that are used by the system.
-t: prints the time for each report displayed.
-V: prints version number then exits.
-x: displays extended statistics.
Parameters:
interval: indicates the amount of time in seconds between each report.
count: indicates the number of reports generated at interval seconds apart.
[root@linux cpu]# iostat -d 3 3
Linux 3.10.0-1160.92.1.el7.x86_64 (linux) 11/01/20XX _x86_64_ (4 CPU)

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.98 44.40 11.50 504802 130704
up-0 0.03 0.37 191.74 4253 2180212
up-1 0.04 0.64 236.37 7329 2687672
up-2 0.01 0.27 183.01 3096 2080864
up-3 0.02 0.00 236.40 16 2688032
sdb 0.04 0.65 5.97 7349 67828
sdc 0.05 0.65 11.78 7345 133888
Parameters in the device utilization report:
tps: indicates the number of transfers per second that were issued to the device.
kB_read/s: indicates the amount of data read from the device expressed in kilobytes per second.
kB_wrtn/s: indicates the amount of data written to the device expressed in kilobytes per second.
kB_read: indicates the total number of kilobytes read.
kB_wrtn: indicates the total number of kilobytes written.
[root@linux ~]# iostat -mx 1
Linux 3.10.0-1160.92.1.el7.x86_64 (linux) 11/01/20XX _x86_64_ (4 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
0.09 0.00 0.12 0.02 0.00 99.77

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.98 1.04 0.89 0.05 0.01 62.07 0.00 0.88 0.97 0.77 0.31 0.06
up-0 0.00 1.75 0.01 0.02 0.00 0.23 13695.71 0.00 122.47 7.06 209.35 5.12 0.02
up-1 0.00 1.76 0.02 0.02 0.00 0.28 12956.74 0.00 65.09 3.20 137.30 2.57 0.01
up-2 0.00 0.03 0.01 0.00 0.00 0.22 31816.18 0.00 13.11 3.96 34.69 3.02 0.00
up-3 0.00 1.75 0.00 0.02 0.00 0.28 30202.79 0.00 196.53 3.25 200.98 2.80 0.01
sdb 0.00 0.00 0.02 0.02 0.00 0.01 334.12 0.00 90.62 5.81 178.49 4.25 0.02
sdc 0.00 0.00 0.02 0.04 0.00 0.01 475.53 0.00 104.45 3.20 167.52 1.97 0.01
Parameters related to CPU attributes:
%user: indicates the percentage of CPU utilization that occurred while executing at the
user level (application).
%nice: indicates the percentage of CPU utilization that occurred while executing at the
user level with nice priority.
%system: indicates the percentage of CPU utilization that occurred while executing at
the system level (kernel).
%iowait: indicates the percentage of time that the CPU or CPUs were idle during which
the system had an outstanding disk I/O request.
%steal: indicates the percentage of time spent in involuntary wait by the virtual CPU or
CPUs while the hypervisor was servicing another virtual processor.
%idle: indicates the percentage of time that the CPU or CPUs were idle and the system
did not have an outstanding disk I/O request.
Remarks:
If the value of %iowait is too large, the disk experiences an I/O bottleneck.
If the value of %idle is large, the CPU is idle.
If the value of %idle is large but the system responds slowly, the CPU may be waiting for
memory allocation. In this case, increase the memory capacity.
If the value of %idle is continuously lower than 10, it indicates that the CPU processing
capability is relatively low. In this case, CPU-related problems need to be handled.
r/s: indicates the number of read requests that were issued to the device per second.
w/s: indicates the number of write requests that were issued to the device per second.
rMB/s: indicates the number of megabytes read from the device per second.
wMB/s: indicates the number of megabytes written to the device per second.
avgrq-sz: indicates the average size (in 512-byte sectors) of the requests that were issued
to the device.
avgqu-sz: indicates the average queue length of the requests that were issued to the
device.
await: indicates the average time (in milliseconds) for I/O requests issued to the device
to be served. This includes the time spent by the requests in queue and the time spent
servicing them.
r_await: indicates the average time required for each read operation, including not only the read operation time of the disk but also the time in the kernel queue.
w_await: indicates the average time required for each write operation, including not only the write operation time of the disk but also the time in the kernel queue.
To display information about a specified disk, run the following command:
[root@linux cpu]# iostat -d /dev/sdb
Linux 3.10.0-1160.92.1.el7.x86_64 (linux) 11/01/20XX _x86_64_ (4 CPU)

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 0.04 0.64 5.88 7349 67828
2.3.2 Monitoring the Performance of a Windows Host
In this lab practice, you will use the Windows host performance tools to check the
performance of CPUs, memory, network, and disks.
Step 1 Check whether the memory usage and CPU usage of the host exceed the
corresponding thresholds.
Open the Windows task manager, click the Performance tab, and check the memory
usage and CPU usage of the host.
Step 2 Use Performance Monitor to identify performance problems.
Performance Monitor is a Microsoft Management Console (MMC) snap-in that can be used to obtain system performance information. You can use this tool to:
Analyze the impact of applications and services on computer performance.
Have an overview of system performance.
Collect detailed information for troubleshooting.
In the Start Menu search box, type Performance Monitor.
Start Performance Monitor.
Click the add icon to add a performance counter.
After the performance counter is added, you can view the corresponding chart.
Step 3 Use Resource Monitor to view the current resource usage.
Resource Monitor in Windows Server provides detailed information about the real-time
performance of the server. Resource Monitor monitors the usage and performance of
CPUs, disks, networks, and memory resources in real time. It helps you identify and
resolve resource conflicts and bottlenecks.
In the Start Menu search box, type Resource Monitor. Start Resource Monitor.
Step 4 Use Reliability Monitor to view reliability.
By default, Reliability Monitor is installed on Windows Server to monitor hardware and
software problems that occur over a selected time interval. Depending on the number
and types of problems, Reliability Monitor uses a stability index (range: 1-10) to indicate
server stability. The value 1 indicates the most unstable server state, whereas the value
10 indicates the most stable server state. Indexes can be used to quickly evaluate server
reliability.
Access the control panel and click Security and Maintenance.
Click Maintenance.
Click View reliability history to access Reliability Monitor.
2.3.3 Monitoring Storage Performance
Performance reflects the comprehensive capability of a storage system. During service
running, you can monitor storage system performance in real time and analyze the
performance trend to fully understand the performance of the storage system. If a
performance problem occurs, you can analyze and locate the problem based on the
performance monitoring statistics.
Step 1 Configure performance monitoring parameters.
This operation enables you to configure the monitoring status and whether to save
monitoring files into the storage system.
Log in to DeviceManager. Choose Settings > Monitoring Settings to switch to the
Monitoring Settings page.
Click Modify and configure performance monitoring parameters.
Parameter | Description | Value
Monitoring Status | Whether performance monitoring is enabled for the current device. If performance monitoring is enabled, the monitoring start time is displayed. | [Example] Enabled
Sampling Interval of Real-Time Statistics | Interval between two real-time performance data sampling tasks. | [Value] 5, 10, 30, or 60, expressed in seconds. [Default value] 5. [Example] 5
Retain historical monitoring data | Whether to retain historical monitoring data. Historical performance data is sampled and saved based on real-time performance data. | [Example] Enabled
Retention Period (years) | When Retain historical monitoring data is selected, you need to set the retention period. | [Value] 1–3. [Example] 1
Data Storage Location | When Retain historical monitoring data is selected, performance monitoring files are saved on the storage device. In this case, you need to select the storage pool resource of the storage device. NOTE: Historical performance data occupies a maximum of 200 GB of space in the storage pool. | [Example] StoragePool001
Sampling interval and retention time of historical performance data
Data Sampling Interval | Data Retention Time
Original interval (5 seconds by default) | Latest 7 days
5 minutes | Latest 30 days
15 minutes | Latest 3 months
1 hour | User-defined value of Retention Period.
6 hours | User-defined value of Retention Period.
24 hours | User-defined value of Retention Period.
If the current value of the Sampling Interval of Real-Time Statistics parameter is less than the corresponding Data Sampling Interval in the preceding table, the storage system automatically adjusts the parameter value to the value in the table.
Step 2 Configure thresholds.
This operation enables you to configure alarm thresholds and alarm clearance thresholds
for performance metrics of specific objects (such as controllers and front-end ports) in a
storage device. If the performance metric value is above (or below) the alarm threshold,
DeviceManager sends an alarm to notify you of checking and handling the device fault. If
the performance metric value is above (or below) the alarm clearance threshold,
DeviceManager clears the alarm and includes it in the historical alarm list.
Log in to DeviceManager. Choose Settings > Monitoring Settings to switch to the
Monitoring Settings page.
Click Modify and configure thresholds for an object.
Select Controller.
Click Threshold Status to turn on the threshold switch of the corresponding metric.
Configure threshold parameters for each metric. The following table describes the
threshold parameters.
Parameter | Description | Value
Metric Name | Name of the metric. | [Example] Avg. CPU Usage (%)
Threshold Type | Threshold type of the metric. If the metric value is above or below the threshold, an alarm is generated. The options are > and <. | [Example] >
Alarm Threshold | Threshold value of an object metric. When the threshold is reached, an alarm is generated. | [Example] 90
Alarm Clearance Threshold | When the threshold is reached, the generated alarm is cleared and added to the list of historical alarms. | [Example] 85
Trigger Condition (Times) | Number of times the threshold has been reached. When the number of times is reached, a threshold alarm is generated. | [Example] 12
Severity | Alarm severity includes Warning, Major, and Critical. | [Example] Warning
Threshold Status | Switch for controlling whether a threshold alarm is generated. | [Example] Enabled
Step 3 Create a metric chart.
You can create a metric chart to view metrics involved in services that you are concerned
about.
Log in to DeviceManager. Choose Insight > Performance > Analysis to create a metric chart.
Click Create to switch to the Create Chart dialog box.
Configure the metric template as prompted.
Set the basic information, monitored object, statistical metric, and chart display mode of
the chart.
Select Avg. CPU Usage (%) when creating a metric chart for a controller. After the
metric chart is created, you can view it.
Metric chart
On the CLI, run show performance controller to check the CPU usage of the controller
with controller_id of 0A.
admin:/>show performance controller controller_id=0A
0.Memory Usage(%)
1.Percentage of Cache Flushes to Write Requests(%)
2.Cache Flushing Bandwidth(MB/s)
...
54.CPU Usage(%)
...
Input item(s) number separated by comma:54
View the CPU usage of controllers by entering the corresponding number. Press Q to exit.
CPU Usage(%) : 16
CPU Usage(%) : 16
CPU Usage(%) : 17
CPU Usage(%) : 16
To view performance metrics, you can create a metric chart for the controller as
prompted on the Analysis page.
Metric chart
On the CLI, run the show port general command to query port information.
admin:/>show port general
ETH port:
--------------- Host Port:----------------

ID Health Status Running Status Type IPv4 Address IPv6 Address MAC Role Working Rate(Mbps) Enabled Max Speed(Mbps) Number Of Initiators
CTE0.A.IOM4.P0 Normal Link Up Host Port -- -- fa:16:3e:2b:84:8d INI and TGT 56000 Yes 1000 0
CTE0.A.IOM4.P1 Normal Link Up Host Port -- -- fa:16:3e:2b:85:8d INI and TGT 56000 Yes 1000 0
CTE0.A.IOM4.P2 Normal Link Up Host Port -- -- fa:16:3e:2b:86:8d INI and TGT 56000 Yes 1000 0
CTE0.A.IOM4.P3 Normal Link Up Host Port -- -- fa:16:3e:2b:87:8d INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P0 Normal Link Up Host Port -- -- fa:16:3e:2b:84:8e INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P1 Normal Link Up Host Port -- -- fa:16:3e:2b:85:8e INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P2 Normal Link Up Host Port -- -- fa:16:3e:2b:86:8e INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P3 Normal Link Up Host Port -- -- fa:16:3e:2b:87:8e INI and TGT 56000 Yes 1000 0
Run the show performance port command to query information of the port with the corresponding port_id. Then, enter the number for the metric you want to query. To exit, enter Q.
admin:/>show performance port port_id=CTE0.A.IOM4.P0
0.Max. Bandwidth(MB/s) 1.Usage Ratio(%)
2.Queue Length 3.Bandwidth(MB/s) / Block
Bandwidth(MB/s)
4.Throughput(IOPS)(IO/s) 5.Read Bandwidth(MB/s)
6.Average Read I/O Size(KB) 7.Read Throughput(IOPS)(IO/s)
8.Write Bandwidth(MB/s) 9.Average Write I/O Size(KB)
10.Write Throughput(IOPS)(IO/s) 11.Read I/O Granularity Distribution:
[4K,8K)(%)
12.Read I/O Granularity Distribution: [8K,16K)(%) 13.Read I/O Granularity Distribution:
[16K,32K)(%)
14.Read I/O Granularity Distribution: [32K,64K)(%) 15.Read I/O Granularity Distribution:
[64K,128K)(%)
16.Write I/O Granularity Distribution: [4K,8K)(%) 17.Write I/O Granularity Distribution:
[8K,16K)(%)
18.Write I/O Granularity Distribution: [16K,32K)(%) 19.Write I/O Granularity Distribution:
[32K,64K)(%)
20.Write I/O Granularity Distribution: [64K,128K)(%) 21.Service Time(Excluding Queue Time)(us)
22.Average IO Size(KB) 23.% Read
24.% Write 25.Max IOPS(IO/s)
26.Max. I/O Latency(us) 27.Average I/O Latency(us)
28.Average Read I/O Latency(us) 29.Average Write I/O Latency(us)
30.Sending bandwidth for replication(KB/s) 31.Receiving bandwidth for
replication(KB/s)
32.The cumulative count of I/Os 33.The cumulative count of data
transferred in Kbytes
34.The cumulative elapsed I/O time(ms) 35.The cumulative count of all reads
36.The cumulative count of all writes 37.Read I/O Granularity Distribution:
[0K,4K)(%)
38.Read I/O Granularity Distribution: >= 128K(%) 39.Write I/O Granularity Distribution:
[0K,4K)(%)
40.Write I/O Granularity Distribution: >= 128K(%) 41.Bandwidth For NFS V3(KB/s)
42.Bandwidth For NFS V4(KB/s) 43.Bandwidth For NFS(KB/s)
44.Bandwidth For SMB1(KB/s) 45.Bandwidth For SMB2(KB/s)
46.Bandwidth For SMB(KB/s) 47.Read Bandwidth For NFS V3 (KB/s)
48.Read Bandwidth For NFS V4(KB/s) 49.Read Bandwidth For NFS(KB/s)
50.Read Bandwidth For SMB1(KB/s) 51.Read Bandwidth For SMB2(KB/s)
52.Read Bandwidth For SMB (KB/s) 53.Write Bandwidth For NFS V3 (KB/s)
54.Write Bandwidth For NFS V4(KB/s) 55.Write Bandwidth For NFS(KB/s)
56.Write Bandwidth For SMB1(KB/s) 57.Write Bandwidth For SMB2(KB/s)
58.Write Bandwidth For SMB (KB/s) 59.OPS For NFS V3
60.OPS For NFS V4 61.OPS For NFS
62.OPS For SMB1 63.OPS For SMB2
64.OPS For SMB 65.Read OPS For NFS V3
66.Read OPS For NFS V4 67.Read OPS For NFS
68.Read OPS For SMB1 69.Read OPS For SMB2
70.Read OPS For SMB 71.Write OPS For NFS V3
72.Write OPS For NFS V4 73.Write OPS For NFS
74.Write OPS For SMB1 75.Write OPS For SMB2
76.Write OPS For SMB 77.Other OPS For NFS V3
78.Other OPS For NFS V4 79.Other OPS For NFS
80.Other OPS For SMB1 81.Other OPS For SMB2
82.Other OPS For SMB 83.IO Average Response Time For NFS
V3(us)
84.IO Average Response Time For NFS V4(us) 85.IO Average Response Time For
NFS(us)
86.IO Average Response Time For SMB1(us) 87.IO Average Response Time For
SMB2(us)
88.IO Average Response Time For SMB(us) 89.Read IO Average Response Time For
NFS V3(us)
90.Read IO Average Response Time For NFS V4(us) 91.Read IO Average Response Time For
NFS(us)
92.Read IO Average Response Time For SMB1(us) 93.Read IO Average Response Time For
SMB2(us)
94.Read IO Average Response Time For SMB(us) 95.Write IO Average Response Time
For NFS V3(us)
96.Write IO Average Response Time For NFS V4(us) 97.Write IO Average Response Time For
NFS(us)
98.Write IO Average Response Time For SMB1(us) 99.Write IO Average Response Time For
SMB2(us)
100.Write IO Average Response Time For SMB(us) 101.Other IO Average Response Time
For NFS V3(us)
102.Other IO Average Response Time For NFS V4(us) 103.Other IO Average Response Time
For NFS(us)
104.Other IO Average Response Time For SMB1(us) 105.Other IO Average Response Time
For SMB2(us)
106.Other IO Average Response Time For SMB(us) 107.Avg. Write I/O Link Transmission
Delay(us)
Input item(s) number separated by comma:
2.4 Performance Problem Locating
2.4.1 Locating Host Problems
Step 1 Check whether the host CPU frequency decreases.
On a Linux host, run cat /proc/cpuinfo |grep -i Mhz to view the CPU frequency.
[root@linux cpu]# cat /proc/cpuinfo | grep -i Mhz
cpu MHz : 3000.000
cpu MHz : 3000.000
cpu MHz : 3000.000
cpu MHz : 3000.000
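To watch the frequency continuously while a workload runs (for example, to catch frequency scaling or throttling), the standard watch utility can refresh the same command every second:

[root@linux cpu]# watch -n 1 "cat /proc/cpuinfo | grep -i mhz"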
To check the CPU frequency of a Windows host, you can use the DirectX Diagnostic Tool.
Choose Start > Run. In the Run dialog box, enter DXDIAG and click OK to check whether
the CPU frequency decreases.
Change the host running mode to the high-performance mode to ensure that the CPU
frequency does not decrease.
Operation path: Start > Control Panel > System and Security > Power Options >
Choose or customize a power plan > High performance
HCIP-Storage Storage System Performance Tuning Lab Guide Page 33

Step 2 Adjust the I/O scheduling algorithm.

[Task requirements]
/dev/sda1 configured for the Linux host is a solid state disk (SSD). Select a proper I/O
scheduling algorithm and provide the configuration.
[Procedure]
The Linux OS supports four block device scheduling algorithms: noop, anticipatory,
deadline, and cfq. Because an SSD has no mechanical seek overhead, the simple noop
scheduler is a proper choice. The scheduler is configured per block device rather than per
partition, in the /sys/block/<device>/queue/scheduler file; for the disk backing /dev/sda1,
this is /sys/block/sda/queue/scheduler. (In this task, replace sda with the actual device
name.)

echo noop > /sys/block/sda/queue/scheduler
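
To verify the setting, read the file back; the scheduler currently in use is shown in
brackets (the available schedulers vary with the kernel version):

cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq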

Step 3 Adjust the number of disk queues.

[Task requirements]
Increase the number of /dev/sda1 disk queues on the Linux host to 1024 and provide the
configuration.
[Procedure]
The queue depth determines the maximum number of concurrent I/Os written to the
block device. In Linux, the default value is 128. Do not change the value unless absolutely
necessary. In the event of testing the highest system performance, you can set the queue
depth to a larger value by modifying the /sys/block/sda1/queue/nr_requests file to
increase the I/O write pressure and the probability of combining I/Os in the queue. (In
this lab practice, /dev/sda1 is used and can be replaced based on actual requirements.)

echo 1024 > /sys/block/sda/queue/nr_requests

Step 4 Adjust the memory parameters.

[Task requirements]
1. To improve the memory access performance of the Linux host, try to avoid using the
swap partition to reduce the possibility of hard page faults. Provide the related
configuration.
2. To reduce dirty pages in the memory and prevent unexpected data loss, ensure that
the proportion of dirty pages in the memory does not exceed 5%. Provide the related
configuration.
[Procedure]
1. Modify the vm.swappiness parameter in the /etc/sysctl.conf configuration file to
minimize use of the swap partition and reduce the possibility of hard page faults.

vi /etc/sysctl.conf

Add the parameter.

vm.swappiness = 0

2. You can change the value of /proc/sys/vm/dirty_ratio to configure the proportion of
dirty pages in the memory.

echo 5 > /proc/sys/vm/dirty_ratio

Step 5 Modify kernel parameters.

[Task requirements]
Modify the kernel parameters according to the following requirements:
1. Change the maximum buffer for receiving TCP data to 513920.
2. Change the maximum buffer for transmitting TCP data to 513920.
3. Change the default FIN_TIMEOUT value to 30s.
4. Change the maximum length of the kernel socket listen queue to 65535.
5. Change the maximum length of the NIC receive backlog queue to 30000.
6. Increase the maximum number of half-open connections waiting for the client to
complete the handshake (SYN backlog) to 20000.
7. Set the range of ports that can be opened by the system to 1024 to 65000.
[Procedure]
Add the following content to the /etc/sysctl.conf configuration file:
1. net.core.rmem_max: indicates the maximum buffer for receiving TCP data.
2. net.core.wmem_max: indicates the maximum buffer for transmitting TCP data.
3. net.ipv4.tcp_fin_timeout: indicates the default FIN_TIMEOUT time.
4. net.core.somaxconn: indicates the maximum length of the kernel socket listen queue.
5. net.core.netdev_max_backlog: indicates the maximum number of packets queued on
the NIC receive side.
6. net.ipv4.tcp_max_syn_backlog: indicates the maximum number of half-open
connections waiting for the client to complete the handshake (SYN backlog).
7. net.ipv4.ip_local_port_range: indicates the range of ports that can be opened by the
system.

vi /etc/sysctl.conf

Modify the content as follows:

net.core.rmem_max = 513920
net.core.wmem_max = 513920
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_max_syn_backlog = 20000
net.ipv4.ip_local_port_range = 1024 65000
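
Entries in /etc/sysctl.conf are applied at boot. To load the settings from Step 4 and
Step 5 immediately without a reboot, reload the configuration:

sysctl -p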

2.4.2 Locating Link Problems


Step 1 Check the multipathing status.

On a Windows host, open UltraPath Console and check whether links are normal and
whether owning controllers are working properly.

On a Linux host, run the upadmin show path command to check whether physical paths
are normal.

[root@linux ~]# upadmin show path


----------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------
Path ID Initiator Port Array Name
Controller Target Port Path State
Check State Port Type Port ID
0 iqn.1994-05.com.redhat:Linux::3139:322E:3136:382E:312E:3232:0:0::3 OceanStor 0A
iqn.2014-08.com.example::2100030000040506::20400:192.168.1.10::0 Normal --
iSCSI CTE0.A.IOM4.P0
1 iqn.1994-05.com.redhat:Linux::3139:322E:3136:382E:312E:3232:0:0::4 OceanStor 0B
iqn.2014-08.com.example::2100030000040506::20400:192.168.1.11::0 Normal --
iSCSI CTE0.B.IOM4.P0
----------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------

If Path State is Normal, the paths are working properly.

Step 2 Check the multipathing parameter settings.

If the load balancing mode or algorithm, or other parameters of multipathing software
are set inappropriately, the I/O pressure on links will be imbalanced or the sequence of
I/Os will be affected. These problems prevent optimal performance delivery and adversely
affect bandwidth capability.
On a Windows host, use UltraPath Console to query the multipathing parameter settings.
Navigation path: System > Global Settings

Click Common Settings to view the common parameter settings.



Click Advanced Settings to view the advanced parameter settings.



Click Link Reliability to view the link reliability.

On a Linux host, run the upadmin show upconfig command to query the multipathing
parameter settings.

[root@linux ~]# upadmin show upconfig


=======================================================
UltraPath Configuration
=======================================================
Basic Configuration
Working Mode : load balancing within controller
LoadBalance Mode : min-queue-depth
Loadbanlance io threshold : 100
LUN Trespass : off

Advanced Configuration
Io Retry Times : 10
Io Retry Delay : 0
Faulty path check interval : 10
Idle path check interval : 60
Failback Delay Time : 60
Io Suspension Time : 60
Max io retry timeout : 1800
Performance Record : off
Pending delete period of obsolete paths : 28800

Path reliability configuration


Timeout degraded statistical time : 600

Timeout degraded threshold : 1


Timeout degraded path recovery time : 1800
Frequent timeout degraded statistical time : 86400
Frequent timeout degraded threshold : 3
Frequent timeout degraded path recovery time : 86400
Intermittent IO error degraded statistical time : 300
Min. I/Os for intermittent IO error degraded statistical : 5000
Intermittent IO error degraded threshold : 20
Intermittent IO error degraded path recovery time : 1800
Intermittent fault degraded statistical time : 1800
Intermittent fault degraded threshold : 3
Intermittent fault degraded path recovery time : 3600
High latency degraded statistical time : 300
High latency degraded threshold : 1000
High latency degraded path recovery time : 3600
Sensitive delayed degraded threshold : 0
Sensitive delayed degraded recovery time : 120

HyperMetro configuration
HyperMetro Primary Array SN : Not configured
HyperMetro WorkingMode : read write within primary array
HyperMetro Split Size : 128MB
HyperMetro Load Balance Mode : round-robin

2.4.3 Locating Storage Problems


Step 1 Check the CPU usage of controllers.

When the CPU usage is high, the latency of system scheduling increases. As a result, the
I/O latency increases.
The CPU usage of a storage system is closely related to and varies with I/O models and
networking modes. To query the CPU usage of the current controller, use DeviceManager
or run the CLI command.
Performance monitoring using DeviceManager
Navigation path: Insight > Performance > Analysis. Select Avg. CPU Usage (%) when
creating a metric chart for a controller.

After the metric chart is created, you can view it.

On the CLI, run the show performance controller command to check the CPU usage of
controllers.

admin:/>show performance controller controller_id=0A


0.Memory Usage(%)
1.Percentage of Cache Flushes to Write Requests(%)
...
54.CPU Usage(%)
55.SCSI IOPS (IO/s)
56.ISCSI IOPS (IO/s)

...
95.Total Block Bandwidth (MB/S)
Input item(s) number separated by comma:54

CPU Usage(%) : 15

Step 2 Check information about front-end ports.

Before analyzing the performance of front-end host ports, confirm the positions of
interface modules and the number, working status, and speeds of connected ports.
You can use DeviceManager or the CLI to query information about front-end host ports.
Use DeviceManager to query information about front-end ports.

On the CLI, you can run the show port general command to query information about
front-end ports.

admin:/>show port general physical_type=ETH


--------------- Host Port:----------------

ID Health Status Running Status Type IPv4 Address IPv6 Address MAC
Role Working Rate(Mbps) Enabled Max Speed(Mbps) Number Of Initiators
-------------- ------------- -------------- --------- ------------ ------------ ----------------- -------
---- ------------------ ------- --------------- --------------------
CTE0.A.IOM4.P0 Normal Link Up Host Port -- --
fa:16:3e:2b:84:8d INI and TGT 56000 Yes 1000 2
CTE0.A.IOM4.P1 Normal Link Up Host Port -- --
fa:16:3e:2b:85:8d INI and TGT 56000 Yes 1000 0
CTE0.A.IOM4.P2 Normal Link Up Host Port -- --
fa:16:3e:2b:86:8d INI and TGT 56000 Yes 1000 0
CTE0.A.IOM4.P3 Normal Link Up Host Port -- --
fa:16:3e:2b:87:8d INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P0 Normal Link Up Host Port -- --
fa:16:3e:2b:84:8e INI and TGT 56000 Yes 1000 2
CTE0.B.IOM4.P1 Normal Link Up Host Port -- --
fa:16:3e:2b:85:8e INI and TGT 56000 Yes 1000 0

CTE0.B.IOM4.P2 Normal Link Up Host Port -- --


fa:16:3e:2b:86:8e INI and TGT 56000 Yes 1000 0
CTE0.B.IOM4.P3 Normal Link Up Host Port -- --
fa:16:3e:2b:87:8e INI and TGT 56000 Yes 1000 0
------------ Management Port:-------------

ID Health Status Running Status Type IPv4 Address IPv6 Address


MAC Role Working Rate(Mbps) Enabled Max Speed(Mbps)
----------- ------------- -------------- --------------- ------------ ------------ ----------------- ----
------------------ ------- ---------------
CTE0.A.MGMT Normal -- Management Port 192.168.0.10 --
00:00:00:00:00:00 -- 10000 Yes 1000
CTE0.B.MGMT Normal -- Management Port 192.168.0.11 --
00:00:00:00:00:00 -- 10000 Yes 1000
------------ Maintenance Port:------------

ID Health Status Running Status Type IPv4 Address IPv6


Address MAC Role Working Rate(Mbps) Enabled Max Speed(Mbps)
------------------ ------------- -------------- ---------------- ------------ ------------ ---------------
-- ---- ------------------ ------- ---------------
CTE0.A.MAINTENANCE Normal Link Up Maintenance Port -- --
00:00:00:00:00:00 -- 10000 Yes 1000
CTE0.B.MAINTENANCE Normal Link Up Maintenance Port -- --
00:00:00:00:00:00 -- 10000 Yes 1000

Step 3 Check the concurrency pressure of front-end ports.

Run the CLI command to obtain the approximate number of front-end concurrent tasks.
This method applies to scenarios where pressure changes. You can run the show
controller io io_type=frontEnd controller_id=XX command to query the front-end
concurrent I/O tasks delivered to a specified controller. Run this command multiple times
and use a stable value as an approximate number of front-end concurrent tasks. XX
indicates the controller ID.

admin:/>show controller io io_type=frontEnd controller_id=0A

Controller Id : 0A
Front End IO :0
Front End Limit : 17408

Step 4 Check whether the front-end host ports experience bit errors.

If the performance fluctuates frequently or declines unexpectedly, faults may have
occurred on front-end ports or links. In this case, check whether the front-end ports
experience bit errors.
You can use DeviceManager, the CLI command, or the inspection report to check whether
front-end host ports have bit errors.
Use DeviceManager to view the bit errors of front-end ports (Fibre Channel/iSCSI).

Run the show port bit_error command to view the bit errors of front-end ports.

admin:/>show port bit_error


ETH port:

ID Error Packets Lost Packets Over Flowed Packets Start Time


CRC Errors Frame Errors Frame Length Errors
------------------ ------------- ------------ ------------------- ----------------------------- ----------
------------ -------------------
CTE0.A.IOM4.P0 0 0 0 2023-10-17/08:35:34
UTC+08:00 0 0 0
CTE0.A.IOM4.P1 0 0 0 2023-10-17/08:35:34
UTC+08:00 0 0 0
CTE0.A.IOM4.P2 0 0 0 2023-10-17/08:35:34
UTC+08:00 0 0 0
CTE0.A.IOM4.P3 0 0 0 2023-10-17/08:35:34
UTC+08:00 0 0 0
CTE0.B.IOM4.P0 0 0 0 2023-10-17/08:35:51
UTC+08:00 0 0 0
CTE0.B.IOM4.P1 0 0 0 2023-10-17/08:35:51
UTC+08:00 0 0 0
CTE0.B.IOM4.P2 0 0 0 2023-10-17/08:35:51
UTC+08:00 0 0 0
CTE0.B.IOM4.P3 0 0 0 2023-10-17/08:35:51
UTC+08:00 0 0 0
CTE0.A.MGMT 0 0 0 2023-10-
17/08:35:14 UTC+08:00 0 0 0
CTE0.B.MGMT 0 0 0 2023-10-17/08:35:31
UTC+08:00 0 0 0
CTE0.A.MAINTENANCE 0 0 0 2023-10-
17/08:35:14 UTC+08:00 0 0 0
CTE0.B.MAINTENANCE 0 0 0 2023-10-
17/08:35:31 UTC+08:00 0 0 0
FC port:
SAS port:
FCoE port:

PCIE port:
RDMA port:

ID Error Packets Lost Packets Over Flowed Packets CRC Errors Frame Errors
Frame Length Errors
-------------- ------------- ------------ ------------------- ---------- ------------ -------------------
CTE0.A.IOM2.P0 0 0 0 0 0
0
CTE0.A.IOM2.P1 0 0 0 0 0
0
CTE0.A.IOM2.P2 0 0 0 0 0
0
CTE0.A.IOM2.P3 0 0 0 0 0
0
CTE0.B.IOM2.P0 0 0 0 0 0
0
CTE0.B.IOM2.P1 0 0 0 0 0
0
CTE0.B.IOM2.P2 0 0 0 0 0
0
CTE0.B.IOM2.P3 0 0 0 0 0
0
RoCE port:

Step 5 Analyze cache performance.

A cache is the key module that improves performance and user experience. When
analyzing the cache performance, pay attention to the impact of cache configuration on
the write performance.
If the cache write policy is write-back, a write success message is returned to the host as
soon as each write I/O arrives at the cache. Then, the cache sorts and combines data
before writing to disks.
If problems such as backup battery unit (BBU) fault, single-controller state, controller
overheating, and excessive number of LUN fault pages occur during service running, the
LUN health status may switch to write protection. In this situation, data cannot be
written to disks but can be read. Write protection prevents data in disks from being
modified.
You can query the properties of a LUN on DeviceManager or the CLI and obtain the
current LUN health status and cache write policy.
To query the information on DeviceManager:

To query the information on the CLI, run the show lun general command.

admin:/>show lun general lun_name=LUN_OLTP_Linux

ID :1
Name : LUN_OLTP_Linux
Pool ID :0
Capacity : 4.000GB
Subscribed Capacity : 2.203MB
Protection Capacity : 0.000B
Sector Size : 512.000B
Health Status : Normal
Running Status : Online
Type : Thin
IO Priority : Low
WWN : 6030000100040506137346a400000001
Exposed To Initiator : Yes
Data Distributing : --
Write Policy : Write Back
Running Write Policy : Write Back
Prefetch Policy : None
Read Cache Policy : --
Write Cache Policy : --
Cache Partition ID : --
Prefetch Value : --
Owner Controller : --
Work Controller : --
Snapshot ID(s) : --
LUN Copy ID(s) : --
Remote Replication ID(s) : --
Split Clone ID(s) : --
Relocation Policy : --
Initial Distribute Policy : --
SmartQoS Policy ID : --

Protection Duration(days) : --
Has Protected For(h) : --
Estimated Data To Move To Tier0 : --
Estimated Data To Move To Tier1 : --
Estimated Data To Move To Tier2 : --
Is Add To Lun Group : Yes
Smart Cache Partition ID : --
DIF Switch : No
Remote LUN WWN : --
Disk Location : Internal
LUN Migration : --
Progress(%) : --
Smart Cache Cached Size : --
Smart Cache Hit Rage(%) :0
Mirror Type : --
Thresholds Percent(%) : 90
Thresholds Switch : Off
Usage Type : Internal
HyperMetro ID(s) : --
Dedup Enabled : --
Compression Enabled : --
Workload Type Name : Oracle_OLTP
Is Clone : No
LUN Clone ID(s) : --
Snapshot Schedule ID : --
Description :
HyperCopy ID(s) : --
HyperCDP Schedule ID : --
LUN consistency group ID : --
Clone ID(s) : --
LUN protection group ID(s) : --
Function Type : Lun
NGUID : 7100040506137346030000a400000001
Create Time : 20XX-XX-XX/11:06:17 UTC+08:00
Vstore ID :0
Workload Type ID :2

Health Status refers to the LUN health status.


Write Policy refers to the configured cache write policy.
Running Write Policy refers to the cache write policy currently in use.

2.4.4 Quiz
What are the common performance problems?
[Suggested answer]
From the perspective of users, storage performance is reflected through the application
response time or service processing duration. Common performance problems are as
follows:
1. I/O latency is high and users can obviously feel the slow response.
2. The IOPS and bandwidth do not meet customers' service requirements.
3. Performance data fluctuates significantly.

2.5 Performance Test


2.5.1 Performance Test Using fio
fio is an I/O tool used to perform stress tests and verification on hardware. This section
describes how to perform sequential read bandwidth tests and random mixed read/write
IOPS tests using fio.

Step 1 Install fio.

The Yum mode is recommended.

[root@linux ~]# yum install fio


...
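
You can verify the installation by querying the version. The outputs in this guide were
generated with fio 3.7, so the details of your output may differ:

[root@linux ~]# fio --version
fio-3.7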

Step 2 Perform sequential read bandwidth tests.

Test the two LUNs mounted in the OLTP and OLAP scenarios and check the performance
difference between the LUNs.
Use 8 KB I/Os and 512 KB I/Os to perform tests on /dev/sdb in the scenario where the
LUN is created with the application type of Oracle_OLTP and default block size of 8 KB.
Perform an 8 KB read test.

[root@linux cpu]# fio --name=RB --filename=/dev/sdb --bs=8k --numjobs=4 --iodepth=1 --rw=read --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=read, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 4 processes
Jobs: 4 (f=4): [R(4)][100.0%][r=21.1MiB/s,w=0KiB/s][r=2698,w=0 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=4): err= 0: pid=8017: Wed Nov 1 15:27:05 2023
read: IOPS=3360, BW=26.2MiB/s (27.5MB/s)(4725MiB/180002msec)
slat (usec): min=3, max=203, avg= 9.94, stdev= 4.44
clat (usec): min=898, max=414371, avg=1179.49, stdev=1132.98
lat (usec): min=1006, max=414383, avg=1189.57, stdev=1132.99
clat percentiles (usec):
| 1.00th=[ 1037], 5.00th=[ 1045], 10.00th=[ 1057], 20.00th=[ 1074],
| 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1090], 60.00th=[ 1106],
| 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1336],
| 99.00th=[ 2540], 99.50th=[ 3949], 99.90th=[10552], 99.95th=[16057],
| 99.99th=[40633]
bw ( KiB/s): min= 2464, max= 7216, per=25.01%, avg=6723.49, stdev=543.14, samples=1437
iops : min= 308, max= 902, avg=840.43, stdev=67.89, samples=1437
lat (usec) : 1000=0.01%
lat (msec) : 2=98.32%, 4=1.18%, 10=0.39%, 20=0.08%, 50=0.02%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%
cpu : usr=0.20%, sys=1.54%, ctx=604871, majf=0, minf=145
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=604820,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=4725MiB
(4955MB), run=180002-180002msec

Disk stats (read/write):


sdb: ios=604579/0, merge=0/0, ticks=706972/0, in_queue=706599, util=99.98%

You can view the bandwidth change of the storage system on DeviceManager.

Perform a 512 KB read test.

[root@linux cpu]# fio --name=RB --filename=/dev/sdb --bs=512k --numjobs=4 --iodepth=1 --rw=read --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 4 processes
Jobs: 4 (f=4): [R(4)][100.0%][r=282MiB/s,w=0KiB/s][r=563,w=0 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=4): err= 0: pid=10255: Wed Nov 1 15:34:04 2023
read: IOPS=501, BW=251MiB/s (263MB/s)(44.1GiB/180008msec)
slat (usec): min=17, max=372, avg=34.60, stdev=13.41
clat (msec): min=3, max=222, avg= 7.94, stdev= 3.95
lat (msec): min=3, max=222, avg= 7.97, stdev= 3.95
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 6],
| 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 9], 60.00th=[ 9],
| 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12],
| 99.00th=[ 15], 99.50th=[ 20], 99.90th=[ 41], 99.95th=[ 48],
| 99.99th=[ 115]
bw ( KiB/s): min=16384, max=135168, per=25.00%, avg=64174.85, stdev=20798.48,
samples=1439
iops : min= 32, max= 264, avg=125.32, stdev=40.63, samples=1439
lat (msec) : 4=18.79%, 10=52.00%, 20=28.74%, 50=0.43%, 100=0.03%
lat (msec) : 250=0.01%
cpu : usr=0.04%, sys=0.56%, ctx=90284, majf=0, minf=649
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=90266,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=251MiB/s (263MB/s), 251MiB/s-251MiB/s (263MB/s-263MB/s), io=44.1GiB (47.3GB),
run=180008-180008msec

Disk stats (read/write):


sdb: ios=90194/0, merge=0/0, ticks=714948/0, in_queue=714825, util=99.99%

You can view the port bandwidth on DeviceManager.

Use 32 KB I/Os and 512 KB I/Os to perform read tests on /dev/sdc in the scenario where
the LUN is created with the application type of Oracle_OLAP and default block size of 32
KB.
Perform a 32 KB read test.

[root@linux cpu]# fio --name=RB --filename=/dev/sdc --bs=32k --numjobs=4 --iodepth=1 --rw=read --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=read, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 4 processes
Jobs: 4 (f=4): [R(4)][100.0%][r=105MiB/s,w=0KiB/s][r=3350,w=0 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=4): err= 0: pid=13241: Wed Nov 1 15:43:39 2023
read: IOPS=3052, BW=95.4MiB/s (100MB/s)(16.8GiB/180001msec)
slat (usec): min=4, max=255, avg=11.55, stdev= 6.35
clat (usec): min=899, max=136651, avg=1297.56, stdev=1377.35
lat (usec): min=1060, max=136662, avg=1309.27, stdev=1377.34
clat percentiles (usec):
| 1.00th=[ 1074], 5.00th=[ 1090], 10.00th=[ 1106], 20.00th=[ 1123],
| 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172],
| 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1614], 95.00th=[ 1893],
| 99.00th=[ 2802], 99.50th=[ 4490], 99.90th=[12911], 99.95th=[21627],
| 99.99th=[76022]
bw ( KiB/s): min= 7872, max=27584, per=24.99%, avg=24416.62, stdev=2656.91, samples=1436
iops : min= 246, max= 862, avg=763.01, stdev=83.03, samples=1436
lat (usec) : 1000=0.02%
lat (msec) : 2=97.66%, 4=1.72%, 10=0.46%, 20=0.08%, 50=0.04%
lat (msec) : 100=0.01%, 250=0.01%
cpu : usr=0.19%, sys=1.54%, ctx=549558, majf=0, minf=173
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=549531,0,0,0 short=0,0,0,0 dropped=0,0,0,0

latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=95.4MiB/s (100MB/s), 95.4MiB/s-95.4MiB/s (100MB/s-100MB/s), io=16.8GiB (18.0GB),
run=180001-180001msec

Disk stats (read/write):


sdc: ios=549151/0, merge=0/0, ticks=706854/0, in_queue=706479, util=99.99%

You can view the port bandwidth on DeviceManager.

Perform a 512 KB read test.

[root@linux cpu]# fio --name=RB --filename=/dev/sdc --bs=512k --numjobs=4 --iodepth=1 --rw=read --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 4 processes
Jobs: 4 (f=4): [R(4)][100.0%][r=180MiB/s,w=0KiB/s][r=359,w=0 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=4): err= 0: pid=14212: Wed Nov 1 15:47:18 2023
read: IOPS=344, BW=172MiB/s (181MB/s)(30.3GiB/180010msec)
slat (usec): min=19, max=213, avg=37.68, stdev=10.02
clat (msec): min=3, max=230, avg=11.57, stdev= 3.13
lat (msec): min=3, max=230, avg=11.61, stdev= 3.13
clat percentiles (msec):
| 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12],
| 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12],
| 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12],
| 99.00th=[ 17], 99.50th=[ 23], 99.90th=[ 52], 99.95th=[ 67],
| 99.99th=[ 153]
bw ( KiB/s): min=24576, max=48128, per=25.00%, avg=44093.84, stdev=1886.17, samples=1440
iops : min= 48, max= 94, avg=86.11, stdev= 3.68, samples=1440
lat (msec) : 4=0.06%, 10=0.97%, 20=98.28%, 50=0.57%, 100=0.09%
lat (msec) : 250=0.02%
cpu : usr=0.03%, sys=0.40%, ctx=62031, majf=0, minf=649
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=62012,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=30.3GiB (32.5GB),
run=180010-180010msec

Disk stats (read/write):


sdc: ios=61969/0, merge=0/0, ticks=716291/0, in_queue=716217, util=99.98%

You can view the port bandwidth on DeviceManager.

Step 3 Perform random mixed read/write IOPS tests.

Use fio to simulate the impact of I/Os with different read/write ratios on storage
performance in OLTP and OLAP scenarios.
Use 8 KB random I/Os (70% read I/Os and 30% write I/Os) to test the two LUNs.
Perform tests on /dev/sdb.

[root@linux cpu]# fio --name=RB --filename=/dev/sdb --bs=8k --numjobs=64 --iodepth=1 --rw=randrw --rwmixread=70 --ba=8k --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=randrw, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 64 processes
Jobs: 64 (f=64): [m(64)][100.0%][r=48.0MiB/s,w=21.2MiB/s][r=6271,w=2718 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=64): err= 0: pid=17577: Wed Nov 1 15:57:57 2023
read: IOPS=9274, BW=72.5MiB/s (75.0MB/s)(12.7GiB/180026msec)
slat (usec): min=2, max=453, avg= 8.63, stdev= 6.44
clat (usec): min=845, max=430799, avg=4727.05, stdev=9417.36
lat (usec): min=1025, max=430806, avg=4735.82, stdev=9417.13
clat percentiles (usec):
| 1.00th=[ 1074], 5.00th=[ 1139], 10.00th=[ 1221], 20.00th=[ 1434],
| 30.00th=[ 1762], 40.00th=[ 2147], 50.00th=[ 2507], 60.00th=[ 3032],
| 70.00th=[ 3851], 80.00th=[ 5407], 90.00th=[ 9372], 95.00th=[ 15270],
| 99.00th=[ 35390], 99.50th=[ 47449], 99.90th=[105382], 99.95th=[170918],
| 99.99th=[329253]
bw ( KiB/s): min= 64, max= 2272, per=1.56%, avg=1159.37, stdev=345.13, samples=23037
iops : min= 8, max= 284, avg=144.89, stdev=43.15, samples=23037
write: IOPS=3968, BW=31.0MiB/s (32.5MB/s)(5581MiB/180026msec)
slat (usec): min=3, max=405, avg= 9.97, stdev= 6.88
clat (msec): min=2, max=433, avg= 5.04, stdev=10.25

lat (msec): min=2, max=433, avg= 5.05, stdev=10.25


clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 5],
| 70.00th=[ 5], 80.00th=[ 6], 90.00th=[ 7], 95.00th=[ 10],
| 99.00th=[ 23], 99.50th=[ 42], 99.90th=[ 159], 99.95th=[ 247],
| 99.99th=[ 376]
bw ( KiB/s): min= 16, max= 1024, per=1.56%, avg=496.03, stdev=160.16, samples=23037
iops : min= 2, max= 128, avg=61.96, stdev=20.02, samples=23037
lat (usec) : 1000=0.01%
lat (msec) : 2=25.06%, 4=42.00%, 10=25.24%, 20=5.08%, 50=2.17%
lat (msec) : 100=0.30%, 250=0.10%, 500=0.03%
cpu : usr=0.06%, sys=0.35%, ctx=2384165, majf=0, minf=2347
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1669663,714385,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=72.5MiB/s (75.0MB/s), 72.5MiB/s-72.5MiB/s (75.0MB/s-75.0MB/s), io=12.7GiB (13.7GB),
run=180026-180026msec
WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=5581MiB
(5852MB), run=180026-180026msec

View the port bandwidth and IOPS information.

Perform tests on /dev/sdc using 32 KB I/Os.

[root@linux cpu]# fio --name=RB --filename=/dev/sdc --bs=32k --numjobs=64 --iodepth=1 --rw=randrw --rwmixread=70 --ba=64k --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based
RB: (g=0): rw=randrw, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=1
...
fio-3.7

Starting 64 processes
Jobs: 64 (f=64): [m(64)][100.0%][r=226MiB/s,w=95.1MiB/s][r=7232,w=3044 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=64): err= 0: pid=20304: Wed Nov 1 16:06:24 2023
read: IOPS=6609, BW=207MiB/s (217MB/s)(36.3GiB/180025msec)
slat (usec): min=3, max=504, avg=10.10, stdev= 8.84
clat (usec): min=963, max=407875, avg=7263.47, stdev=9819.44
lat (usec): min=1076, max=407880, avg=7273.72, stdev=9819.09
clat percentiles (usec):
| 1.00th=[ 1270], 5.00th=[ 2024], 10.00th=[ 2278], 20.00th=[ 2802],
| 30.00th=[ 3294], 40.00th=[ 3884], 50.00th=[ 4555], 60.00th=[ 5538],
| 70.00th=[ 6980], 80.00th=[ 9372], 90.00th=[ 14615], 95.00th=[ 21103],
| 99.00th=[ 40109], 99.50th=[ 50594], 99.90th=[ 96994], 99.95th=[181404],
| 99.99th=[316670]
bw ( KiB/s): min= 320, max= 6080, per=1.56%, avg=3304.46, stdev=750.63, samples=23031
iops : min= 10, max= 190, avg=103.22, stdev=23.46, samples=23031
write: IOPS=2830, BW=88.4MiB/s (92.7MB/s)(15.5GiB/180025msec)
slat (usec): min=3, max=417, avg=10.48, stdev= 8.96
clat (msec): min=2, max=408, avg= 5.61, stdev= 7.11
lat (msec): min=2, max=408, avg= 5.62, stdev= 7.11
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4],
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 5], 60.00th=[ 5],
| 70.00th=[ 6], 80.00th=[ 7], 90.00th=[ 9], 95.00th=[ 12],
| 99.00th=[ 22], 99.50th=[ 29], 99.90th=[ 84], 99.95th=[ 140],
| 99.99th=[ 342]
bw ( KiB/s): min= 64, max= 2880, per=1.56%, avg=1415.28, stdev=406.68, samples=23028
iops : min= 2, max= 90, avg=44.18, stdev=12.71, samples=23028
lat (usec) : 1000=0.01%
lat (msec) : 2=3.31%, 4=38.07%, 10=43.78%, 20=10.57%, 50=3.85%
lat (msec) : 100=0.33%, 250=0.07%, 500=0.02%
cpu : usr=0.05%, sys=0.28%, ctx=1699503, majf=0, minf=2405
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1189857,509541,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):


READ: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=36.3GiB (38.0GB),
run=180025-180025msec
WRITE: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=15.5GiB (16.7GB),
run=180025-180025msec

Disk stats (read/write):


sdc: ios=1189164/509218, merge=0/0, ticks=8618190/2849163, in_queue=9999314, util=100.00%

2.5.2 Performance Test Using Iometer


Iometer is used to test disk and network I/O performance. With it, you can test metrics
like maximum disk I/O performance and maximum data throughput by setting
parameters such as data blocks and queue depth for read/write or write tests. You can
also set different parameters to simulate the read and write performance of the disk
system in real environments such as the web server and file server.

Download Iometer by referring to section 1.2 and perform the following operations to
test the storage device mounted to Windows host DCA-Win:

Step 1 Run Iometer.

Decompress the downloaded installation package. The obtained folder contains the
Dynamo.exe and IOmeter.exe files. Run IOmeter.exe as the administrator.

Step 2 Test the storage device.

Switch to the Disk Targets tab page, and select disk E:\ to be tested.

Switch to the Access Specifications tab page, select the type for the test (for example, 4
KiB; 25% Read; 0% random), and click Add.

Switch to the Results Display tab page. Set Results Since to Start of Test and Update
Frequency (seconds) to 5.

Click the green flag (Start Tests) button on the toolbar to start the test. In the displayed
dialog box, select the path for saving the test result. Wait several minutes until the
iobw.tst test file generated by Iometer has been written to the disk.
View the test information.

Click the stop button on the toolbar to stop the test.

Step 3 Perform the test in a custom scenario.

Switch to the Access Specifications tab page. Select 4 KiB; 25% Read; 0% random and
click Remove. Click New to create a scenario configuration.

The configuration is detailed as follows.



Switch to the Disk Targets tab page, and select disk Z:\ to be tested.

Switch to the Access Specifications tab page, select the type for the test (for example,
select FileSystem for the first scenario with the maximum I/O processing capability), and
click Add.

Perform the test.

2.5.3 Quiz
How do I test storage performance using 8 KB random I/Os (30% read I/Os and 70%
write I/Os)?
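[Suggested answer]
A minimal sketch based on the mixed read/write command in section 2.5.1 (replace
/dev/sdb with the actual device): set --rwmixread=30 so that 30% of the random I/Os are
reads and the remaining 70% are writes.

fio --name=RW --filename=/dev/sdb --bs=8k --numjobs=64 --iodepth=1 --rw=randrw --rwmixread=30 --ba=8k --ioengine=libaio --direct=1 --norandommap --group_reporting --runtime=180 --time_based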
Huawei Storage Certification Training

HCIP-Storage

Storage System O&M Management and Troubleshooting

Lab Guide
Issue: 5.5

Huawei Technologies Co., Ltd.



About This Document

Overview
This lab guide provides Huawei flash storage O&M and troubleshooting practices to help
trainees consolidate and review previously learned content.

Description
This lab guide introduces the following two lab practices, covering O&M, tool, and
troubleshooting operations.
Lab practice 1: O&M management
Lab practice 2: Troubleshooting

Background Knowledge Required


This lab guide is for HCIP certification. To better understand this course, familiarize
yourself with:
⚫ Basic storage knowledge. You are advised to complete the study of HCIA-Storage and
pass the HCIA-Storage certification exam.
⚫ Basic knowledge of Linux operating system (OS) operations and networks.

Contents

About This Document
Overview
Description
Background Knowledge Required
1 References and Tools
1.1 References
1.2 Software and Tools
2 Storage System O&M Management and Troubleshooting
2.1 Introduction
2.1.1 About This Lab Practice
2.1.2 Objectives
2.1.3 Networking Topology
2.2 Routine Management
2.2.1 Managing a License File
2.2.2 Creating a Local User
2.2.3 Configuring a Security Policy
2.2.4 Viewing and Handling an Alarm
2.3 SmartKit
2.3.1 Installing SmartKit
2.3.2 Running SmartKit
2.3.3 Using SmartKit to Perform Inspection
3 Lab Practice Troubleshooting
3.1 Collecting Storage System Information
3.1.1 Collecting Logs on DeviceManager
3.2 Troubleshooting Disk Module Faults
3.2.1 Setting Faults
3.2.2 Troubleshooting
3.3 Troubleshooting Interface Module Faults
3.3.1 Setting Faults
3.3.2 Troubleshooting

1 References and Tools

1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.

Product documentation: Log in to Huawei's technical support website
(https://support.huawei.com/enterprise/) and enter the name of a document or tool in
the search box to search for, browse, and download the desired document or tool.

1.2 Software and Tools


1. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or tool.

2. WinSCP
Use WinSCP to transfer files between Windows and Linux OSs. You can use the common
domain name (winscp.net) of WinSCP to browse or download the desired document or
tool.

3. Huawei OceanStor UltraPath
Log in to Huawei's technical support website (https://support.huawei.com/enterprise/)
and type UltraPath in the search box to search for, browse, and download the desired
document or tool.

4. eStor
Log in to Huawei's technical support website (https://support.huawei.com/enterprise/)
and type eStor in the search box to search for, browse, and download the desired
document or tool.

2 Storage System O&M Management and Troubleshooting

2.1 Introduction
2.1.1 About This Lab Practice
Assume that a company purchases a Huawei all-flash storage device to run multiple
services. In its data center DCA, the company creates a LUN on DCA-Storage and plans
to map the LUN to Linux host DCA-Host. To ensure normal service running, O&M
personnel need to install SmartKit on DCA-Win for O&M management and
troubleshooting.

2.1.2 Objectives
⚫ Implement storage system O&M management.
⚫ Learn how to use SmartKit.
⚫ Perform troubleshooting.

2.1.3 Networking Topology


⚫ Lab devices

Device Type             Quantity    Software Version
Linux host              1           CentOS 7.6
Windows host            1           Windows Server 2016 and SmartKit 22.0
Huawei flash storage    1           6.1.5, UltraPath 31.2.0

⚫ Networking topology

⚫ Lab network information

Site   Device        Management IP Address   Storage IP Address   Description
DCA    DCA-Host      192.168.0.33            192.168.1.33         Block services
                                             192.168.2.33         File services
DCA    DCA-Win       192.168.0.34            192.168.1.34         Block services
                                             192.168.2.34         File services
DCA    DCA-Storage   192.168.0.30            192.168.1.30         Block service A
                                             192.168.1.31         Block service B
                                             192.168.2.30         File service A
                                             192.168.2.31         File service B

The IP addresses used in this lab are for reference only. The actual IP addresses are
subject to the lab environment plan.

2.2 Routine Management


2.2.1 Managing a License File
Step 1 Check the active license file.

Log in to DeviceManager. Choose Settings > License Management.


In the middle information pane, verify the information about the active license file.

⚫ By runtime: Displays the Invalid Date of each license. If licenses are controlled by
runtime, their Used/Total Capacity is Unlimited or N/A.
⚫ By capacity: Displays the Used/Total Capacity of each license. If licenses are
controlled by capacity, their Invalid Date is Permanent.

Step 2 Back up the active license file.

Log in to DeviceManager, choose Settings > License Management, and click Back Up
License.

The license file is downloaded and saved in the save path set in the browser.
----End

2.2.2 Creating a Local User


Log in to DeviceManager and choose Settings > User and Security > Users and Roles >
Users. On the Users tab page, click Create.

The Create User dialog box is displayed. Set user information as follows:
Type: Local user
Username/Password: user-defined
Role: Administrator
Retain the default values for other parameters and click OK.

The Execution Result page is displayed. Click Close.



After the user is created, log out and log in as the new user. After confirming that the
new user can access the system, log out and log in again as user admin.

2.2.3 Configuring a Security Policy


Step 1 Configure an account policy.

Log in to DeviceManager, choose Settings > User and Security > Security Policies, and
click Modify on the right of Account Policy.

Click Advanced to expand the advanced settings of Account Policy. Set the parameters
as follows:
Set Complexity under Password Policy to A password must contain special characters,
uppercase letters, lowercase letters, and digits.
Retain the default values for other parameters and click Save.

Then, you can create a user by referring to section "Creating a Local User" and verify that
the password policy takes effect.

Step 2 Configure a login policy.

Log in to DeviceManager. Choose Settings > User and Security > Security Policies, and
click Modify on the right of Login Policy.

Click Advanced to expand the advanced settings of Login Policy. Set the parameters as
follows:
Automatic Unlock In: 3 minutes
Retain the default values for other parameters and click Save.

After the operation is complete, log out of the system and log in again. Enter an
incorrect password three consecutive times to verify that the account is locked
automatically. Wait for 3 minutes and then log in to verify that the account has been
unlocked automatically.
----End

2.2.4 Viewing and Handling an Alarm


Log in to DeviceManager. Choose Insight > Alarms and Events > Current Alarms.
Click Send Simulated Alarm to simulate a fault alarm.

Click the column settings icon on the right to customize the columns to be displayed.

Click the filter icon next to Severity, Object, or Occurred to filter alarms.

Click the search icon next to ID to search for alarms.


Click the content in the Description column of an alarm and handle the alarm by
referring to Suggestion on the right.

Select the alarm to be exported and click the export icon to export it as an .xls file.

Select the simulated alarm from the list box and click Clear. In the Clear Alarm dialog
box, click OK.

2.3 SmartKit
2.3.1 Installing SmartKit
Double-click the installation software to install SmartKit.
The Setup page is displayed. Click Next.

On the License Agreement page that is displayed, select I accept the agreement and
click Next.

On the Select Destination Location page that is displayed, click Next.



On the Role Selection page that is displayed, select Install just for me or Install for all
users and click Next.

On the Ready to Install page that is displayed, click Install.



After the installation is complete, the Completing the SmartKit Setup Wizard page is
displayed. Click Finish.

2.3.2 Running SmartKit


Run SmartKit on the desktop of the maintenance terminal to enter the login
authentication page.
Generally, you need to select a region, scan the QR code, and enter the authentication
code for login authentication. If no external network is available, you can select
Authenticate Later to skip login authentication and directly access the SmartKit home
page. However, functions such as online installation and upgrade are unavailable.
In this lab practice, select Authenticate Later.

The Operation Safety Precautions page is displayed. Click Confirm.

If you use SmartKit for the first time, a usage wizard page is displayed introducing major
functions of SmartKit.

The Welcome to SmartKit page is displayed in the wizard, including a description of the
application fields and paths.

Click the next button. The wizard page displays the functions compatible with its predecessor
OceanStor Toolkit and provides the O&M functions in service scenarios, covering the
storage, server, and cloud computing fields.

Click the next button. The Function Management page is displayed. You can install or upgrade the
required functions with one click. If the network is disconnected, you can export the
function package from another SmartKit environment and import it to the current
environment to perform offline installation or upgrade.

Click the next button. The Devices page is displayed. Once you add a device to the device list, you
do not need to add it again when you perform operations later. You can also add devices
in batches.

Click the next button. The Update reminder page is displayed. If a new version is available or user
authentication has expired or will soon expire, the system sends a notification. You can
set the frequency of notifications in advanced settings.
Click Start.

The software page is displayed.

2.3.3 Using SmartKit to Perform Inspection


Step 1 Start SmartKit.

Start SmartKit to go to the main page.

Step 2 Add a device.

On the main page, click the Devices tab and click Add.

The Add Device Step 2-1: Basic Information dialog box is displayed. Enter the basic
information as follows:
IP Address: 192.168.0.30
Select No Proxy under Select Proxy. Click Next.

Enter configuration information, including the username, password, and port of the
device. Click Finish.

After the storage device is added, it will be displayed in the device list.

Step 3 Inspect devices.

Choose Routine Maintenance > More on the main page.



On the Routine Maintenance page that is displayed, click Inspection.

On the Inspection page, click Inspection.



On the Inspection Wizard Step 5-1: Welcome page, select Routine inspection and click
Next.

On the Inspection Wizard Step 5-2: Select Devices page, select a storage device and
click Next.

On the Inspection Wizard Step 5-3: Select Check Items page, select the required check
items and click Next.

In the Inspection Wizard Step 5-4: Set Check Policy dialog box, set a path for saving
the result file and click Next.

The Inspection Wizard Step 5-5: Start Inspection page is displayed. During the
inspection, you can view the progress and results of the executed check items. When the
inspection is complete, click a check item to view its details.

After the inspection is complete, you can view the inspection result and view the report
for analysis.

----End

3 Lab Practice Troubleshooting

3.1 Collecting Storage System Information


3.1.1 Collecting Logs on DeviceManager
Log in to DeviceManager and choose Export Data.

Select System Log and click Recent Log, All Logs, or Key Log. In the warning dialog box
that is displayed, select I have read and understand the consequences associated with
performing this operation. and click OK. The system starts collecting logs.

Perform two of the following tasks for practice:


⚫ Select Disk Log and click DHA Runtime Log, HSSD Log, or Disk Data Erasure
Report. In the warning dialog box that is displayed, select I have read and
understand the consequences associated with performing this operation. and
click OK. The system starts collecting logs and expands the log list.
⚫ Select Configuration Info and click Export. In the warning dialog box that is
displayed, select I have read and understand the consequences associated with
performing this operation. and click OK to export the configuration information.
⚫ Select Diagnostic File and click Export. In the warning dialog box that is displayed,
select I have read and understand the consequences associated with performing
this operation. and click OK to export the fault information of the device.

⚫ Select Antivirus Log and click Export. In the warning dialog box that is displayed,
select I have read and understand the consequences associated with performing
this operation. and click OK to export the antivirus scanning information of the
device.
⚫ Select FTDS Log and click Export. In the warning dialog box that is displayed, select I
have read and understand the consequences associated with performing this
operation. and click OK to export FTDS logs of the device.
⚫ Select Performance File, set a file date range, and click Export. In the warning
dialog box that is displayed, select I have read and understand the consequences
associated with performing this operation. and click OK to export the performance
file of the device.
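
In addition to exporting files from DeviceManager, you can collect basic system information
from the CLI for a quick overview. The following is a minimal sketch, assuming that the
show system general command is supported by your storage version (check the product CLI
reference if it is not):

admin:/>show system general

The output (system name, product version, health status, and so on) is useful to record
together with the exported logs when reporting a problem.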

3.2 Troubleshooting Disk Module Faults


3.2.1 Setting Faults
Use the SSH tool to log in to the storage device, go to the command line interface (CLI),
and run the following commands to power off disk CTE0.0:

admin:/>change user_mode current_mode user_mode=engineer


engineer:/>poweroff disk disk_id=CTE0.0
DANGER: You are going to power off the disk. This operation causes the disk to be
inaccessible.
Suggestion: Before you perform this operation, ensure that the disk properties are correct
and the related storage pool is in the proper state. Back up the data before powering
off the disk.
Have you read danger alert message carefully?(y/n)Y

Are you sure you really want to perform the operation?(y/n)Y


Command executed successfully.
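
To verify that the fault has been injected, you can query the disk in the same CLI session.
This is a minimal sketch that assumes the show disk general command is available in your
CLI version:

engineer:/>show disk general disk_id=CTE0.0

If the power-off succeeded, the status fields of disk CTE0.0 should no longer indicate a
normal online state.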

3.2.2 Troubleshooting
3.2.2.1 Symptom
Log in to DeviceManager and choose System > Hardware > Devices. On the front view
of a 2 U controller enclosure or a disk enclosure that is marked by an exclamation mark
(!), click the identified disk module. You can see that Health Status of the disk module is
Faulty.

3.2.2.2 Alarm Information


On the Alarms and Events page of DeviceManager, click the Current Alarms tab. A fault
alarm indicating that the disk has been removed from the disk domain is displayed. You are
advised to reinsert the removed disk.
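
The current alarms can also be queried from the CLI. This is a minimal sketch, assuming
that the show alarm command is supported by your storage version:

admin:/>show alarm

In the output, look for the alarm indicating that the disk has been removed from the disk
domain.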

3.2.2.3 Fault Locating


Check the health status of hardware devices and the alarm information. Disk CTE0.0 is
found to be offline. Therefore, the disk module may have been removed or powered off.

3.2.2.4 Fault Restoration


Use the SSH tool to log in to the storage device, go to the CLI, and run the following
commands to power on disk CTE0.0.

admin:/>change user_mode current_mode user_mode=engineer


engineer:/>poweron disk disk_id=CTE0.0
Command executed successfully.

After the operation is complete, check whether the alarm is cleared and Health Status of
the disk module is Normal.
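
You can also confirm the recovery from the CLI. This is a minimal sketch under the same
assumption as before, namely that the show disk general command is available in your CLI
version:

engineer:/>show disk general disk_id=CTE0.0

The status fields of disk CTE0.0 should now indicate a normal state; if the alarm is not
cleared immediately, wait a moment and refresh the Current Alarms page.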

3.3 Troubleshooting Interface Module Faults


3.3.1 Setting Faults

Log in to DeviceManager, choose System > Hardware > Devices, and click the rear view icon.

The system switches to the rear view of the controller enclosure. Click the IOM4 interface
module of controller A, that is, CTE0.A.IOM4.

In the interface module dialog box that is displayed, choose Operation > Power Off.

In the high-risk warning dialog box that is displayed, confirm the warning information and
click OK.

3.3.2 Troubleshooting
3.3.2.1 Symptom

Log in to DeviceManager and choose System > Hardware > Devices. Click the rear view icon
to switch to the rear view of the storage device. Click the interface module in the yellow
square. Running Status of the interface module is Powered off.

3.3.2.2 Alarms and Events


On the Alarms and Events page of DeviceManager, click the All Events tab. The event
information indicating that the system has successfully powered off the interface module is
displayed.

3.3.2.3 Fault Locating


Check the running status of hardware devices and query event information. Interface
module CTE0.A.IOM4 is found to be powered off.

3.3.2.4 Fault Restoration

Log in to DeviceManager, choose System > Hardware > Devices, and click the rear view icon
to switch to the rear view of the controller enclosure. Click the interface module in the
yellow square. The Interface Module dialog box is displayed. Choose Operation > Power
On.

The Success dialog box is displayed, indicating that the operation is successful.
After the operation is complete, check whether Running Status of the interface module
is Running.
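
As a cross-check, the interface module status can also be queried from the CLI. The command
name below is an assumption based on common OceanStor CLI conventions and may differ in
your version, so verify it against the product CLI reference:

admin:/>show intf_module

In the output, locate CTE0.A.IOM4 and confirm that its running status is normal.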
