HCIP-Storage V5.5 Lab Guide
HCIP-Storage
SmartMulti-Tenant Feature Configuration
Lab Guide
ISSUE: 5.5
Copyright © Huawei Technologies Co., Ltd. 2024. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any
means without prior written consent of Huawei Technologies Co., Ltd.
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of
their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made
between Huawei and the customer. All or part of the products, services and features
described in this document may not be within the purchase scope or the usage scope.
Unless otherwise specified in the contract, all statements, information, and
recommendations in this document are provided "AS IS" without warranties,
guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has
been made in the preparation of this document to ensure accuracy of the contents, but
all statements, information, and recommendations in this document do not constitute
a warranty of any kind, express or implied.
Overview
This lab guide provides SmartMulti-Tenant feature configuration practices to help
trainees consolidate and review previously learned content. Upon completion of this
course, you will be able to get familiar with the configuration operations related to
SmartMulti-Tenant for Huawei flash storage.
Description
This lab guide provides one lab practice to simulate an environment where one physical
storage device has multiple vStores.
Lab practice 1: SmartMulti-Tenant configuration
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary depending on product versions.
1. Product documentation
2. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or
tool.
3. WinSCP
Use WinSCP to transfer files between Windows and Linux OSs. You can use the
common domain name (winscp.net) of WinSCP to browse or download the desired
document or tool.
4. eStor
2 SmartMulti-Tenant Feature Configuration
2.1 Introduction
2.1.1 About This Lab Practice
This lab practice aims to simulate the use of Huawei flash storage in providing storage
services for two departments of a company, including block and NAS storage services.
The SmartMulti-Tenant feature is used to configure these departments as Tenant-A and
Tenant-B to ensure independent use of storage services.
2.1.2 Objectives
⚫ Understand the principles and usage of SmartMulti-Tenant.
⚫ Master the configuration of SmartMulti-Tenant for block services and file services.
⚫ Networking topology
Device | Management IP Address | Storage IP Address | Description
⚫ Resource planning
Tenant | Resource | Capacity
Tenant-A | LUN-A01 | 3 GB
Tenant-B | FS-B01 | 2 GB
The IP addresses used in this lab are for reference only. The actual IP
addresses are subject to the lab environment plan.
Enter the username and password of the system user in Username and Password, and
click Log In. The DeviceManager home page is displayed.
Return to the Create vStore page and click Add to create the second logical port.
Choose Services > vStore Service > vStores. Click Tenant-B. On the page that is
displayed on the right, click the User Management tab and then Create.
Choose Services > Block Service > LUN Groups > LUNs and click Create.
Set the parameters as follows:
Name: LUN-A01
Capacity: 3 GB
Quantity: 1
Retain the default values for other parameters and click OK.
The Create Host dialog box is displayed on the right. Set the parameters as follows:
Name: Host-A01
OS: Linux
Click OK.
The Map LUN dialog box is displayed. Select Existing, select the LUN, and click OK.
If the iSCSI software is not installed on the OS or the version is not supported, install the
latest version.
Ensure the initiator name is easy to remember. If necessary, change it to ensure it is
unique. You are advised to replace the suffix with the host name.
Search for the target based on the IP address of logical port A1 of Tenant-A configured
on the storage system.
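The corresponding host-side commands are sketched below (the initiator suffix host-a01 and the logical port IP address 192.168.1.10 are placeholders; use the values planned for your environment):
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:host-a01
iscsiadm -m discovery -t st -p 192.168.1.10
iscsiadm -m node -p 192.168.1.10 --login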
Click the iSCSI tab, select the desired initiator, and click OK.
Choose Services > Block Service > LUN Groups > LUNs. The 3 GB LUN created by
user_a1 of Tenant-A is invisible.
Choose Services > Block Service > Host Groups > Hosts. The host created by user_a1
of Tenant-A is invisible.
Choose Services > File Service > File Systems and click Create. In the Create File
System dialog box, set the parameters as follows:
Name: FS-B01
Capacity: 2 GB
NFS: disabled
CIFS: disabled
Add to HyperCDP Schedule: disabled
Retain the default values for other parameters. Click OK.
Choose Services > File Service > Shares > NFS Shares. Click More on the right of the
desired NFS share and choose Add Client.
The client can access the NFS share through logical port B2 of Tenant-B.
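A minimal mount sequence on the client is sketched below (the logical port B2 IP address 192.168.2.20 is a placeholder; the share path follows the file system FS-B01 created above):
showmount -e 192.168.2.20
mount -t nfs 192.168.2.20:/FS-B01 /mnt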
HCIP-Storage
HyperCDP Feature Configuration
Lab Guide
ISSUE: 5.5
Overview
This lab guide provides HyperCDP feature configuration practices to help trainees
consolidate and review previously learned content. Upon completion of this course, you
will be able to get familiar with the configuration operations related to HyperCDP for
Huawei flash storage.
Content Description
This lab guide introduces two lab practices to simulate the scenarios where HyperCDP is
used to protect service data.
Lab practice 1: HyperCDP for block services
Lab practice 2: HyperCDP for file services
1.1 Introduction
About This Lab Practice
This lab practice is about HyperCDP for block services.
Objectives
⚫ Understand the principles and usage of HyperCDP for block services.
⚫ Master operations for configuring and managing HyperCDP for block services.
Networking Topology
⚫ Devices
⚫ Networking topology
⚫ Network information
Device Information | Management IP Address | Storage IP Address | Description
Note: The IP addresses used in this lab guide are for reference only. The actual IP
addresses are subject to the lab environment plan.
Use the FTP tool to upload the UltraPath installation package to the /root/up directory
on the service host.
Go to the /root/up directory and decompress the UltraPath installation package.
[root@dca-host ~]# cd up
[root@dca-host up]# ls
OceanStor_UltraPath_31.2.0_CentOS.zip
[root@dca-host up]# unzip OceanStor_UltraPath_31.2.0_CentOS.zip
Archive: OceanStor_UltraPath_31.2.0_CentOS.zip
creating: CentOS/
...
The installation program prompts that the installation is complete and asks if you want
to restart the system:
The installation is complete. Whether to restart the system now?
<Y|N>:
Log in to the host again and check whether UltraPath takes effect.
[root@dca-host ~]# upadmin check status
If the check result of each item is Pass, UltraPath has taken effect.
[root@dca-host ~]# upadmin check status
------------------------------------------------------------
Checking path status:
There is no array information.
Pass
------------------------------------------------------------
Checking environment and config:
Pass
------------------------------------------------------------
The Create LUN dialog box is displayed. Set LUN parameters and click OK.
Choose Services > Block Service > LUN Groups > LUN Groups and click Create. Set the
LUN group name, select LUN001, LUN002, LUN003, and LUN004, and click OK.
Specify the host name, select an operating system, and click OK.
Choose Services > Block Service > Host Groups > Host Groups, and click Create. Specify
the host group name, select the host, and click OK.
The Map LUN Group dialog box is displayed. Select HostGroup001 and click OK.
In the displayed Danger dialog box, confirm the warning and click OK.
On the displayed Create Logical Port page, set logical port parameters. Click OK.
Similarly, create another logical port. The following figure shows the result.
Note: If the iSCSI software package is not installed, install it before performing
subsequent operations.
To make the initiator name easy to identify, you need to change the initiator name and
replace the suffix with the host name dca-host.
[root@dca-host ~]# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:dca-host
Search for the targets based on the IP addresses of the logical ports configured on the
storage system:
[root@dca-host ~]# iscsiadm -m discovery -t st -p 192.168.1.30
192.168.1.30:3260,1025 iqn.2014-08.com.example::2100030000040506::20400:192.168.1.30
[root@dca-host ~]# iscsiadm -m discovery -t st -p 192.168.1.31
192.168.1.31:3260,1025 iqn.2014-08.com.example::2100030000040506::20400:192.168.1.31
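After discovery, log in to the targets (standard iscsiadm usage):
[root@dca-host ~]# iscsiadm -m node -p 192.168.1.30 --login
[root@dca-host ~]# iscsiadm -m node -p 192.168.1.31 --login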
Click the iSCSI tab, select the initiator with the corresponding host name, and click OK.
Click the host name. On the page that is displayed, click the Initiators tab. Ensure that
the Status of the initiator is Online.
When partitioning the mapped disk with fdisk, the tool reminds you that changes are kept in memory until you write them:
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Check that the HyperCDP object has been created and its running status is Activated.
The Create HyperCDP CG page is displayed. Select the protection group, and set the name of the HyperCDP CG
to HCDPCG001. Retain the default settings for other parameters and click OK.
Check that the HyperCDP CG has been created and its running status is Activated.
Click Next.
Configuration Policy: Set Fixed Period to every 1 minute and Retained Objects to 60.
Click Next.
Click OK.
In the Start HyperCDP Object Rollback dialog box that is displayed on the right, click
OK.
The running status is Rolling back and the rollback progress is displayed. After the
rollback is complete, the running status changes back to Activated.
Choose Services > Block Service > LUN Groups > LUNs. The created snapshot duplicate
is displayed.
Managing a HyperCDP CG
1.4.2.1 Modifying Properties of a HyperCDP CG
Choose Data Protection > Plans > Block HyperCDP > HyperCDP CGs.
Select the desired vStore from the vStore drop-down list in the upper left corner.
Click More on the right of the desired HyperCDP CG and select Modify.
In the Start HyperCDP CG Rollback dialog box that is displayed on the right, click OK.
The running status changes to Rolling back. After the rollback is complete, the running
status changes back to Activated.
Choose Services > Block Service > LUN Groups > LUNs. The created snapshot duplicate
is displayed.
The Add Member page is displayed. In the LUN area, select LUN001, add it to the right
pane, and click OK.
The Remove Member page is displayed. In the LUN area, select LUN001 and add it to
the right pane. Click OK.
2.1 Introduction
About This Lab Practice
This lab practice is about HyperCDP for file services.
Objectives
⚫ Understand the principles and usage of HyperCDP for file services.
⚫ Master operations for configuring and managing HyperCDP for file services.
Networking Topology
⚫ Devices
⚫ Networking topology
⚫ Network information
Device Information | Management IP Address | Storage IP Address | Description
Note: The IP addresses used in this lab are for reference only. The actual IP addresses are
subject to the lab environment plan.
On the displayed Create Logical Port page, set logical port parameters and click OK.
The Create NFS Share page is displayed on the right. Select file system FS001 for which
you want to create a share.
Click Add to set the access permission of the NFS share.
Note: If the NFS software package is not installed, install it before performing subsequent
operations.
Check that the HyperCDP object has been created and its health status is Normal.
Configuration Policy: Set Fixed Period to every 1 minute and Retained Objects to 60.
Click Next.
Click OK.
In the Roll Back to HyperCDP Object dialog box that is displayed on the right, click OK.
The rollback progress is displayed. After the rollback is complete, the rollback progress is
displayed as --.
The Add Member page is displayed. In the Available File Systems area, select FS001,
add it to the right pane, and click OK.
The Remove Member page is displayed. In the Available File Systems area, select
FS001, and add it to the right pane. Click OK.
HCIP-Storage
HyperMetro Feature Configuration
Lab Guide
ISSUE: 5.5
Overview
This lab guide provides HyperMetro feature configuration practices to help trainees
consolidate and review previously learned content. Upon completion of this course, you
will be able to get familiar with the configuration process and operations related to
HyperMetro for file systems.
Description
This lab guide introduces one lab practice to simulate HyperMetro configuration for file
systems of Huawei flash storage.
⚫ Lab practice 1: Use HyperMetro to implement an active-active solution for file
systems.
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.
1. Product documentation
2. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or tool.
3. eStor
4. WinSCP
It is used for transferring files between Windows and Linux OSs. WinSCP is recommended.
You can select other similar tools as required.
2.1 Introduction
2.1.1 About This Lab Practice
This lab practice is about local high availability (HA) and disaster recovery (DR) for data
centers where HyperMetro is deployed.
In the lab practice, file systems are configured on the two OceanStor 6.1.x storage
systems and mounted to local hosts. Local HA is implemented using the HyperMetro
feature.
2.1.2 Objectives
⚫ Understand the basic principles of HyperMetro DR for flash storage.
⚫ Master the HyperMetro DR networking architecture for flash storage.
⚫ Master the procedures and methods to configure HyperMetro DR for flash storage.
⚫ Master the operation and configuration methods of HyperMetro DR for flash storage.
The preceding figure shows the lab environment networking for the flash storage
HyperMetro feature, where DCA-Host and DCA-Storage are deployed in data center A
(DCA) and DCB-Host and DCB-Storage are deployed in data center B (DCB). DCA and
DCB work as a backup to each other, and a quorum server QuorumServer is deployed
between them.
The storage device can be OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor.
Device Configurations
To meet the requirements of HCIP-Storage lab practices, you are advised to use the
following configurations in each lab environment.
Network Planning
The following table lists the network planning for this lab practice.
Storage IP Address | Description
192.168.3.30 | Replication
192.168.4.30 | Arbitration
192.168.3.40 | Replication
192.168.4.40 | Arbitration
The preceding information is for reference only. Before performing the lab practice, contact the
administrator to access or configure devices to meet the actual requirements.
1. On DeviceManager of the storage device in DCA, check whether a license file has
been imported and whether SmartQuota and NAS Foundation are displayed in the
feature list.
2. On the navigation bar, choose Settings > License Management.
3. In the middle function pane, check whether SmartQuota and NAS Foundation exist
in the Feature column.
To ensure that the application servers can use the storage space of the storage systems,
log in to the storage device in DCA, and create a storage pool named StoragePool001.
1. Choose System > Storage Pools.
2. Click Create.
3. Set the storage pool parameters.
4. Click OK.
6. Click OK.
A dtree is created to manage the space used by all files in a directory and the access
permission of the directory.
1. Choose Services > File Service > Dtrees.
2. Select Tenant-B to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
3. Click Create.
4. Set the dtree name to Dtree001.
5. Click OK.
6. Confirm the information in the dialog box and select I have read and understand
the consequences associated with performing this operation.
7. Click OK.
8. Click OK.
3. Configure logical port parameters as follows: Set Name to FSLP-AA, Role to Service,
Data Protocol to NFS, IP Address to 192.168.2.30, and Home Port to
CTE0.A.IOM4.P1. In Activation Status, select Activate.
4. Click OK.
1. Choose Services > File Service > Shares > NFS Shares.
2. Select Tenant-B to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
3. Click Create.
4. Set basic parameters of the NFS share as follows: Set File System to FileSystem001
and Dtree to Dtree001.
5. Click OK.
1. Choose Services > File Service > Shares > NFS Shares.
2. Select Tenant-B to which the desired NFS share belongs from the vStore drop-
down list in the upper left corner.
3. Click More on the right of the desired NFS share and select Add Client.
4. Set client attributes of DCA-Host as follows: Enter 192.168.2.33 in the Client text
box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.
Create a test file a.txt (containing, for example, "Hello Huawei!") in the mounted share directory.
Log in to the command-line interface (CLI) of the system and switch to user root.
1. Use the vim tool to edit the network configuration file and modify the network
information.
vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Modify the configuration of network port eth0 with BOOTPROTO set to static and
ONBOOT set to yes.
BOOTPROTO=static
ONBOOT=yes
IPADDR="192.168.0.70"
PREFIX="24"
GATEWAY="192.168.0.1"
Similarly, modify the configuration of the network port connected to the arbitration network:
BOOTPROTO=static
ONBOOT=yes
IPADDR="192.168.4.70"
PREFIX="24"
GATEWAY="192.168.4.1"
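Restart the network service for the changes to take effect (a sketch, assuming a CentOS 7-style network service):
systemctl restart network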
1. Enable the firewall function and set the port number allowed by the firewall to
30002.
success
2. Restart the firewall service and check whether the firewall configuration takes effect.
yes
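On a CentOS quorum server this is typically done with firewalld (a sketch, assuming firewalld is the active firewall; the success and yes outputs above correspond to commands of this form):
firewall-cmd --zone=public --permanent --add-port=30002/tcp
firewall-cmd --reload
firewall-cmd --zone=public --query-port=30002/tcp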
unzip OceanStor_QuorumServer_6.1.5_X86_64.zip
cd package
sh ./quorum_server.sh -install
Verify the QuorumServer existence.
The QuorumServer is not installed.
The current user is the root user. A quorum server administrator account needs to be provided.
Continue to install?
<Y|N>:Y
qsadmin
start main!
Waiting for connecting to server...
admin:/>
1. In the CLI of the quorum server software, run the add server_ip command to add all
IP addresses and port ID of the quorum server to the quorum server software for
management.
admin:/>show server_ip
1. Log in to the quorum server and run the qsadmin command to go to the CLI of the
quorum server software.
2. On the CLI, run the change white_list enable_switch=no command to disable the
whitelist.
2. Select HyperMetro arbitration certificate and click Export Request File. Set
Certificate Key Algorithm to RSA 2048.
5. On the CLI of the quorum server, run the generate tls_cert csr=? [days=?]
[cert_name=?] [sign_algorithm=?] command to issue certificates.
This command will generate a certificate file (with the same name as the certificate
request file but with the extension of .crt) and a CA file cps_ca.crt.
admin:/>quit
Logout.
[root@qs export_import]# ls
cps_ca.crt DCA-Storage.csr DCA-Storage.csr.crt
8. Delete the certificate file from the quorum server and repeat the preceding steps to
generate the certificate to be imported to DCB-Storage.
9. In the CLI of the arbitration software, run the export tls_cert command to export the
device information. The qs_certreq.csr file is generated in the
/opt/quorum_server/export_import directory of the quorum server.
admin:/>export tls_cert
Command executed successfully.
10. On the CLI of the quorum server, run the generate tls_cert csr=? [days=?]
[cert_name=?] [sign_algorithm=?] command to issue certificates.
11. On the CLI of the quorum server software, run the import tls_cert cert_name=?
ca=? cert=? [private_key=?] [class=?] command to import the certificates.
5. Click OK.
6. Click Close.
5. Click OK.
6. Create another logical port as follows: Set Name to AA-2, Role to Replication, IP
Address Type to IPv4, IP Address to 192.168.3.31, Subnet Mask to 255.255.255.0,
Port Type to Ethernet port, and Home Port to CTE0.B.IOM2.P2. Then, click OK.
7. Log in to DeviceManager of the remote device and repeat the preceding steps to
create logical ports for remote replication.
⚫ Name: AA-1; Role: Replication; IP Address Type: IPv4; IP Address: 192.168.3.40;
Subnet Mask: 255.255.255.0; Port Type: Ethernet port; Home Port:
CTE0.A.IOM2.P2
⚫ Name: AA-2; Role: Replication; IP Address Type: IPv4; IP Address: 192.168.3.41;
Subnet Mask: 255.255.255.0; Port Type: Ethernet port; Home Port:
CTE0.B.IOM2.P2
5. Click Connect.
7. Click Close to return to the Create HyperMetro Domain page, select the new
quorum server, and click OK.
1. Choose Data Protection > Protection Entities > File Systems > HyperMetro Pairs.
2. Select the vStore to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
3. Click Create. The Create HyperMetro Pair page is displayed on the right.
4. Select the vStore to which the desired file system belongs from the vStore drop-
down list in the upper left corner.
5. In the Available File Systems area, select one or more file systems based on service
requirements.
6. Specify Remote Storage Pool for creating a remote file system.
The system will create a remote file system on the remote device of the HyperMetro
vStore pair and create a HyperMetro pair for the local and remote file system.
7. Click OK.
8. Click Close.
6. Choose Services > File Service > Shares and click the displayed share path to view
details.
5. Click OK.
2. Run the showmount -e 192.168.2.40 command to view available NFS shares of the
storage system.
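The share can then be mounted to the local /mnt directory (a sketch; the share path /FileSystem001/Dtree001 follows the planning used in this guide):
mount -t nfs 192.168.2.40:/FileSystem001/Dtree001 /mnt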
5. Run the following commands to go to the /mnt directory and view the file:
cd /mnt
cat a.txt
Hello Huawei!
2.5 Quiz
Question:
What are the deployment scenarios of HyperMetro for file services?
Answer:
There are two deployment scenarios: single-DC deployment and cross-DC deployment.
1. Single-DC deployment
In this scenario, the storage systems are deployed in two equipment rooms in the same
DC.
Hosts communicate with storage systems through a switched fabric (IP switches for NAS
file systems). HyperMetro replication links are deployed between the storage systems to
ensure continuous operation of services.
2. Cross-DC deployment
In this scenario, the storage systems are deployed in two DCs in the same city or in two
cities located in close proximity. The distance between the two DCs is within 300 km.
Both of the DCs can handle service requests concurrently, thereby accelerating service
response and improving resource utilization. If one DC fails, its services are automatically
switched to the other DC.
In cross-DC deployment scenarios involving long-distance transmission, dense wavelength
division multiplexing (DWDM) devices must be used to ensure a short transmission
latency. In addition, HyperMetro replication links must be deployed between the active-
active storage systems for data synchronization.
Huawei Storage Certification Training
HCIP-Storage
HyperReplication Feature Configuration
Lab Guide
ISSUE: 5.5
Overview
This lab guide provides HyperReplication (remote replication) feature configuration
practice to help trainees consolidate and review previously learned content. Upon
completion of this lab guide, you will be familiar with the configuration process and
operations related to HyperReplication for file systems.
Description
This lab guide introduces the following lab practice to simulate HyperReplication
configuration for file systems in storage project delivery:
⚫ Lab practice 1: Use HyperReplication to implement remote replication of file systems.
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary depending on the product version.
1. Product documentation
2. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or
tool.
3. eStor
4. WinSCP
It is recommended that you use WinSCP to transfer files between devices running
Windows and Linux OSs. However, you can also use other similar tools to do so.
2 HyperReplication Feature Configuration
2.1 Introduction
2.1.1 About This Lab Practice
Substantial progress has been made in the digital transformation of various industries,
and data is increasingly becoming the core component of business operations. As a
result, enterprises have a growing need for more stable data storage systems. Storage
systems that support file systems are also being widely applied in various industries.
Many enterprises have adopted highly stable storage systems. However, these storage
systems may still suffer unrecoverable damage caused by natural disasters. Remote
disaster recovery (DR) solutions have been developed to ensure continuous data access
and high recoverability and availability; remote replication for file systems is one of
their critical technologies. This lab guide describes how to configure the
HyperReplication (remote replication) feature.
2.1.2 Objectives
⚫ Understand the technical principles of HyperReplication.
⚫ Master the networking architecture of HyperReplication.
⚫ Master the procedures and methods of configuring HyperReplication.
The preceding figure shows the networking topology of the active-passive DR lab
environment for flash storage. DCA serves as the production center and DCB as the DR
center. DCA-Host and DCA-Storage are deployed in the production center and DCB-Host
and DCB-Storage are deployed in the DR center.
The storage device can be OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor.
Device Configurations
You are advised to use the following configurations in each lab environment when
performing the HCIP-Storage lab practice.
Device Type | Model | Quantity | Earliest Software Version
Flash storage | OceanStor 6.1.x, OceanStor Dorado 6.1.x, or eStor | 2 | 6.1.5
Switch | -- | -- | --
Network Planning
The following table lists the network planning for this lab practice.
Site | Device | Management IP Address | Storage IP Address | Description
-- | -- | -- | 192.168.4.30 | Arbitration
-- | -- | -- | 192.168.4.40 | Arbitration
The preceding information is for reference only. Before performing the lab practice,
contact the administrator to access or configure devices to meet the actual requirements.
To ensure that the application servers can use the storage space of storage systems, log
in to the storage system in DCA and create a storage pool named StoragePool001.
Choose System > Storage Pools, click Create, and set parameters for the storage pool.
Click OK.
Click OK.
A dtree is created to manage the space used by all files in a directory and the access
permission of the directory.
Choose Services > File Service > Dtrees.
Select Tenant-B to which the desired file system belongs from the vStore drop-down list
in the upper left corner.
Click Create.
Set the dtree name to Dtree001.
Click OK.
Click OK.
Confirm the information in the dialog box and select I have read and understand the
consequences associated with performing this operation.
Click OK.
Click OK.
Choose Services > File Service > Shares > NFS Shares.
Select Tenant-B to which the desired file system belongs from the vStore drop-down list
in the upper left corner.
Click Create.
Set basic parameters of the NFS share as follows: Set File System to FileSystem001 and
Dtree to Dtree001.
Click OK.
Choose Services > File Service > Shares > NFS Shares.
Select Tenant-B to which the desired NFS share belongs from the vStore drop-down list
in the upper left corner.
Click More on the right of the desired NFS share and choose Add Client.
Set client information as follows: Enter the IP address 192.168.2.33 of DCA-Host in the
Clients text box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.
Create a test file a.txt (containing, for example, "Hello Huawei!") in the mounted share directory.
Log in to DeviceManager of the remote device DCB-Storage and repeat the preceding
steps to create logical ports for remote replication.
⚫ Set Name to REP-1, Role to Replication, IP Address Type to IPv4, IP Address to
192.168.3.40, Subnet Mask to 255.255.255.0, Port Type to Ethernet port, and
Home Port to CTE0.A.IOM4.P2.
⚫ Set Name to REP-2, Role to Replication, IP Address Type to IPv4, IP Address to
192.168.3.41, Subnet Mask to 255.255.255.0, Port Type to Ethernet port, and
Home Port to CTE0.B.IOM4.P2.
Click OK.
Click Close.
Click Connect.
Choose Data Protection > Protection Entities > vStores > Remote Replication vStore
Pairs.
Click Create.
The Create Remote Replication vStore Pair page is displayed on the right.
Select the local vStore Tenant-B for which you want to create a remote replication
vStore pair.
From the Remote Device drop-down list, select the remote device DCB-Storage with
which you want to create a remote replication vStore pair.
Set Pair Creation to Automatic.
Select Synchronize Share and Authentication and Synchronize Network
Configuration.
Click Close.
Choose Data Protection > Protection Entities > File Systems > Remote Replication
Pairs.
Click Create. The Create Remote Replication Pair page is displayed on the right.
In the Available File Systems area, select the file system for which you want to create a
remote replication pair and add it to the Selected File Systems area.
Set Synchronize Configuration to Yes, Pair Creation to Automatic, and Sync Type to
Manual. Retain the default values for other parameters.
Confirm information about the created remote replication pair and click OK.
Click Close.
Log in to DeviceManager of the storage system at the primary site and choose Data
Protection > Protection Entities > File Systems > Remote Replication Pairs. Select
Tenant-B from the vStore drop-down list in the upper left corner. Select local resource
FileSystem001.
In the Synchronize Remote Replication Pair dialog box that is displayed, select I have
read and understand the consequences associated with performing this operation
and click OK.
After the manual synchronization is complete, choose Data Protection > Protection
Entities > vStores > Remote Replication vStore Pairs and select the remote replication
vStore pair Tenant-B.
Click Split.
In the Split Remote Replication vStore Pair dialog box that is displayed, select I have
read and understand the consequences associated with performing this operation
and click OK.
Click OK.
Choose Data Protection > Protection Entities > File Systems > Remote Replication
Pairs and select FileSystem001.
Click OK.
Choose Services > File Service > Shares > NFS Shares, select Tenant-B, and click
More > Add Client on the right of share path /FileSystem001/Dtree001.
Set client information as follows: Enter the IP address 192.168.2.43 of DCB-Host in the
Clients text box, set UNIX Permission to Read-write and root Permission Constraint to
no_root_squash, and retain the default values for other parameters.
Click OK.
On the right of Tenant-B, click More, and select Disable Logical Port.
In the Disable Logical Port dialog box that is displayed, select I have read and
understand the consequences associated with performing this operation and click
OK.
On the right of Tenant-B, click More, and select Enable Logical Port.
In the Enable Logical Port dialog box that is displayed, select I have read and
understand the consequences associated with performing this operation and click
OK.
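If the NFS share has not been mounted on DCB-Host yet, mount it first (a sketch; replace <logical_port_ip> with the IP address of the enabled logical port of Tenant-B):
showmount -e <logical_port_ip>
mount -t nfs <logical_port_ip>:/FileSystem001/Dtree001 /mnt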
Run the following commands to go to the /mnt directory and view the file:
cd /mnt
cat a.txt
Hello Huawei!
2.4 Quiz
Question:
After creating a remote replication vStore pair and a remote replication pair, can I
perform operations on the remote replication pair?
Answer:
If the secondary resource protection status and data synchronization direction of the
remote replication vStore pair are the same as those of the remote replication pair, you
are not allowed to perform a primary/secondary switchover or disable or enable
secondary resource protection for the remote replication pair. You are advised to perform
the operations on the remote replication vStore pair.
If the secondary resource protection status or data synchronization direction of the
remote replication vStore pair is inconsistent with that of the remote replication pair, you
can perform related operations on the remote replication pair to ensure that the status
of the remote replication vStore pair is consistent with that of the remote replication pair.
After you synchronize, split, perform a primary/secondary switchover for, or
disable/enable secondary resource protection for a remote replication vStore pair, you
can view the result of these operations for the remote replication vStore pair on
DeviceManager. The remote replication pair runs in the background. You can run the
following command to check the running status of the remote replication pair and
ensure that the operations for the remote replication pair are complete:
HCIP-Storage
Storage System Performance Tuning
Lab Guide
ISSUE: 5.5
Overview
This course uses Huawei flash storage products as an example to describe operations
related to storage performance tuning. Performance assurance is critical for storage
products: they play a vital role in the IT system, storing historical service data and
supporting the normal running of services. Storage performance assurance and
optimization are therefore necessary for normal operations and directly affect an
enterprise's operating status.
Based on the performance assurance operations of storage products, this course classifies
the operations that are frequently performed during performance tuning into tasks, and
deepens the understanding through analysis and discussion, operation drills, and
summarization.
Description
This lab guide consists of three lab practices covering installation and use of storage
system performance monitoring tools, performance problem locating, and usage of
performance test tools.
⚫ Lab practice 1: storage system performance monitoring
⚫ Lab practice 2: storage system performance problem locating
⚫ Lab practice 3: storage system performance test
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.
1. Product documentation
2. PuTTY
Use the open-source software PuTTY to log in to a terminal. You can use the common
domain name (putty.org) of PuTTY to browse or download the desired document or tool.
3. WinSCP
Use WinSCP to transfer files between Windows and Linux OSs. You can use the common
domain name (winscp.net) of WinSCP to browse or download the desired document or tool.
4. Iometer
Use the open-source software Iometer to test disk performance. You can use the common
domain name (iometer.org) of Iometer to browse or download the corresponding document or tool.
5. eStor
2.1 Introduction
2.1.1 About This Lab Practice
A company purchases a Huawei all-flash storage device to run multiple services.
According to the plan, LUNs will be created on storage device DCA-Storage in data
center DCA and separately mapped to Linux host DCA-Host and Windows host DCA-
Win. NAS shares will be created and separately mounted to Linux host DCA-Host and
Windows host DCA-Win. DCA-Host carries Oracle database services, while DCA-Win
carries SQL Server database services. To ensure high performance, you need to check
whether performance bottlenecks exist on the host or storage device and check the
performance parameters or features of the storage device.
2.1.2 Objectives
After the lab practice, you will be able to complete:
⚫ Performance monitoring
⚫ Performance problem locating
⚫ Performance test
⚫ Networking topology
Note: The IP addresses used in this lab practice are only for reference. The actual IP
addresses are subject to the lab environment plan.
⚫ Environment initialization
Initialize the environment according to the following table.
Create and map LUNs to Linux and Windows service hosts.
Install Huawei UltraPath on the hosts.
Mount the LUNs to the hosts and format the LUNs.
LUN Name | LUN Group | LUN Size | Host to Which a LUN Is Mapped | Description
N/A | FileSystem001 | 5 GB | Linux | The share is mounted to /mnt/fs.
share_Win | FileSystem002 | 5 GB | DCA-Win | The share is mounted to Z:\.
2.2 Quiz
What metrics are used for performance measurement? Describe the service scenarios
related to each metric.
[Suggested answer]
IOPS: This metric is typically used to measure system performance in application
scenarios such as online transaction processing (OLTP) services and SPC-1 authentication.
To view the CPU information of a Linux host, run the cat /proc/cpuinfo command.
mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt
xsavec xgetbv1 arat md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa
itlb_multihit mmio_stale_data retbleed gds
bogomips : 6000.00
clflush size : 64
cache_alignment : 64
address sizes : 42 bits physical, 48 bits virtual
power management:
...
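The per-core usage summary below is typical output of mpstat from the sysstat package (an assumption based on the column headers; the exact command used may differ):
mpstat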
03:02:10 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
03:02:10 PM  all   1.07   0.00  0.22     0.03  0.02   0.01    0.00    0.00    0.00  98.64
perf is a performance analysis tool provided by the Linux kernel. It can be used to
perform statistical analysis on CPU performance.
You can run the perf top command to view the processes and function calls with high
CPU usage in the system.
According to the free -h command output, the free memory is 14 GiB and the buffer for
storing the data to be output to disks is 450 MiB.
You can run the following commands to release the data occupied by the system cache
and check the memory usage again:
sync
echo 3 > /proc/sys/vm/drop_caches
[root@linux cpu]# sync
[root@linux cpu]# echo 3 > /proc/sys/vm/drop_caches
[root@linux cpu]# free -h
total used free shared buff/cache available
Mem: 15Gi 481Mi 14Gi 10Mi 102Mi 14Gi
Swap: 4.0Gi 0B 4.0Gi
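The block device statistics below are typical output of iostat -x from the sysstat package (an assumption based on the column headers; the exact command used may differ):
iostat -x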
Device:  rrqm/s  wrqm/s   r/s   w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz   await  r_await  w_await  svctm  %util
sda        0.00    0.98  1.04  0.89   0.05   0.01     62.07      0.00    0.88     0.97     0.77   0.31   0.06
up-0       0.00    1.75  0.01  0.02   0.00   0.23  13695.71      0.00  122.47     7.06   209.35   5.12   0.02
up-1       0.00    1.76  0.02  0.02   0.00   0.28  12956.74      0.00   65.09     3.20   137.30   2.57   0.01
up-2       0.00    0.03  0.01  0.00   0.00   0.22  31816.18      0.00   13.11     3.96    34.69   3.02   0.00
up-3       0.00    1.75  0.00  0.02   0.00   0.28  30202.79      0.00  196.53     3.25   200.98   2.80   0.01
sdb        0.00    0.00  0.02  0.02   0.00   0.01    334.12      0.00   90.62     5.81   178.49   4.25   0.02
sdc        0.00    0.00  0.02  0.04   0.00   0.01    475.53      0.00  104.45     3.20   167.52   1.97   0.01
avgrq-sz: indicates the average size (in 512-byte sectors) of the requests that were issued
to the device.
avgqu-sz: indicates the average queue length of the requests that were issued to the
device.
await: indicates the average time (in milliseconds) for I/O requests issued to the device
to be served. This includes the time spent by the requests in queue and the time spent
servicing them.
r_await: indicates the average time required for each read operation, including not only
the read operation time of the disk but also the time in the kernel queue.
w_await: indicates the average time required for each write operation, including not only
the write operation time of the disk but also the time in the kernel queue.
To display information about a specified disk, run the following command:
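For example (an illustrative invocation; the device name and sampling interval are placeholders):
iostat -x sda 1 5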
Step 1 Check whether the memory usage and CPU usage of the host exceed the
corresponding thresholds.
Open the Windows task manager, click the Performance tab, and check the memory
usage and CPU usage of the host.
After the performance counter is added, you can view the corresponding chart.
Resource Monitor in Windows Server provides detailed information about the real-time
performance of the server. Resource Monitor monitors the usage and performance of
CPUs, disks, networks, and memory resources in real time. It helps you identify and
resolve resource conflicts and bottlenecks.
In the Start Menu search box, type Resource Monitor. Start Resource Monitor.
Click Maintenance.
This operation enables you to configure the monitoring status and whether to save
monitoring files into the storage system.
Log in to DeviceManager. Choose Settings > Monitoring Settings to switch to the
Monitoring Settings page.
If the current value of the Sampling Interval of Real-Time Statistics parameter is less
than the value listed in the preceding table, the storage system automatically adjusts
the parameter to the value in the table.
This operation enables you to configure alarm thresholds and alarm clearance thresholds
for performance metrics of specific objects (such as controllers and front-end ports) in a
storage device. If the performance metric value is above (or below) the alarm threshold,
DeviceManager sends an alarm notifying you to check and handle the device fault. If
the performance metric value is above (or below) the alarm clearance threshold,
DeviceManager clears the alarm and moves it to the historical alarm list.
Log in to DeviceManager. Choose Settings > Monitoring Settings to switch to the
Monitoring Settings page.
You can create a metric chart to view metrics involved in services that you are concerned
about.
Log in to DeviceManager. Choose Insight > Performance > Analysis to create a metric
chart.
Click Create. The Create Chart dialog box is displayed.
Configure the metric chart as prompted.
Set the basic information, monitored object, statistical metric, and chart display mode of
the chart.
Select Avg. CPU Usage (%) when creating a metric chart for a controller. After the
metric chart is created, you can view it.
Metric chart
On the CLI, run show performance controller to check the CPU usage of the controller
with controller_id of 0A.
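A typical invocation (the parameter form follows the text above; verify it against your CLI reference):
show performance controller controller_id=0A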
View the CPU usage of controllers by entering the corresponding number. Press Q to exit.
CPU Usage(%) : 16
CPU Usage(%) : 16
CPU Usage(%) : 17
CPU Usage(%) : 16
To view performance metrics, you can create a metric chart for the controller as
prompted on the Analysis page.
Metric chart
On the CLI, run the show port general command to query port information.
ID              Health Status  Running Status  Type       IPv4 Address  IPv6 Address  MAC                Role         Working Rate(Mbps)  Enabled  Max Speed(Mbps)  Number Of Initiators
CTE0.A.IOM4.P0  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:84:8d  INI and TGT  56000               Yes      1000             0
CTE0.A.IOM4.P1  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:85:8d  INI and TGT  56000               Yes      1000             0
CTE0.A.IOM4.P2  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:86:8d  INI and TGT  56000               Yes      1000             0
CTE0.A.IOM4.P3  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:87:8d  INI and TGT  56000               Yes      1000             0
CTE0.B.IOM4.P0  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:84:8e  INI and TGT  56000               Yes      1000             0
CTE0.B.IOM4.P1  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:85:8e  INI and TGT  56000               Yes      1000             0
CTE0.B.IOM4.P2  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:86:8e  INI and TGT  56000               Yes      1000             0
CTE0.B.IOM4.P3  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:87:8e  INI and TGT  56000               Yes      1000             0
Run the show performance port command to query information of the port with the
corresponding port_id. Then, enter the number for the metric you want to query. To exit,
enter Q.
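A typical invocation (the port_id value is an example from this environment):
show performance port port_id=CTE0.A.IOM4.P0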
88.IO Average Response Time For SMB(us)            89.Read IO Average Response Time For NFS V3(us)
90.Read IO Average Response Time For NFS V4(us)    91.Read IO Average Response Time For NFS(us)
92.Read IO Average Response Time For SMB1(us)      93.Read IO Average Response Time For SMB2(us)
94.Read IO Average Response Time For SMB(us)       95.Write IO Average Response Time For NFS V3(us)
96.Write IO Average Response Time For NFS V4(us)   97.Write IO Average Response Time For NFS(us)
98.Write IO Average Response Time For SMB1(us)     99.Write IO Average Response Time For SMB2(us)
100.Write IO Average Response Time For SMB(us)     101.Other IO Average Response Time For NFS V3(us)
102.Other IO Average Response Time For NFS V4(us)  103.Other IO Average Response Time For NFS(us)
104.Other IO Average Response Time For SMB1(us)    105.Other IO Average Response Time For SMB2(us)
106.Other IO Average Response Time For SMB(us)     107.Avg. Write I/O Link Transmission Delay(us)
Input item(s) number separated by comma:
On a Linux host, run cat /proc/cpuinfo |grep -i Mhz to view the CPU frequency.
To check the CPU frequency of a Windows host, you can use the DirectX Diagnostic Tool.
Choose Start > Run. In the Run dialog box, enter DXDIAG and click OK to check whether
the CPU frequency decreases.
Change the host running mode to the high-performance mode to ensure that the CPU
frequency does not decrease.
Operation path: Start > Control Panel > System and Security > Power Options >
Choose or customize a power plan > High performance
[Task requirements]
/dev/sda1 configured for the Linux host is a solid state disk (SSD). Select a proper I/O
scheduling algorithm and provide the configuration.
[Procedure]
The Linux OS supports four block device scheduling algorithms: noop, anticipatory,
deadline, and cfq. You can configure the related scheduling algorithm in the
/sys/block/sda1/queue/scheduler file. (In this task, /dev/sda1 needs to be configured
according to the actual drive letter.)
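A sketch of the configuration (assuming the noop scheduler for an SSD; on most kernels the scheduler file lives under the whole device, for example sda, rather than the partition sda1):
cat /sys/block/sda/queue/scheduler
echo noop > /sys/block/sda/queue/scheduler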
[Task requirements]
Increase the number of /dev/sda1 disk queues on the Linux host to 1024 and provide the
configuration.
[Procedure]
The queue depth determines the maximum number of concurrent I/Os written to the
block device. In Linux, the default value is 128. Do not change the value unless absolutely
necessary. In the event of testing the highest system performance, you can set the queue
depth to a larger value by modifying the /sys/block/sda1/queue/nr_requests file to
increase the I/O write pressure and the probability of combining I/Os in the queue. (In
this lab practice, /dev/sda1 is used and can be replaced based on actual requirements.)
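A sketch (with the same caveat about the device path; verify the current value first):
cat /sys/block/sda/queue/nr_requests
echo 1024 > /sys/block/sda/queue/nr_requests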
[Task requirements]
1. To improve the memory access performance of the Linux host, try to avoid using the
swap partition to reduce the possibility of hard page faults. Provide the related
configuration.
2. To reduce dirty pages in the memory and prevent unexpected data loss, ensure that
the proportion of dirty pages in the memory does not exceed 5%. Provide the related
configuration.
[Procedure]
1. Modify the vm.swappiness parameter in the /etc/sysctl.conf configuration file to
minimize use of the swap partition and reduce the possibility of hard page faults.
vi /etc/sysctl.conf
vm.swappiness = 0
echo 5 >/proc/sys/vm/dirty_ratio
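To make the dirty-page limit persistent across reboots as well, it can be added to /etc/sysctl.conf and reloaded (standard sysctl usage; the echo above takes effect immediately but is lost on reboot):
vi /etc/sysctl.conf
vm.dirty_ratio = 5
sysctl -p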
[Task requirements]
Modify the kernel parameters according to the following requirements:
1. Change the maximum buffer for receiving TCP data to 513920.
2. Change the maximum buffer for transmitting TCP data to 513920.
3. Change the default FIN_TIMEOUT value to 30s.
4. Change the maximum number of kernel connections to 65535.
5. Change the maximum number of NIC queues to 30000.
6. Increase the maximum number of connections waiting for the client to establish to
20000.
7. Set the range of ports that can be opened by the system to 1024 to 65000.
[Procedure]
Add the following content to the /etc/sysctl.conf configuration file:
1. net.core.rmem_max: indicates the maximum buffer for receiving TCP data.
2. net.core.wmem_max: indicates the maximum buffer for transmitting TCP data.
3. net.ipv4.tcp_fin_timeout: indicates the default FIN_TIMEOUT time.
4. net.core.somaxconn: indicates the maximum kernel listen backlog.
5. net.core.netdev_max_backlog: indicates the maximum length of the NIC receive queue.
6. net.ipv4.tcp_max_syn_backlog: indicates the maximum number of half-open
connections waiting for the client to complete the handshake.
7. net.ipv4.ip_local_port_range: indicates the range of ports that can be opened by the
system.
vi /etc/sysctl.conf
net.core.rmem_max = 513920
net.core.wmem_max = 513920
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_max_syn_backlog = 20000
net.ipv4.ip_local_port_range = 1024 65000
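Reload the kernel parameters and spot-check a value to confirm that the settings took effect:
sysctl -p
sysctl net.core.somaxconn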
On a Windows host, open UltraPath Console and check whether links are normal and
whether owning controllers are working properly.
On a Linux host, run the upadmin show path command to check whether physical paths
are normal.
On a Linux host, run the upadmin show upconfig command to query the multipathing
parameter settings.
Advanced Configuration
Io Retry Times : 10
Io Retry Delay : 0
Faulty path check interval : 10
Idle path check interval : 60
Failback Delay Time : 60
Io Suspension Time : 60
Max io retry timeout : 1800
Performance Record : off
Pending delete period of obsolete paths : 28800
HyperMetro configuration
HyperMetro Primary Array SN : Not configured
HyperMetro WorkingMode : read write within primary array
HyperMetro Split Size : 128MB
HyperMetro Load Balance Mode : round-robin
When the CPU usage is high, the latency of system scheduling increases. As a result, the
I/O latency increases.
The CPU usage of a storage system is closely related to and varies with I/O models and
networking modes. To query the CPU usage of the current controller, use DeviceManager
or run the CLI command.
Performance monitoring using DeviceManager
Navigation path: Insight > Performance > Analysis. Select Avg. CPU Usage (%) when
creating a metric chart for a controller.
On the CLI, run the show performance controller command to check the CPU usage of
controllers. The command lists the available metrics by number; enter the number of the
desired metric (54, corresponding to CPU usage, in this example) to display its current
value.
...
95.Total Block Bandwidth (MB/S)
Input item(s) number separated by comma:54
CPU Usage(%) : 15
Before analyzing the performance of front-end host ports, confirm the positions of
interface modules and the number, working status, and speeds of connected ports.
You can use DeviceManager or the CLI to query information about front-end host ports.
Use DeviceManager to query information about front-end ports.
On the CLI, you can run the show port general command to query information about
front-end ports.
ID              Health Status  Running Status  Type       IPv4 Address  IPv6 Address  MAC                Role         Working Rate(Mbps)  Enabled  Max Speed(Mbps)  Number Of Initiators
--------------  -------------  --------------  ---------  ------------  ------------  -----------------  -----------  ------------------  -------  ---------------  --------------------
CTE0.A.IOM4.P0  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:84:8d  INI and TGT  56000               Yes      1000             2
CTE0.A.IOM4.P1  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:85:8d  INI and TGT  56000               Yes      1000             0
CTE0.A.IOM4.P2  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:86:8d  INI and TGT  56000               Yes      1000             0
CTE0.A.IOM4.P3  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:87:8d  INI and TGT  56000               Yes      1000             0
CTE0.B.IOM4.P0  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:84:8e  INI and TGT  56000               Yes      1000             2
CTE0.B.IOM4.P1  Normal         Link Up         Host Port  --            --            fa:16:3e:2b:85:8e  INI and TGT  56000               Yes      1000             0
Run the CLI command to obtain the approximate number of front-end concurrent tasks.
This method applies to scenarios where the I/O pressure fluctuates. You can run the show
controller io io_type=frontEnd controller_id=XX command to query the front-end
concurrent I/O tasks delivered to a specified controller. Run this command multiple times
and use a stable value as the approximate number of front-end concurrent tasks. XX
indicates the controller ID.
Controller Id   : 0A
Front End IO    : 0
Front End Limit : 17408
Step 4 Check whether the front-end host ports experience bit errors.
Run the show port bit_error command to view the bit errors of front-end ports.
PCIE port:
RDMA port:
ID              Error Packets  Lost Packets  Over Flowed Packets  CRC Errors  Frame Errors  Frame Length Errors
--------------  -------------  ------------  -------------------  ----------  ------------  -------------------
CTE0.A.IOM2.P0  0              0             0                    0           0             0
CTE0.A.IOM2.P1  0              0             0                    0           0             0
CTE0.A.IOM2.P2  0              0             0                    0           0             0
CTE0.A.IOM2.P3  0              0             0                    0           0             0
CTE0.B.IOM2.P0  0              0             0                    0           0             0
CTE0.B.IOM2.P1  0              0             0                    0           0             0
CTE0.B.IOM2.P2  0              0             0                    0           0             0
CTE0.B.IOM2.P3  0              0             0                    0           0             0
RoCE port:
The cache is a key module for improving performance and user experience. When
analyzing cache performance, pay attention to the impact of the cache configuration on
write performance.
If the cache write policy is write-back, a write success message is returned to the host as
soon as each write I/O arrives at the cache. Then, the cache sorts and combines data
before writing to disks.
If problems such as a backup battery unit (BBU) fault, a single-controller state, controller
overheating, or an excessive number of LUN fault pages occur during service running, the
LUN health status may switch to write protection. In this state, data can still be read but
can no longer be written; write protection prevents the data already on the disks from
being modified.
You can query the properties of a LUN on DeviceManager or the CLI and obtain the
current LUN health status and cache write policy.
To query the information on DeviceManager, view the LUN's properties.
To query the information on the CLI, run the show lun general command.
ID :1
Name : LUN_OLTP_Linux
Pool ID :0
Capacity : 4.000GB
Subscribed Capacity : 2.203MB
Protection Capacity : 0.000B
Sector Size : 512.000B
Health Status : Normal
Running Status : Online
Type : Thin
IO Priority : Low
WWN : 6030000100040506137346a400000001
Exposed To Initiator : Yes
Data Distributing : --
Write Policy : Write Back
Running Write Policy : Write Back
Prefetch Policy : None
Read Cache Policy : --
Write Cache Policy : --
Cache Partition ID : --
Prefetch Value : --
Owner Controller : --
Work Controller : --
Snapshot ID(s) : --
LUN Copy ID(s) : --
Remote Replication ID(s) : --
Split Clone ID(s) : --
Relocation Policy : --
Initial Distribute Policy : --
SmartQoS Policy ID : --
Protection Duration(days) : --
Has Protected For(h) : --
Estimated Data To Move To Tier0 : --
Estimated Data To Move To Tier1 : --
Estimated Data To Move To Tier2 : --
Is Add To Lun Group : Yes
Smart Cache Partition ID : --
DIF Switch : No
Remote LUN WWN : --
Disk Location : Internal
LUN Migration : --
Progress(%) : --
Smart Cache Cached Size : --
Smart Cache Hit Rage(%) :0
Mirror Type : --
Thresholds Percent(%) : 90
Thresholds Switch : Off
Usage Type : Internal
HyperMetro ID(s) : --
Dedup Enabled : --
Compression Enabled : --
Workload Type Name : Oracle_OLTP
Is Clone : No
LUN Clone ID(s) : --
Snapshot Schedule ID : --
Description :
HyperCopy ID(s) : --
HyperCDP Schedule ID : --
LUN consistency group ID : --
Clone ID(s) : --
LUN protection group ID(s) : --
Function Type : Lun
NGUID : 7100040506137346030000a400000001
Create Time : 20XX-XX-XX/11:06:17 UTC+08:00
Vstore ID :0
Workload Type ID :2
2.4.4 Quiz
What are the common performance problems?
[Suggested answer]
From the perspective of users, storage performance is reflected through the application
response time or service processing duration. Common performance problems are as
follows:
1. I/O latency is high, and users clearly perceive the slow response.
2. The IOPS and bandwidth do not meet customers' service requirements.
3. Performance data fluctuates significantly.
Test the two LUNs mounted in the OLTP and OLAP scenarios and check the performance
difference between the LUNs.
Use 8 KB I/Os and 512 KB I/Os to perform tests on /dev/sdb in the scenario where the
LUN is created with the application type of Oracle_OLTP and default block size of 8 KB.
Perform an 8 KB read test.
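The exact command is defined by the lab plan; a representative fio sketch for the 8 KB random read test might look as follows (the job name and option values are illustrative; repeat with --bs=512k for the large-I/O comparison):
fio --name=oltp_8k_read --filename=/dev/sdb --ioengine=libaio --direct=1 \
  --rw=randread --bs=8k --iodepth=32 --numjobs=1 --runtime=60 --group_reporting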
You can view the bandwidth change of the storage system on DeviceManager.
Use 32 KB I/Os and 512 KB I/Os to perform read tests on /dev/sdc in the scenario where
the LUN is created with the application type of Oracle_OLAP and default block size of 32
KB.
Perform a 32 KB read test.
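The 32 KB test is the same sketch with the target device and block size changed (again illustrative; repeat with --bs=512k for the comparison):
fio --name=olap_32k_read --filename=/dev/sdc --ioengine=libaio --direct=1 \
  --rw=randread --bs=32k --iodepth=32 --numjobs=1 --runtime=60 --group_reporting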
Use fio to simulate the impact of I/Os with different read/write ratios on storage
performance in OLTP and OLAP scenarios.
Use 8 KB random I/Os (70% read I/Os and 30% write I/Os) to test the two LUNs.
Perform tests on /dev/sdb.
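A representative invocation for the 70/30 mixed test is sketched below (option values are illustrative and should follow the lab plan; the job name RB, 64 jobs, queue depth 1, and 180 s runtime match the sample output that follows):
fio --name=RB --filename=/dev/sdb --ioengine=libaio --direct=1 \
  --rw=randrw --rwmixread=70 --bs=8k --iodepth=1 --numjobs=64 \
  --runtime=180 --time_based --group_reporting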
Starting 64 processes
Jobs: 64 (f=64): [m(64)][100.0%][r=226MiB/s,w=95.1MiB/s][r=7232,w=3044 IOPS][eta 00m:00s]
RB: (groupid=0, jobs=64): err= 0: pid=20304: Wed Nov 1 16:06:24 2023
read: IOPS=6609, BW=207MiB/s (217MB/s)(36.3GiB/180025msec)
slat (usec): min=3, max=504, avg=10.10, stdev= 8.84
clat (usec): min=963, max=407875, avg=7263.47, stdev=9819.44
lat (usec): min=1076, max=407880, avg=7273.72, stdev=9819.09
clat percentiles (usec):
| 1.00th=[ 1270], 5.00th=[ 2024], 10.00th=[ 2278], 20.00th=[ 2802],
| 30.00th=[ 3294], 40.00th=[ 3884], 50.00th=[ 4555], 60.00th=[ 5538],
| 70.00th=[ 6980], 80.00th=[ 9372], 90.00th=[ 14615], 95.00th=[ 21103],
| 99.00th=[ 40109], 99.50th=[ 50594], 99.90th=[ 96994], 99.95th=[181404],
| 99.99th=[316670]
bw ( KiB/s): min= 320, max= 6080, per=1.56%, avg=3304.46, stdev=750.63, samples=23031
iops : min= 10, max= 190, avg=103.22, stdev=23.46, samples=23031
write: IOPS=2830, BW=88.4MiB/s (92.7MB/s)(15.5GiB/180025msec)
slat (usec): min=3, max=417, avg=10.48, stdev= 8.96
clat (msec): min=2, max=408, avg= 5.61, stdev= 7.11
lat (msec): min=2, max=408, avg= 5.62, stdev= 7.11
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4],
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 5], 60.00th=[ 5],
| 70.00th=[ 6], 80.00th=[ 7], 90.00th=[ 9], 95.00th=[ 12],
| 99.00th=[ 22], 99.50th=[ 29], 99.90th=[ 84], 99.95th=[ 140],
| 99.99th=[ 342]
bw ( KiB/s): min= 64, max= 2880, per=1.56%, avg=1415.28, stdev=406.68, samples=23028
iops : min= 2, max= 90, avg=44.18, stdev=12.71, samples=23028
lat (usec) : 1000=0.01%
lat (msec) : 2=3.31%, 4=38.07%, 10=43.78%, 20=10.57%, 50=3.85%
lat (msec) : 100=0.33%, 250=0.07%, 500=0.02%
cpu : usr=0.05%, sys=0.28%, ctx=1699503, majf=0, minf=2405
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=1189857,509541,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Download Iometer by referring to section 1.2 and perform the following operations to
test the storage device mounted to Windows host DCA-Win:
Decompress the downloaded installation package. The extracted folder contains the
Dynamo.exe and IOmeter.exe files. Run IOmeter.exe as an administrator.
Switch to the Disk Targets tab page, and select disk E:\ to be tested.
Switch to the Access Specifications tab page, select the type for the test (for example, 4
KiB; 25% Read; 0% random), and click Add.
Switch to the Results Display tab page. Set Results Since to Start of Test and Update
Frequency (seconds) to 5.
Click the green flag (Start Tests) button on the toolbar to start the test. In the displayed
dialog box, select the path for saving the test result. Wait several minutes while Iometer
fills the disk with its iobw.tst test file.
View the test information.
Switch to the Access Specifications tab page. Select 4 KiB; 25% Read; 0% random and
click Remove. Click New to create a scenario configuration.
Switch to the Disk Targets tab page, and select disk Z:\ to be tested.
Switch to the Access Specifications tab page, select the type for the test (for example,
select FileSystem as the first scenario to measure the maximum I/O processing capability),
and click Add.
2.5.3 Quiz
How do I test storage performance using 8 KB random I/Os (30% read I/Os and 70%
write I/Os)?
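[Suggested answer]
One approach (a sketch; option values are illustrative) is to reuse the mixed-workload fio command from the previous test with the read ratio inverted, that is, --rwmixread=30 for 30% reads and 70% writes:
fio --name=write_heavy --filename=/dev/sdb --ioengine=libaio --direct=1 \
  --rw=randrw --rwmixread=30 --bs=8k --iodepth=1 --numjobs=64 \
  --runtime=180 --time_based --group_reporting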
Huawei Storage Certification Training
HCIP-Storage
Lab Guide
Issue: 5.5
Overview
This lab guide provides Huawei flash storage O&M and troubleshooting practices to help
trainees consolidate and review previously learned content.
Description
This lab guide introduces the following two lab practices, covering O&M, tool, and
troubleshooting operations.
Lab practice 1: O&M management
Lab practice 2: Troubleshooting
1.1 References
Commands and documents listed in this document are for reference only, and actual
ones may vary with product versions.
Product documentation
Use the open-source software PuTTY to log in to a terminal. You can use the common domain name
(putty.org) of PuTTY to browse or download the desired document or tool.
2. WinSCP
Use WinSCP to transfer files between Windows and Linux OSs. You can use the common domain
name (winscp.net) of WinSCP to browse or download the desired document or tool.
4. eStor
2.1 Introduction
2.1.1 About This Lab Practice
Assume that a company purchases a Huawei all-flash storage device to run multiple
services. In its data center DCA, the company creates a LUN on DCA-Storage and plans
to map the LUN to Linux host DCA-Host. To ensure normal service running, O&M
personnel need to install SmartKit on DCA-Win for O&M management and
troubleshooting.
2.1.2 Objectives
⚫ Implement storage system O&M management.
⚫ Learn how to use SmartKit.
⚫ Perform troubleshooting.
⚫ Networking topology
Site  Device  Management IP Address  Storage IP Address  Description
The IP addresses used in this lab are for reference only. The actual IP addresses are
subject to the lab environment plan.
⚫ By runtime: Displays the Invalid Date of each license. If licenses are controlled by
runtime, their Used/Total Capacity is Unlimited or N/A.
⚫ By capacity: Displays the Used/Total Capacity of each license. If licenses are
controlled by capacity, their Invalid Date is Permanent.
Log in to DeviceManager, choose Settings > License Management, and click Back Up
License.
The license file is downloaded and saved in the save path set in the browser.
----End
The Create User dialog box is displayed. Set user information as follows:
Type: Local user
Username/Password: user-defined
Role: Administrator
Retain the default values for other parameters and click OK.
Once the user is created, log out and log in as the new user. After confirming that the
new user can access the system, log out and log in again as user admin.
Log in to DeviceManager, choose Settings > User and Security > Security Policies, and
click Modify on the right of Account Policy.
Click Advanced to expand the advanced settings of Account Policy. Set the parameters
as follows:
Set Complexity under Password Policy to A password must contain special characters,
uppercase letters, lowercase letters, and digits.
Retain the default values for other parameters and click Save.
Then, you can create a user by referring to section "Creating a Local User" and verify that
the password policy takes effect.
Log in to DeviceManager. Choose Settings > User and Security > Security Policies, and
click Modify on the right of Login Policy.
Click Advanced to expand the advanced settings of Login Policy. Set the parameters as
follows:
Automatic Unlock In: 3 minutes
Retain the default values for other parameters and click Save.
After the operation is complete, log out of the system and log in again. Enter an incorrect
password three consecutive times to verify that the account is automatically locked. Wait
for 3 minutes and log in again to verify that the account is automatically unlocked.
----End
Select the simulated alarm from the list box and click Clear. In the Clear Alarm dialog
box, click OK.
2.3 SmartKit
2.3.1 Installing SmartKit
Double-click the installation software to install SmartKit.
The Setup page is displayed. Click Next.
On the License Agreement page that is displayed, select I accept the agreement and
click Next.
On the Role Selection page that is displayed, select Install just for me or Install for all
users and click Next.
After the installation is complete, the Completing the SmartKit Setup Wizard page is
displayed. Click Finish.
If you are using SmartKit for the first time, a usage wizard is displayed that introduces the
major functions of SmartKit.
The Welcome to SmartKit page is displayed in the wizard, including a description of the
application fields and paths.
Go to the next wizard page. It displays the functions compatible with SmartKit's
predecessor, OceanStor Toolkit, and introduces the O&M functions for service scenarios,
covering the storage, server, and cloud computing fields.
Go to the next wizard page. The Function Management page is displayed. You can install
or upgrade the required functions with one click. If the network is disconnected, you can
export the function package from another SmartKit environment and import it into the
current environment to perform an offline installation or upgrade.
Go to the next wizard page. The Devices page is displayed. Once you add a device to the
device list, you do not need to add it again for later operations. You can also add devices
in batches.
Go to the next wizard page. The Update reminder page is displayed. If a new version is
available, or if user authentication has expired or is about to expire, the system sends a
notification. You can set the notification frequency in the advanced settings.
Click Start.
On the main page, click the Devices tab and click Add.
The Add Device Step 2-1: Basic Information dialog box is displayed. Enter the basic
information as follows:
IP Address: 192.168.0.30
Select No Proxy under Select Proxy. Click Next.
Enter configuration information, including the username, password, and port of the
device. Click Finish.
After the storage device is added, it will be displayed in the device list.
On the Inspection Wizard Step 5-1: Welcome page, select Routine inspection and click
Next.
On the Inspection Wizard Step 5-2: Select Devices page, select a storage device and
click Next.
On the Inspection Wizard Step 5-3: Select Check Items page, select the required check
items and click Next.
In the Inspection Wizard Step 5-4: Set Check Policy dialog box, set a path for saving
the result file and click Next.
The Inspection Wizard Step 5-5: Start Inspection page is displayed. During the
inspection, you can view the progress and results of the executed check items. When the
inspection is complete, click a check item to view its details.
After the inspection is complete, you can view the inspection result and open the report
for analysis.
----End
Select System Log and click Recent Log, All Logs, or Key Log. In the warning dialog box
that is displayed, select I have read and understand the consequences associated with
performing this operation. and click OK. The system starts collecting logs.
⚫ Select Antivirus Log and click Export. In the warning dialog box that is displayed,
select I have read and understand the consequences associated with performing
this operation. and click OK to export the antivirus scanning information of the
device.
⚫ Select FTDS Log and click Export. In the warning dialog box that is displayed, select I
have read and understand the consequences associated with performing this
operation. and click OK to export FTDS logs of the device.
⚫ Select Performance File, set a file date range, and click Export. In the warning
dialog box that is displayed, select I have read and understand the consequences
associated with performing this operation. and click OK to export the performance
file of the device.
3.2.2 Troubleshooting
3.2.2.1 Symptom
Log in to DeviceManager and choose System > Hardware > Devices. On the front view of
the 2 U controller enclosure or a disk enclosure marked with an exclamation mark (!),
click the identified disk module. You can see that Health Status of the disk module is
Faulty.
After the operation is complete, check whether the alarm is cleared and Health Status of
the disk module is Normal.
Log in to DeviceManager, choose System > Hardware > Devices, and click the
view-switching icon. The system switches to the rear view of the controller enclosure.
Click the IOM4 interface module of controller A, that is, CTE0.A.IOM4.
In the interface module dialog box that is displayed, choose Operation > Power Off.
In the high-risk warning dialog box that is displayed, confirm the warning information
and click OK.
3.3.2 Troubleshooting
3.3.2.1 Symptom
Log in to DeviceManager and choose System > Hardware > Devices. Click the
view-switching icon to switch to the rear view of the storage device. Click the interface
module in the yellow square. Running Status of the interface module is Powered off.
Log in to DeviceManager, choose System > Hardware > Devices, and click the
view-switching icon to switch to the rear view of the controller enclosure. Click the
interface module in the yellow square. The Interface Module dialog box is displayed.
Choose Operation > Power On.
The Success dialog box is displayed, indicating that the operation is successful.
After the operation is complete, check whether Running Status of the interface module
is Running.